---
language:
- en
- ru
license: apache-2.0
pipeline_tag: text-generation
base_model: HuggingFaceTB/SmolLM2-360M
datasets:
- nyuuzyou/EagleSFT
co2_eq_emissions:
  emissions: 5484
  source: "Calculated based on power consumption and regional carbon intensity"
  training_type: "fine-tuning"
  geographical_location: "Saint-Petersburg, Russia"
  hardware_used: "1 RTX 4090 GPU"
---
|
# SmolLM2-360M-Eagle

SmolLM2-360M-Eagle is a fine-tuned version of the [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) model on the [EagleSFT](https://huggingface.co/datasets/nyuuzyou/EagleSFT) dataset, designed to improve the model's capabilities in both Russian and English language tasks.

A GGUF version of this model is available at [SmolLM2-360M-Eagle-GGUF](https://huggingface.co/nyuuzyou/SmolLM2-360M-Eagle-GGUF).

## Model Description

SmolLM2-360M-Eagle is a lightweight language model that has been fine-tuned specifically to handle bilingual content. This fine-tuning extends the base model's capabilities to better understand and generate content in Russian while maintaining its English competency.
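A minimal usage sketch with the `transformers` library is shown below. The repository id `nyuuzyou/SmolLM2-360M-Eagle` is assumed from the model name and the GGUF link above; adjust it if the weights are hosted elsewhere.

```python
# Minimal usage sketch. The repository id "nyuuzyou/SmolLM2-360M-Eagle"
# is an assumption inferred from the model name and the GGUF link above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nyuuzyou/SmolLM2-360M-Eagle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The model was fine-tuned on question/answer pairs, so a plain question works.
prompt = "Explain photosynthesis in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```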
|
### Base Model

The model is built upon SmolLM2-360M, a compact language model with 360 million parameters that offers a good balance between performance and resource requirements.

## Fine-tuning Details

### Dataset

The model was fine-tuned on the EagleSFT dataset, which contains 536,231 pairs of human questions and machine-generated responses in both Russian and English. The dataset primarily focuses on educational content but also includes everyday questions and casual conversations.
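The dataset can be pulled directly from the Hub. A hedged sketch follows; the `"train"` split name and the column layout are assumptions, so check the dataset card for the actual schema.

```python
# Hedged sketch: inspecting EagleSFT with the datasets library.
# The "train" split name and column layout are assumptions; see
# https://huggingface.co/datasets/nyuuzyou/EagleSFT for the actual schema.
from datasets import load_dataset

ds = load_dataset("nyuuzyou/EagleSFT", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one question/response pair
```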
|
### Environmental Impact

- **Training duration**: 41h 14m total in Saint-Petersburg, Russia
- **Power consumption**: 380W average
- **Hardware**: 1 x RTX 4090
- **Carbon emissions**: Approximately 5.48 kg CO2eq
  - Calculated from the average power consumption and the average regional carbon intensity (350 g CO2eq/kWh)
  - Saint-Petersburg: 380 W × 41.23 h × 350 g/kWh ≈ 5.48 kg CO2eq
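The same estimate, reproduced in a few lines of Python using only the figures above:

```python
# Reproducing the emissions estimate from the figures above.
power_kw = 0.380      # 380 W average draw
hours = 41 + 14 / 60  # 41h 14m of training
intensity = 350       # regional average, g CO2eq per kWh
emissions_kg = power_kw * hours * intensity / 1000
print(f"{emissions_kg:.2f} kg CO2eq")  # ~5.48
```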
|
### Training Parameters

- **Training approach**: Supervised Fine-Tuning (SFT)
- **Training epochs**: 2
- **Learning rate**: 3.0e-04
- **Precision**: bfloat16
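As a rough illustration, these hyperparameters could map onto a TRL `SFTTrainer` run as sketched below. The actual training script was not published, so everything beyond the four listed values (the output directory, dataset preparation, and any defaults) is an assumption.

```python
# Hedged sketch of an SFT run with the listed hyperparameters using TRL.
# Only the epochs, learning rate, and bf16 precision come from this card;
# output_dir and dataset handling are illustrative assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumes the dataset exposes a format SFTTrainer can consume;
# adapt to the actual EagleSFT schema.
train_dataset = load_dataset("nyuuzyou/EagleSFT", split="train")

config = SFTConfig(
    output_dir="SmolLM2-360M-Eagle",
    num_train_epochs=2,    # Training epochs: 2
    learning_rate=3.0e-4,  # Learning rate: 3.0e-04
    bf16=True,             # Precision: bfloat16
)
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M",  # base model
    train_dataset=train_dataset,
    args=config,
)
trainer.train()
```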
|
## Limitations and Capabilities

It is important to note that this model received no additional pre-training and only underwent SFT on a relatively small number of tokens. As a result, the model has far less data to rely on when answering in Russian than it does in English.

Despite these extensive limitations, the model shows minimal improvement in:

- Basic recognition of Russian prompts (though with frequent misunderstandings)
- Handling simple tasks formatted as "{question in Russian}, answer in English" (see the sketch after this list)
- Basic translation from Russian to English (though quality remains poor)
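A hedged sketch of that prompt pattern using the `pipeline` API; the repository id is an assumption, as above.

```python
# Hedged sketch of the "{question in Russian}, answer in English" pattern.
# The repository id "nyuuzyou/SmolLM2-360M-Eagle" is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="nyuuzyou/SmolLM2-360M-Eagle")
# "What is photosynthesis?" asked in Russian, answer requested in English.
prompt = "Что такое фотосинтез? Answer in English."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```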

The model's minimal understanding of Russian comes solely from the supervised fine-tuning process, without any proper pre-training on a Russian text corpus, resulting in severely limited capabilities.
|
## Experimental Capabilities

The model demonstrates some experimental capabilities, but with significant limitations:

- Basic Russian text understanding (with frequent errors and misinterpretations)
- Limited question answering in Russian (quality significantly lower than in English)
- Basic Russian-to-English translation (better than English-to-Russian)
|
## Limitations

- **NOT SUITABLE FOR PRODUCTION USE**: This model should not be used in production environments in any form
- Extremely limited knowledge base for Russian due to the lack of pre-training on Russian text
- Unoptimized tokenizer performance for Russian results in inefficient token usage
- Output quality in Russian will be unsatisfactory for most use cases
- May produce inaccurate, inconsistent, or inappropriate responses, especially in Russian
- All limitations of the base SmolLM2-360M model still apply