SmolLM2-360M-Eagle

SmolLM2-360M-Eagle is a version of the SmolLM2-360M model fine-tuned on the EagleSFT dataset, designed to improve the model's capabilities in both Russian and English language tasks. A GGUF version of this model is available at: SmolLM2-360M-Eagle-GGUF

Model Description

SmolLM2-360M-Eagle is a lightweight language model that has been fine-tuned specifically to handle bilingual content. This fine-tuning extends the base model's capabilities to better understand and generate content in Russian while maintaining its English competency.
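A minimal inference sketch using the `transformers` library is shown below. The repository id comes from this card; the plain-text prompt format is an assumption, since the card does not document a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nyuuzyou/SmolLM2-360M-Eagle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Plain-text prompt; whether the SFT run used a chat template is unknown,
# so no template is applied here.
prompt = "Translate to English: Привет, как дела?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```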

Base Model

The model is built upon SmolLM2-360M, a compact language model with 360 million parameters that offers a good balance between performance and resource requirements.

Fine-tuning Details

Dataset

The model was fine-tuned on the EagleSFT dataset, which contains 536,231 pairs of human questions and machine-generated responses in both Russian and English. The dataset focuses primarily on educational content but also includes everyday questions and casual conversations.
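For reference, a loading sketch with the `datasets` library; the repository id `nyuuzyou/EagleSFT` and the column layout are assumptions, since the card does not spell them out.

```python
from datasets import load_dataset

# Hypothetical repository id -- check the actual EagleSFT dataset card.
ds = load_dataset("nyuuzyou/EagleSFT", split="train")
print(ds.num_rows)  # expected: 536,231 question/response pairs
print(ds[0])        # inspect one record to confirm the column names
```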

Environmental Impact

  • Training duration: 41h 14m total in Saint-Petersburg, Russia
  • Power consumption: 380W average
  • Hardware: 1 x RTX 4090
  • Carbon emissions: Approximately 5.48 kg CO2eq
    • Calculated based on average power consumption and average CO2eq/kWh (350g) in this region
    • Saint-Petersburg: 380W * 41.23h * 350g/kWh = 5.48 kg CO2eq
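The estimate can be reproduced directly from the figures above:

```python
# Reproduce the CO2eq estimate from the numbers listed above.
power_kw = 0.380        # average power draw, kW
hours = 41 + 14 / 60    # 41h 14m ≈ 41.23 h
intensity_kg = 0.350    # regional grid intensity, kg CO2eq per kWh

energy_kwh = power_kw * hours              # ≈ 15.67 kWh
emissions_kg = energy_kwh * intensity_kg   # ≈ 5.48 kg CO2eq
print(f"{energy_kwh:.2f} kWh -> {emissions_kg:.2f} kg CO2eq")
```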

Training Parameters

  • Training approach: Supervised Fine-Tuning (SFT)
  • Training epochs: 2
  • Learning rate: 3.0e-04
  • Precision: bfloat16
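A sketch of how these hyperparameters might map onto TRL's `SFTTrainer`; the actual training script, batch size, and scheduler are not documented here, and the dataset repository id is a hypothetical placeholder.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("nyuuzyou/EagleSFT", split="train")  # hypothetical id

# Only num_train_epochs, learning_rate, and bf16 come from this card;
# everything else is left at TRL defaults.
args = SFTConfig(
    output_dir="smollm2-360m-eagle",
    num_train_epochs=2,
    learning_rate=3.0e-4,
    bf16=True,
)
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M",  # base model
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```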

Limitations and Capabilities

It's important to note that this model received no additional pre-training; it only underwent SFT on a relatively small number of tokens. As a result, the model has far less data to rely on when answering in Russian than it does in English.

Despite these extensive limitations, the model shows minimal improvements in:

  • Basic recognition of Russian prompts (though with frequent misunderstandings)
  • Handling simple tasks formatted as "{question in Russian}, answer in English" (see the sketch after this list)
  • Basic translation from Russian to English (though quality remains poor)

The model's minimal understanding of Russian comes solely from the supervised fine-tuning process, without any proper pre-training on a Russian text corpus, resulting in severely limited capabilities.

Experimental Capabilities

The model demonstrates some experimental capabilities, but with significant limitations:

  • Basic Russian text understanding (with frequent errors and misinterpretations)
  • Limited question answering in Russian (quality significantly lower than English)
  • Basic Russian to English translation (better than English to Russian)

Limitations

  • NOT SUITABLE FOR PRODUCTION USE: This model should not be used in production environments in any form
  • Extremely limited knowledge base for Russian language due to lack of pre-training with Russian text
  • Unoptimized tokenizer performance for Russian results in inefficient token usage (see the sketch after this list)
  • Output quality in Russian will be unsatisfactory for most use cases
  • May produce inaccurate, inconsistent, or inappropriate responses, especially in Russian
  • All limitations of the base SmolLM2-360M model still apply
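The tokenizer point can be checked empirically; a short sketch comparing token counts for roughly parallel English and Russian sentences (the sentences themselves are illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("nyuuzyou/SmolLM2-360M-Eagle")

en = "Photosynthesis is the process by which plants turn light into energy."
ru = "Фотосинтез является процессом, с помощью которого растения превращают свет в энергию."

# The base tokenizer was trained mostly on English, so the Russian sentence
# typically splits into far more tokens per word.
print(len(tok(en)["input_ids"]), "tokens (English)")
print(len(tok(ru)["input_ids"]), "tokens (Russian)")
```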