---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: QwenThinker0.5B
  results: []
datasets:
- open-thoughts/open-thoughts-114k
---
# QwenThinker0.5B
This model is a fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct on the OpenThoughts-114k dataset.
The dataset was built by distilling DeepSeek-R1 using the data pipeline available on GitHub. More information can be found on the dataset card for open-thoughts/open-thoughts-114k.
This model was trained with LLaMA-Factory.
## Training hyperparameters
- global_batch_size: 288
- learning_rate: 1e-05
- num_epochs: 1.0
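For reference, the hyperparameters above could be expressed as a LLaMA-Factory SFT config. The sketch below is illustrative, not the original run's config: the per-device batch size, gradient accumulation steps, and device count (4 × 8 × 9 = 288 global batch) are assumptions, as are the dataset registration name, template, and output path.

```yaml
### model
model_name_or_path: Qwen/Qwen2.5-0.5B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full  # full-parameter fine-tuning, matching the "full" tag

### dataset
dataset: open_thoughts  # assumed name in LLaMA-Factory's dataset_info.json
template: qwen

### output
output_dir: saves/qwen-thinker-0.5b  # assumed path

### train
per_device_train_batch_size: 4   # assumed: 4 x 8 GPUs x 9 accumulation = 288
gradient_accumulation_steps: 9
learning_rate: 1.0e-5
num_train_epochs: 1.0
```

A config like this would be launched with `llamafactory-cli train <config>.yaml`.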