πŸš€ Qwen3-50M C4 Pretrained (FP16) - Notebook Version

A Qwen3-50M model pretrained on the C4 dataset using FP16 precision in a notebook environment.

πŸ“Š Training Results

  • Final Training Loss: 4.0267
  • Final Validation Loss: 4.1206 (see the perplexity note below)
  • Training Samples: 1,000,000
  • Epochs: 3
  • Precision: FP16
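
A validation loss translates directly into perplexity, the standard metric for language-model pretraining: perplexity is the exponential of the per-token cross-entropy loss.

import math

# Validation perplexity from the reported cross-entropy loss
print(math.exp(4.1206))  # β‰ˆ 61.6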

πŸš€ Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Mostafa8Mehrabi/qwen3-50m-c4-final")
model = AutoModelForCausalLM.from_pretrained(
    "Mostafa8Mehrabi/qwen3-50m-c4-final", 
    torch_dtype=torch.float16,
    device_map="auto"
)

# Generate text
prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

πŸ“ Checkpoints

Training checkpoints (also in FP16) are available at: Mostafa8Mehrabi/qwen3-50m-c4-checkpoints
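
If the repository stores checkpoints as subfolders (e.g. a hypothetical checkpoint-1000 directory; check the repository for the actual layout), an individual checkpoint can be loaded with the subfolder argument of from_pretrained:

from transformers import AutoModelForCausalLM
import torch

# "checkpoint-1000" is a placeholder; substitute a real folder name
# from the checkpoints repository.
model = AutoModelForCausalLM.from_pretrained(
    "Mostafa8Mehrabi/qwen3-50m-c4-checkpoints",
    subfolder="checkpoint-1000",
    torch_dtype=torch.float16,
)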

πŸ”§ Training Environment

This model was trained in a notebook environment with the following configuration; a sketch of a matching Trainer setup follows the list:

  • Batch Size: 128
  • Learning Rate: 5e-05
  • Max Length: 512
  • Number of Processes: 8
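
The training script itself is not included here, but the following is a minimal sketch of a Hugging Face TrainingArguments setup consistent with the settings above. The per-device batch split (16 per device Γ— 8 processes = 128) and the output directory are assumptions, and the 512-token max length would be applied during tokenization rather than here.

from transformers import TrainingArguments

# Hypothetical configuration mirroring the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="qwen3-50m-c4",       # placeholder path
    per_device_train_batch_size=16,  # assumed split: 16 x 8 processes = 128
    learning_rate=5e-5,
    num_train_epochs=3,
    fp16=True,                       # FP16 mixed precision
)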