Model Information

This model is a fine-tuned version of the meta-llama/Llama-3.2-1B-Instruct large language model.

Fine-tuning was performed with PEFT (Parameter-Efficient Fine-Tuning) using LoRA (Low-Rank Adaptation) on the chat subset of the nvidia/Llama-Nemotron-Post-Training-Dataset.

LoRA Configuration:

from peft import LoraConfig

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=32,                  # LoRA rank
    lora_alpha=32,         # LoRA scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj"],  # attention projections that receive LoRA adapters
    modules_to_save=["lm_head", "embed_tokens"],    # trained in full and saved alongside the adapters
)
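
For reference, this is a minimal sketch of how such a configuration can be attached to the base model before training; the data pipeline and Trainer setup are omitted, so treat it as an illustration rather than the exact training script used:

from peft import get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
peft_model = get_peft_model(base, lora_config)   # wrap the base model with LoRA adapters
peft_model.print_trainable_parameters()          # only the adapters and modules_to_save remain trainable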

Use with Transformers

pip install transformers torch

from transformers import AutoModelForCausalLM, AutoTokenizer

# load the fine-tuned model and tokenizer onto the GPU
model = AutoModelForCausalLM.from_pretrained("suwesh/llamatron-1B-peft").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")

input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
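
Because this is a chat-tuned model, prompts generally work better when wrapped in the Llama 3 chat template rather than passed as raw text. A minimal sketch reusing the model and tokenizer loaded above; the "detailed thinking on" system prompt follows the convention shown later in this card:

messages = [
    {"role": "system", "content": "detailed thinking on"},
    {"role": "user", "content": "Hello, how are you?"},
]
# apply_chat_template inserts the Llama 3 chat special tokens and the assistant header
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=512)
# decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)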

Or with the pipeline API

import torch
from transformers import AutoTokenizer, pipeline

pipe = pipeline(
    "text-generation",
    model="suwesh/llamatron-1B-peft",
    tokenizer=AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft"),
    torch_dtype=torch.bfloat16,
    device="cuda",
)
def to_model(input_text, system_message):
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": input_text}
    ]
    outputs = pipe(
        messages,
        max_new_tokens=512,
        temperature=0.6,
        top_p=0.95
    )
    return outputs[0]["generated_text"][-1]["content"]  # the last message in the returned chat is the assistant reply

response = to_model("Write a joke about windows.", "detailed thinking on")
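
The system prompts "detailed thinking on" / "detailed thinking off" come from the Nemotron post-training data, where they toggle long reasoning traces; how strongly this fine-tune preserves that switch has not been verified here, but it can be tried the same way:

response = to_model("Write a joke about windows.", "detailed thinking off")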

Load an adapter checkpoint for further fine-tuning

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")
model = PeftModel.from_pretrained(
    base_model, "suwesh/llamatron-1B-peft", subfolder="checkpoint-11000", is_trainable=True
)  # is_trainable=True keeps the adapter weights trainable so fine-tuning can resume
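
If a standalone model is preferred instead (e.g., for serving without PEFT installed), the loaded adapter can be folded back into the base weights; the output directory name below is only an example:

merged_model = model.merge_and_unload()   # merge the LoRA deltas into the base weights
merged_model.save_pretrained("llamatron-1B-merged")
tokenizer.save_pretrained("llamatron-1B-merged")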

Training details

The LoRA adapter was trained on the chat subset of nvidia/Llama-Nemotron-Post-Training-Dataset with the configuration shown above; an intermediate checkpoint (checkpoint-11000) is available as a subfolder of the repository. The published weights total roughly 1.5B parameters stored as F32 safetensors.