lambda-1v-1b — Lightweight Math & Logic Reasoning Model
lambda-1v-1b is a compact, fine-tuned language model built on top of TinyLlama-1.1B-Chat-v1.0, designed for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast performance with minimal computational requirements.
Model Architecture
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-Tuning Strategy: LoRA (applied to `q_proj` and `v_proj`)
- Quantization: 4-bit NF4 (via `bnb_config`)
- Dataset: `HuggingFaceH4/MATH`, subset `number_theory`
- Max Tokens per Sample: 512
- Batch Size: 20 per device
- Epochs: 3
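The snippet below is a minimal sketch of how this setup could be reproduced with `transformers`, `peft`, and `bitsandbytes`. Hyperparameters not listed above (LoRA rank, alpha, learning rate) are assumptions, not the values used to train the released model.

```python
# Sketch of the fine-tuning configuration described above (not the exact training script).
import torch
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model

# NF4 is a 4-bit quantization format provided by bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config)

# LoRA applied only to the attention query/value projections, as listed in the card.
lora_config = LoraConfig(
    r=8,                                  # assumed rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="lambda-1v-1b-lora",
    per_device_train_batch_size=20,       # batch size from the card
    num_train_epochs=3,                   # epochs from the card
)
```

Targeting only `q_proj` and `v_proj` keeps the number of trainable parameters small, which is what makes fine-tuning a 1.1B-parameter model practical on modest hardware.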
Example Usage (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lxcorp/lambda-1v-1b")
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")

# Portuguese prompt: "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
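For lower memory use, the model can also be loaded in 4-bit. This is a sketch rather than an official recommendation from the card, and it assumes `bitsandbytes` is installed and a CUDA-capable GPU is available.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 loading to reduce memory footprint (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "lxcorp/lambda-1v-1b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")
```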
About λχ Corp.
λχ Corp. is an indie tech corporation founded by Marius Jabami in Angola, focused on AI-driven educational tools, robotics, and lightweight software solutions. The lambdAI model is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.
Stay updated on the project at lxcorp.ai and huggingface.co/lxcorp.
Developed with care by Marius Jabami — Powered by ambition, faith, and open source.