lambda-1v-1b — Lightweight Math & Logic Reasoning Model

lambda-1v-1b is a compact, fine-tuned language model built on top of TinyLlama-1.1B-Chat-v1.0, designed for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast inference with minimal computational requirements.


Model Architecture

  • Base Model: TinyLlama-1.1B-Chat-v1.0
  • Fine-Tuning Strategy: LoRA (applied to q_proj and v_proj)
  • Quantization: 4-bit NF4 (via a bitsandbytes bnb_config)
  • Dataset: HuggingFaceH4/MATH — subset: number_theory
  • Max Tokens per Sample: 512
  • Batch Size: 20 per device
  • Epochs: 3
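
The exact training script is not published with this card; the sketch below shows one way the configuration above could be wired together with peft and bitsandbytes. The LoRA rank, alpha, dropout, and output directory are assumptions; the quantization settings, target modules, batch size, and epoch count follow the list above.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the base model (the bnb_config mentioned above)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention query/value projections, as listed above
lora_config = LoraConfig(
    r=8,                                   # rank: assumed, not stated in the card
    lora_alpha=16,                         # assumed
    lora_dropout=0.05,                     # assumed
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Training hyperparameters from the list above
training_args = TrainingArguments(
    output_dir="lambda-1v-1b-lora",        # assumed output path
    per_device_train_batch_size=20,
    num_train_epochs=3,
)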

Example Usage (Python)

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("lxcorp/lambda-1v-1b")
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")

# Portuguese prompt: "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt")

# Generate up to 100 new tokens and decode the result to plain text
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
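
For deployment on constrained hardware, the model can also be loaded with 4-bit NF4 quantization through bitsandbytes. This is a minimal sketch rather than an official recipe; it assumes a CUDA-capable GPU with the bitsandbytes package installed, and the prompt and generation settings are illustrative only.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config (assumed settings; adjust to your hardware)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "lxcorp/lambda-1v-1b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lxcorp/lambda-1v-1b")

inputs = tokenizer("Prove that 17 is a prime number.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))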

About λχ Corp.

λχ Corp. is an indie tech corporation founded by Marius Jabami in Angola, focused on AI-driven educational tools, robotics, and lightweight software solutions. The lambdAI model is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.

Stay updated on the project at lxcorp.ai and huggingface.co/lxcorp.


Developed with care by Marius Jabami — Powered by ambition, faith, and open source.


