
Pisces-QwenR1-1.5B

Pisces-QwenR1-1.5B is a small reasoning model that uses reinforcement learning (RL) to enhance the reasoning capabilities of edge-deployed large language models (LLMs). Fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B, it offers lightweight yet capable performance in mathematical reasoning, coding, and error correction, making it well suited to edge deployments and on-device intelligent agents.

Key Improvements

  1. Mathematical Reasoning Enhancements:
    Equipped with refined capabilities in mathematical logic, symbolic computation, step-by-step problem-solving, and numerical accuracy — even in resource-constrained environments.

  2. Coding and Debugging Proficiency:
    Capable of generating, understanding, and debugging code in Python, JavaScript, C++, and other languages, making it a versatile assistant for lightweight coding tasks and educational tools.

  3. Intelligent Error Correction:
    Can identify logical inconsistencies, detect structural errors in formats such as JSON and XML, and offer corrective suggestions, optimized for fast inference and low-latency feedback (see the JSON-repair sketch after this list).

  4. Efficient Instruction Following:
    Fine-tuned to accurately follow multi-step and nested instructions, delivering reliable outputs across compact prompts and conversations.

  5. Edge-Optimized Context Handling:
    Supports long-context inputs up to 128K tokens and outputs up to 8K tokens, balancing context-awareness with memory efficiency for edge devices and embedded systems.
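
As an illustration of the error-correction behavior in item 3, here is a minimal sketch that asks the model to repair a malformed JSON snippet via the Transformers text-generation pipeline. The broken snippet and the system prompt are illustrative assumptions, not canonical examples shipped with the model.

from transformers import pipeline

# Minimal sketch: ask the model to repair a malformed JSON snippet.
pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Pisces-QwenR1-1.5B",
    torch_dtype="auto",
    device_map="auto"
)

broken = '{"name": "sensor-01", "readings": [21.5, 22.1,, "unit": "C"}'
messages = [
    {"role": "system", "content": "You fix structural errors in data formats such as JSON and XML."},
    {"role": "user", "content": f"This JSON is invalid. Return a corrected version:\n{broken}"}
]

outputs = pipe(messages, max_new_tokens=256)
# With chat-style input, generated_text is the updated message list;
# the last entry is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])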

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pisces-QwenR1-1.5B"

# Load the model in its native precision and let Accelerate place it
# on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between breadth-first search and depth-first search with Python code examples."
messages = [
    {"role": "system", "content": "You are a knowledgeable assistant skilled in reasoning, coding, and explanation."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
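
For low-latency feedback on edge devices, token-by-token streaming avoids waiting for the full completion. A minimal sketch using Transformers' TextStreamer, reusing the model, tokenizer, and model_inputs from the quickstart above:

from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)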

Intended Use

  1. Edge Inference and Reasoning:
    Ideal for reasoning and structured output generation on edge devices such as mobile phones, embedded systems, and low-power AI modules (a quantized-loading sketch follows this list).

  2. Compact Programming Assistant:
    Efficient for lightweight coding tasks, debugging, and educational environments where smaller models are preferred.

  3. Mathematical Toolkits:
    Solves mathematical problems and logical reasoning challenges with minimal resource overhead.

  4. Conversational Agents:
    Enables intelligent, context-aware bots and virtual assistants in constrained hardware setups.

  5. Multilingual Support & Translation:
    Useful for lightweight multilingual inference and content generation across various languages.

  6. Structured Content Generation:
    Outputs well-formatted data such as JSON, XML, tables, and Markdown — suitable for embedded AI use cases.
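
For the edge scenarios in item 1, one common way to shrink the memory footprint (an assumption here, not an official deployment recipe for this model) is 4-bit loading via bitsandbytes:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization substantially reduces memory use versus full
# precision, at a modest cost in accuracy.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Pisces-QwenR1-1.5B",
    quantization_config=bnb_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Pisces-QwenR1-1.5B")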

Limitations

  1. Compute Constraints:
    While optimized for edge use, the model still requires adequate hardware (e.g., modern GPUs or NPUs) for efficient large-context processing.

  2. Knowledge Cutoff:
    No real-time access to current events or external data beyond its training.

  3. Potential Biases:
    May exhibit inherited biases or inaccuracies from training data.

  4. Variability in Creative Output:
    Creative writing or abstract tasks may yield variable consistency or style.

  5. Prompt Sensitivity:
    Responses depend heavily on how well prompts are structured — minor changes can impact output significantly.
