Pisces-QwenR1-1.5B
Pisces-QwenR1-1.5B is a small reasoning model that strengthens the reasoning capabilities of large language models (LLMs) at the edge using reinforcement learning (RL). Fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B, it offers lightweight yet capable performance in mathematical reasoning, coding, and error correction, making it well suited to edge deployments and on-device intelligent agents.
Key Improvements
Mathematical Reasoning Enhancements:
Refined capabilities in mathematical logic, symbolic computation, step-by-step problem solving, and numerical accuracy, even in resource-constrained environments.

Coding and Debugging Proficiency:
Generates, understands, and debugs code in Python, JavaScript, C++, and other languages, making it a versatile assistant for lightweight coding tasks and educational tools.

Intelligent Error Correction:
Identifies logical inconsistencies, detects structural errors in formats such as JSON and XML, and offers corrective suggestions, optimized for fast inference and low-latency feedback (a minimal sketch follows this list).

Efficient Instruction Following:
Fine-tuned to follow multi-step and nested instructions accurately, delivering reliable outputs across compact prompts and conversations.

Edge-Optimized Context Handling:
Supports long-context inputs of up to 128K tokens and outputs of up to 8K tokens, balancing context awareness with memory efficiency on edge devices and embedded systems.
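As a minimal, self-contained sketch of the error-correction capability, the snippet below loads the model through the transformers pipeline API, asks it to repair malformed JSON, and validates the reply with Python's json module. The prompt wording, generation settings, and the handling of the reasoning trace are illustrative assumptions, not settings documented for this model.

import json
from transformers import pipeline

# Load the model via the high-level pipeline API (illustrative settings).
pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Pisces-QwenR1-1.5B",
    torch_dtype="auto",
    device_map="auto",
)

broken = '{"name": "sensor-7", "reading": 42,}'  # trailing comma: invalid JSON
messages = [
    {"role": "user",
     "content": "The following JSON is malformed. Reply with only the corrected JSON:\n" + broken},
]

# The chat-format pipeline returns the full conversation; the last message is the reply.
reply = pipe(messages, max_new_tokens=512)[0]["generated_text"][-1]["content"]

# R1-style reasoning models often emit a <think>...</think> block first; keep what follows.
answer = reply.split("</think>")[-1].strip()

try:
    print("repaired:", json.loads(answer))  # raises JSONDecodeError (a ValueError) if still invalid
except ValueError:
    print("output was not valid JSON; retry or tighten the prompt")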
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pisces-QwenR1-1.5B"

# Load the model and tokenizer, letting transformers pick the dtype and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between breadth-first search and depth-first search with Python code examples."
messages = [
    {"role": "system", "content": "You are a knowledgeable assistant skilled in reasoning, coding, and explanation."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template, then tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
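For interactive or edge scenarios where time-to-first-token matters, the same model can stream tokens as they are generated. Below is a minimal sketch using transformers' TextStreamer, reusing model, tokenizer, and model_inputs from the Quickstart above; the streamer settings are illustrative.

from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,  # tokens appear incrementally instead of all at once
)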
Intended Use
Edge Inference and Reasoning:
Ideal for reasoning and structured output generation on edge devices such as mobile phones, embedded systems, and low-power AI modules.

Compact Programming Assistant:
Efficient for lightweight coding tasks, debugging, and educational environments where smaller models are preferred.

Mathematical Toolkits:
Solves mathematical problems and logical reasoning challenges with minimal resource overhead.

Conversational Agents:
Enables intelligent, context-aware bots and virtual assistants on constrained hardware (a minimal multi-turn loop is sketched after this list).

Multilingual Support & Translation:
Useful for lightweight multilingual inference and content generation across languages.

Structured Content Generation:
Outputs well-formatted data such as JSON, XML, tables, and Markdown, suitable for embedded AI use cases.
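For the conversational-agent use case, a context-aware bot can be built by carrying the message history across turns. The sketch below reuses model and tokenizer from the Quickstart above; the system prompt, helper name, and loop structure are illustrative assumptions.

# Minimal multi-turn chat loop: append each reply to the history so the model
# stays context-aware across turns. Reuses `model` and `tokenizer` from above.
history = [{"role": "system", "content": "You are a concise, helpful assistant."}]

def chat(user_message: str, max_new_tokens: int = 512) -> str:
    history.append({"role": "user", "content": user_message})
    text = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    # Note: R1-style replies include the reasoning trace; trim after "</think>"
    # if you only want final answers in the stored history.
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is 17 * 24? Show your steps."))
print(chat("Now divide that result by 8."))  # second turn sees the first answer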
Limitations
Compute Constraints:
Although optimized for edge use, the model still needs adequate hardware (e.g., modern GPUs or NPUs) for efficient large-context processing.

Knowledge Cutoff:
No real-time access to current events or external data beyond its training corpus.

Potential Biases:
May exhibit biases or inaccuracies inherited from its training data.

Variability in Creative Output:
Creative writing and abstract tasks may vary in consistency and style.

Prompt Sensitivity:
Output quality depends heavily on prompt structure; minor changes in wording can significantly affect results.
Base Model
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B