
Omega-Qwen3-Atom-8B

Omega-Qwen3-Atom-8B is a powerful 8B-parameter model fine-tuned from Qwen3-8B on the curated Open-Omega-Atom-1.5M dataset, optimized for mathematical and scientific reasoning. It excels at symbolic processing, scientific problem solving, and structured output generation, making it a high-performance model for researchers, educators, and technical developers working in computational and analytical domains.

Key Features

  1. Math & Science-Centric Reasoning: Fine-tuned on the Open-Omega-Atom-1.5M dataset, built from high-quality math, science, and symbolic reasoning tasks; ideal for analytical domains including algebra, calculus, physics, and chemistry.

  2. Scientific Concept Breakdown: Explains theories, derivations, and concepts across STEM fields with clarity; solves equations step by step, handles formula-based questions, and provides interpretive insights.

  3. Symbolic Computation & Chain-of-Thought: Supports multi-step reasoning, symbolic derivations, and proof-based problem solving with a strong focus on accuracy and transparency.

  4. Structured Output Generation: Produces precisely formatted output in LaTeX, Markdown, JSON, and YAML for scientific writing, educational materials, and data pipeline integration.

  5. Optimized for Efficient Scientific Workflows: Although an 8B model, it is optimized for offline inference, research clusters, and GPU workstations that need high symbolic precision and performance.
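For the structured output feature, the target format is typically requested in the prompt itself rather than through any dedicated API; a minimal sketch of a message list asking for JSON output (the schema and prompt wording here are illustrative, not part of the model's documented interface):

```python
import json

# Illustrative schema: ask the model to reply as a JSON object with these keys.
schema = {"problem": "string", "steps": "list[string]", "answer": "string"}

messages = [
    {
        "role": "user",
        "content": (
            "Solve: what is the derivative of x**3 + 2*x? "
            "Reply ONLY with a JSON object matching this schema: "
            + json.dumps(schema)
        ),
    }
]
```

The `messages` list is then passed to `tokenizer.apply_chat_template` exactly as in the Quick Start example; the model's reply can be validated with `json.loads` before downstream use.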


Quick Start with Hugging Face Transformers🤗

!pip install transformers huggingface_hub accelerate

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Omega-Qwen3-Atom-8B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "A alone can do a piece of work in 6 days and B alone in 8 days. A and B undertook to do it for Rs. 3200. With the help of C, they completed the work in 3 days. How much is to be paid to C?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
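The snippet above locates the end of the thinking block by searching for token id 151668 (`</think>`). Since hard-coded token ids can differ across tokenizer versions, the same split can also be done on the decoded text; a minimal sketch (the sample reply string is hypothetical):

```python
def split_thinking(decoded: str, marker: str = "</think>"):
    """Split a decoded Qwen3-style reply into (thinking, answer) parts.

    Falls back to an empty thinking section when the marker is absent,
    mirroring the token-id based parsing in the Quick Start example.
    """
    head, sep, tail = decoded.partition(marker)
    if not sep:  # no thinking block was emitted
        return "", decoded.strip()
    return head.replace("<think>", "").strip(), tail.strip()

# hypothetical decoded output, for illustration only
sample = "<think>\nCombined rate is 1/3 per day...\n</think>\n\\boxed{400}"
thinking, answer = split_thinking(sample)
print(answer)  # \boxed{400}
```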

Answer

thinking content: <think>
Okay, let's see. So the problem is about A, B, and C working together to complete a piece of work. We need to figure out how much money C should get for his help. The total payment is Rs. 3200, and we have to divide that among A, B, and C based on their contributions. Let me try to break this down step by step.

First, let's understand the work rates of A and B. A can finish the work in 6 days, so his work rate is 1/6 of the work per day. Similarly, B can finish it in 8 days, so his work rate is 1/8 per day. When they work together, their combined work rate would be 1/6 + 1/8. Let me calculate that:

1/6 + 1/8. To add these, find a common denominator, which is 24. So, 4/24 + 3/24 = 7/24. So together, A and B can do 7/24 of the work in one day.

But the problem says that with the help of C, they completed the work in 3 days. That means all three working together finished the job in 3 days. Let's denote C's work rate as 1/x per day, where x is the number of days C would take alone. So, the combined work rate of A, B, and C is 1/6 + 1/8 + 1/x.

Since they completed the work in 3 days, their combined work rate multiplied by 3 should equal 1 (the whole work). So:

(1/6 + 1/8 + 1/x) * 3 = 1

Let me solve for 1/x first. Let's compute 1/6 + 1/8:

As before, 1/6 is 4/24 and 1/8 is 3/24, so together they are 7/24. So:

(7/24 + 1/x) * 3 = 1

Divide both sides by 3:

7/24 + 1/x = 1/3

Subtract 7/24 from both sides:
...

400
\boxed{400}
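The model's final answer can be verified directly: pay is proportional to the fraction of the job each worker completed in the 3 days. A quick check of the work-rate arithmetic with exact fractions:

```python
from fractions import Fraction

total_payment = 3200
rate_a = Fraction(1, 6)  # A finishes the job alone in 6 days
rate_b = Fraction(1, 8)  # B finishes the job alone in 8 days
days = 3

# With C's help the whole job (1 unit) took 3 days, so the combined rate is 1/3.
rate_c = Fraction(1, days) - (rate_a + rate_b)  # 1/3 - 7/24 = 1/24

# C's share of the payment equals C's share of the work done in those 3 days.
share_c = rate_c * days  # 3/24 = 1/8 of the job
print(total_payment * share_c)  # 400
```

This confirms the boxed answer: C did 1/8 of the work, so C is paid Rs. 400.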


Intended Use

  • Math and science tutoring, equation solving, and symbolic reasoning
  • Educational tools for high-school to postgraduate-level STEM
  • Research-grade assistant for physics, chemistry, and applied math
  • Structured technical content generator for papers, lab work, and datasets
  • STEM-focused chatbot/API for integration into science education platforms

Limitations

  • Not trained for open-domain chat or emotional dialogue
  • May struggle with very large codebases or long multi-part tasks
  • Best suited for STEM fields—general language understanding may vary
  • Prioritizes correctness and formality over conversational tone
Model Details

  • Model size: 8.19B params (Safetensors, BF16)
  • Base model: Qwen/Qwen3-8B (itself fine-tuned from Qwen/Qwen3-8B-Base)
  • Training dataset: Open-Omega-Atom-1.5M