AudioOnlyThinker

This model is a lightweight variant of Qwen2.5-Omni-7B with the vision encoder and the Talker speech decoder removed, so it supports only audio and text.

It is intended for audio-to-text instruction following, voice chat, and ASR-style tasks, and supports generation through generate() like any decoder-only model.

πŸ”§ How this model was built

We extracted only the Thinker component from the full Qwen2.5-Omni model (a sketch of the extraction follows this list):

  • βœ… Kept: Audio encoder (audio_tower) + Language model (model)
  • ❌ Removed: Vision encoder (visual) + Talker (speech decoder)
  • βœ… Manually deleted vision_config from config.json
  • βœ… Class modified via subclassing Qwen2_5OmniThinkerForConditionalGeneration

πŸ“¦ Usage: how to use the AudioOnlyThinker class

This model uses a custom subclass AudioOnlyThinker, which disables the vision encoder.

You must define this class before calling .from_pretrained(). Example:

from transformers import Qwen2_5OmniThinkerForConditionalGeneration

class AudioOnlyThinker(Qwen2_5OmniThinkerForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        # Drop the vision tower entirely; only the audio tower and LM remain.
        self.visual = None
        if hasattr(self.config, "vision_config"):
            del self.config.vision_config

    def forward(self, *args, pixel_values=None, pixel_values_videos=None, **kwargs):
        # Swallow any image/video inputs so the parent never touches self.visual.
        return super().forward(*args, pixel_values=None, pixel_values_videos=None, **kwargs)

model = AudioOnlyThinker.from_pretrained("chunhuizng/AudioOnlyThinker")
# The checkpoint is stored in BF16; pass torch_dtype=torch.bfloat16 (and
# optionally device_map="auto") to from_pretrained to load it on a GPU.

Then load the matching audio-only processor:

from audio_only_processor import AudioOnlyProcessor

processor = AudioOnlyProcessor.from_pretrained("chunhuizng/AudioOnlyThinker")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "your_audio.wav"},
            {"type": "text", "text": "What is being said in this audio?"}
        ]
    }
]

# apply_chat_template tokenizes the turn and loads the referenced audio file;
# return_dict=True is needed so the result supports .items() below.
inputs = processor.apply_chat_template(
    conversation, tokenize=True, return_dict=True, return_tensors="pt"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens so only the newly generated reply is decoded.
generated = outputs[:, inputs["input_ids"].shape[1]:]
response = processor.batch_decode(generated, skip_special_tokens=True)[0]
print(response)
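
Since the Thinker is an ordinary decoder-only LM once audio is encoded, text-only turns follow the same path. A minimal sketch reusing the model and processor above (the prompt text is illustrative):

# Text-only turn: no audio entry in the content list.
conversation = [
    {"role": "user", "content": [{"type": "text", "text": "What is a mel spectrogram?"}]}
]
inputs = processor.apply_chat_template(
    conversation, tokenize=True, return_dict=True, return_tensors="pt"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(
    outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])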

License: MIT

Model size: 8.93B params (Safetensors)
Tensor type: BF16