---
license: apache-2.0
language: 
  - zh
  - en
library_name: transformers
tags:
  - qwen2.5
  - audio
  - open-source
  - thinker
pipeline_tag: text-generation
model_type: qwen2_5_omni
base_model: Qwen/Qwen2.5-Omni-7B
---

# AudioOnlyThinker

This model is a lightweight variant of [Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B), customized to **remove the vision encoder** and support only **audio and text**.

It is intended for use in audio-to-text instruction following, voice chat, and ASR-style tasks, and supports generation through `generate()` as with any decoder-only model.

## πŸ”§ How this model was built

We extracted only the `Thinker` component from the full Qwen2.5-Omni model (a rough sketch of the extraction follows the list):

- βœ… Kept: Audio encoder (`audio_tower`) + Language model (`model`)
- ❌ Removed: Vision encoder (`visual`) + Talker (speech decoder)
- βœ… Manually deleted `vision_config` from `config.json`
- βœ… Model class defined by subclassing `Qwen2_5OmniThinkerForConditionalGeneration`
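
A minimal sketch of this extraction, assuming the public `Qwen2_5OmniForConditionalGeneration` class (whose `thinker` attribute holds the audio tower and language model), could look roughly like this; the actual export script may differ:

```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration

# Load the full Omni checkpoint (thinker + talker).
full = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", torch_dtype=torch.bfloat16
)

# Keep only the thinker: audio_tower + language model (the vision tower is dropped below).
thinker = full.thinker
thinker.visual = None  # remove the vision encoder weights
if hasattr(thinker.config, "vision_config"):
    del thinker.config.vision_config  # mirrors the manual config.json edit

# Saving only the thinker leaves the talker (speech decoder) behind.
thinker.save_pretrained("AudioOnlyThinker")
```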

## πŸ“¦ Usage: the `AudioOnlyThinker` class

This model uses a custom subclass `AudioOnlyThinker`, which disables the vision encoder.

You must define this class before calling `.from_pretrained()`. Example:

```python
from transformers import Qwen2_5OmniThinkerForConditionalGeneration

# Custom processor shipped alongside this repository (audio_only_processor.py).
from audio_only_processor import AudioOnlyProcessor


class AudioOnlyThinker(Qwen2_5OmniThinkerForConditionalGeneration):
    def __init__(self, config):
        super().__init__(config)
        # Drop the vision tower entirely; only the audio tower and language model remain.
        self.visual = None
        if hasattr(self.config, "vision_config"):
            del self.config.vision_config

    def forward(self, *args, pixel_values=None, pixel_values_videos=None, **kwargs):
        # Ignore any image/video inputs so nothing is routed to the removed vision tower.
        return super().forward(*args, pixel_values=None, pixel_values_videos=None, **kwargs)


model = AudioOnlyThinker.from_pretrained("chunhuizng/AudioOnlyThinker")
processor = AudioOnlyProcessor.from_pretrained("chunhuizng/AudioOnlyThinker")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "your_audio.wav"},
            {"type": "text", "text": "What is being said in this audio?"},
        ],
    }
]

# return_dict=True returns a dict of tensors (input_ids, attention_mask, audio features)
# rather than bare token ids, so the inputs can be moved to the model device below.
inputs = processor.apply_chat_template(
    conversation, tokenize=True, return_dict=True, return_tensors="pt"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(**inputs, max_new_tokens=128)

response = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(response)
```
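
`AudioOnlyProcessor` is provided by the `audio_only_processor.py` file distributed with this repository. As a rough illustration only (not the actual file), such a wrapper could be as simple as a `Qwen2_5OmniProcessor` subclass that never forwards image or video inputs:

```python
from transformers import Qwen2_5OmniProcessor


class AudioOnlyProcessor(Qwen2_5OmniProcessor):
    """Illustrative sketch -- the shipped audio_only_processor.py is authoritative."""

    def __call__(self, *args, images=None, videos=None, **kwargs):
        # Drop any vision inputs so only text and audio reach the model.
        return super().__call__(*args, images=None, videos=None, **kwargs)
```

The `forward` override in `AudioOnlyThinker` serves the same purpose on the model side: any `pixel_values` that slip through preprocessing are discarded instead of being routed to the removed vision tower.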
