---
license: gpl-3.0
pipeline_tag: any-to-any
tags:
- omni
---

# Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model

[Paper](https://arxiv.org/abs/2506.13642) | [Code](https://github.com/ictnlp/Stream-Omni) | [Model](https://huggingface.co/ICTNLP/stream-omni-8b) | [Dataset](https://huggingface.co/datasets/ICTNLP/InstructOmni)

> [**Shaolei Zhang**](https://zhangshaolei1998.github.io/), [**Shoutao Guo**](https://scholar.google.com.hk/citations?user=XwHtPyAAAAAJ), [**Qingkai Fang**](https://fangqingkai.github.io/), [**Yan Zhou**](https://zhouyan19.github.io/zhouyan/), [**Yang Feng**](https://people.ucas.edu.cn/~yangfeng?language=en)\*

For the introduction and usage of Stream-Omni, refer to [https://github.com/ictnlp/Stream-Omni](https://github.com/ictnlp/Stream-Omni).

Stream-Omni is an end-to-end language-vision-speech chatbot that simultaneously supports interaction across various modality combinations, with the following features💡:

- **Omni Interaction**: Supports any combination of multimodal inputs, including text, vision, and speech, and generates both text and speech responses.
- **Seamless "see-while-hear" Experience**: Simultaneously outputs *intermediate textual results* (e.g., ASR transcriptions and model responses) during speech interaction, like the advanced voice service of GPT-4o.
- **Efficient Training**: Requires only a small amount of omni-modal data for training.
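
As a minimal sketch of getting started (assuming `huggingface_hub` is installed; the local directory name is arbitrary), the checkpoint can be downloaded from this repository and then used with the inference scripts provided in the Stream-Omni GitHub repository:

```python
from huggingface_hub import snapshot_download

# Download the Stream-Omni checkpoint locally.
# Actual inference is run with the scripts from
# https://github.com/ictnlp/Stream-Omni.
local_dir = snapshot_download(
    repo_id="ICTNLP/stream-omni-8b",
    local_dir="./stream-omni-8b",  # hypothetical target path
)
print(f"Checkpoint downloaded to: {local_dir}")
```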