leduckhai committed
Commit 459d0ab · verified · 1 Parent(s): 5543d0c

Update README.md

Files changed (1)
  1. README.md +17 -2
README.md CHANGED
@@ -176,8 +176,7 @@ tags:
  </p>

  * **Abstract:**
- Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology improves patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics. In this work, we introduce \textit{MultiMed}, the first multilingual medical ASR dataset, along with the first collection of small-to-large end-to-end medical ASR models, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese. To our best knowledge, \textit{MultiMed} stands as the world’s largest medical ASR dataset across all major benchmarks: total duration, number of recording conditions, number of accents, and number of speaking roles. Furthermore, we present the first multilinguality study for medical ASR, which includes reproducible empirical baselines, a monolinguality-multilinguality analysis, Attention Encoder Decoder (AED) vs Hybrid comparative study and a linguistic analysis. We present practical ASR end-to-end training schemes optimized for a fixed number of trainable parameters that are common in industry settings. All code, data, and models are available online: [https://github.com/leduckhai/MultiMed/tree/master/MultiMed](https://github.com/leduckhai/MultiMed/tree/master/MultiMed).
-
+ Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology improves patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics. In this work, we introduce MultiMed, the first multilingual medical ASR dataset, along with the first collection of small-to-large end-to-end medical ASR models, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese. To the best of our knowledge, MultiMed stands as **the world’s largest medical ASR dataset across all major benchmarks**: total duration, number of recording conditions, number of accents, and number of speaking roles. Furthermore, we present the first multilinguality study for medical ASR, which includes reproducible empirical baselines, a monolinguality-multilinguality analysis, an Attention Encoder Decoder (AED) vs. Hybrid comparative study, and a linguistic analysis. We present practical ASR end-to-end training schemes optimized for a fixed number of trainable parameters that are common in industry settings. All code, data, and models are available online: [https://github.com/leduckhai/MultiMed/tree/master/MultiMed](https://github.com/leduckhai/MultiMed/tree/master/MultiMed).
  * **Citation:**
  Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs/2409.14074)

@@ -190,6 +189,22 @@ Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs
  }
  ```

+ ## Dataset and Pre-trained Models:
+
+ Dataset: [🤗 HuggingFace dataset](https://huggingface.co/datasets/leduckhai/MultiMed), [Papers with Code dataset](https://paperswithcode.com/dataset/multimed)
+
+ Pre-trained models: [🤗 HuggingFace models](https://huggingface.co/leduckhai/MultiMed)
+
+ | Model Name | Description | Link |
+ |------------------|--------------------------------------------|----------------------------------------------------------------------|
+ | `Whisper-Small-Chinese` | Small model fine-tuned on the medical Chinese set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-chinese) |
+ | `Whisper-Small-English` | Small model fine-tuned on the medical English set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-english) |
+ | `Whisper-Small-French` | Small model fine-tuned on the medical French set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-french) |
+ | `Whisper-Small-German` | Small model fine-tuned on the medical German set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-german) |
+ | `Whisper-Small-Vietnamese` | Small model fine-tuned on the medical Vietnamese set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-vietnamese) |
+ | `Whisper-Small-Multilingual` | Small model fine-tuned on the multilingual medical set (5 languages) | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-multilingual) |
+
+
  ## Contact:

  If any links are broken, please contact me for fixing!
 
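As a usage note for the links added in this diff, below is a minimal, hedged sketch of loading the MultiMed dataset and one of the fine-tuned Whisper-Small checkpoints with the 🤗 `datasets` and `transformers` libraries. The config name `"English"`, the `"test"` split, the `"audio"` column, and the `asr/whisper-small-english` subfolder layout of `leduckhai/MultiMed-ST` are assumptions inferred from the links above, not verified against the repositories.

```python
# Hedged sketch: load MultiMed and transcribe one example with a fine-tuned
# Whisper-Small checkpoint. Config, split, column, and subfolder names are assumptions.
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Assumed dataset config "English" and split "test".
ds = load_dataset("leduckhai/MultiMed", "English", split="test")

# Assumed checkpoint location: subfolder asr/whisper-small-english of the MultiMed-ST repo.
repo_id = "leduckhai/MultiMed-ST"
subfolder = "asr/whisper-small-english"
processor = WhisperProcessor.from_pretrained(repo_id, subfolder=subfolder)
model = WhisperForConditionalGeneration.from_pretrained(repo_id, subfolder=subfolder)

# Feature-extract the first recording (assumed "audio" column) and decode greedily.
sample = ds[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```

The other checkpoints in the table, including the multilingual one, would presumably be loaded the same way by swapping the subfolder name.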