By pretraining on large amounts of unlabeled speech data, the model can be finetuned on as little as one hour of labeled speech in a low-resource language and still achieve high-quality results, comparable to previous ASR systems trained on 100x more labeled data.