# Switch the tokenizer and the adapter weights to French ("fra")
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Run inference on the French audio sample
inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits

# Decode the predicted token ids into text
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
"ce dernier est volé tout au long de l'histoire romaine"
In the same way, the language can be switched to any of the other supported languages.
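For example, here is a minimal sketch of switching the same processor and model to German ("deu"), assuming a 16 kHz waveform named de_sample has already been loaded (the variable name is illustrative, not from the original example):

# List the ISO 639-3 language codes supported by the loaded checkpoint
print(processor.tokenizer.vocab.keys())

# Switch both the tokenizer and the adapter weights to German ("deu")
processor.tokenizer.set_target_lang("deu")
model.load_adapter("deu")

# Transcribe the German sample (de_sample is assumed to be a 16 kHz waveform)
inputs = processor(de_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
transcription = processor.decode(ids)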