Whisper
Overview
The Whisper model was proposed in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever.
The abstract from the paper is the following:
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
This model was contributed by Arthur Zucker. The TensorFlow version of this model was contributed by amyeroberts.
The original code can be found here.
Usage tips

- The model usually performs well without requiring any fine-tuning.
- Whisper follows a classic encoder-decoder architecture, which means it relies on the [~generation.GenerationMixin.generate] function for inference (see the sketch after this list).
- One can use [WhisperProcessor] to prepare audio for the model and to decode the predicted IDs back into text.
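For multilingual checkpoints, the transcription language and the task can be passed directly to [~generation.GenerationMixin.generate]. The sketch below is illustrative: it assumes a reasonably recent Transformers release (where `generate` accepts `language` and `task` arguments) and uses the multilingual openai/whisper-tiny checkpoint.

```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Multilingual checkpoint (no ".en" suffix); other openai/whisper-* sizes work the same way.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# Pin the language and task instead of relying on automatic language detection;
# task="translate" would translate the speech into English instead.
predicted_ids = model.generate(input_features, language="english", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```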

To convert the model and the processor, we recommend using the following:

```bash
python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_preprocessor True
```

The script will automatically determine all necessary parameters from the OpenAI checkpoint. The tiktoken library needs to be installed
to convert the OpenAI tokenizer to the tokenizers version.
Inference
Here is a step-by-step guide to transcribing an audio sample using a pre-trained Whisper model:
```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]

# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

# Use the model and processor to transcribe the audio:
input_features = processor(
    waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features

# Generate token ids
predicted_ids = model.generate(input_features)

# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
# ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```
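The same transcription can also be obtained with the high-level pipeline API, which can additionally chunk audio longer than Whisper's 30-second window via its `chunk_length_s` argument. A minimal sketch, reusing the checkpoint above and assuming the audio is already sampled at 16 kHz:

```python
from datasets import load_dataset
from transformers import pipeline

# Build an automatic-speech-recognition pipeline around the same checkpoint as above.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
result = asr(ds[0]["audio"])  # the datasets audio dict is accepted directly
print(result["text"])
```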

Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Whisper. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

- A fork with a script to convert a Whisper model in Hugging Face format to OpenAI format. 🌎
Usage example:

```bash
pip install -U openai-whisper
python convert_hf_to_openai.py \
    --checkpoint openai/whisper-tiny \
    --whisper_dump_path whisper-tiny-openai.pt
```
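Once converted, the checkpoint can be loaded with the openai-whisper package. A minimal sketch, assuming the dump path from the command above; "audio.mp3" is a placeholder for a local audio file:

```python
import whisper  # from the openai-whisper package installed above

# Load the converted checkpoint and transcribe a local file.
model = whisper.load_model("whisper-tiny-openai.pt")
result = model.transcribe("audio.mp3")
print(result["text"])
```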

WhisperConfig
[[autodoc]] WhisperConfig
WhisperTokenizer
[[autodoc]] WhisperTokenizer
    - set_prefix_tokens
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
    - batch_decode
    - decode
    - basic_normalize
    - normalize
WhisperTokenizerFast
[[autodoc]] WhisperTokenizerFast
    - set_prefix_tokens
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
    - batch_decode
    - decode
    - basic_normalize
    - normalize
WhisperFeatureExtractor
[[autodoc]] WhisperFeatureExtractor
    - __call__
WhisperProcessor
[[autodoc]] WhisperProcessor
    - __call__
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode

WhisperModel
[[autodoc]] WhisperModel
    - forward
    - _mask_input_features
WhisperForConditionalGeneration
[[autodoc]] WhisperForConditionalGeneration
    - forward
    - generate
WhisperForCausalLM
[[autodoc]] WhisperForCausalLM
    - forward
WhisperForAudioClassification
[[autodoc]] WhisperForAudioClassification
    - forward

TFWhisperModel
[[autodoc]] TFWhisperModel
    - call
TFWhisperForConditionalGeneration
[[autodoc]] TFWhisperForConditionalGeneration
    - call

FlaxWhisperModel
[[autodoc]] FlaxWhisperModel
    - __call__
FlaxWhisperForConditionalGeneration
[[autodoc]] FlaxWhisperForConditionalGeneration
    - __call__
FlaxWhisperForAudioClassification
[[autodoc]] FlaxWhisperForAudioClassification
    - __call__