```python
from multiprocessing import get_context

# otherwise, the LM won't be available to the pool's sub-processes
# select the number of processes and batch_size based on the number of CPU cores available and on the dataset size
with get_context("fork").Pool(processes=2) as pool:
    result = dataset.map(
        map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
    )

result["transcription"][:2]
# ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
```
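The snippet above assumes a loaded `dataset` and a user-defined `map_to_pred` function. A minimal sketch of `map_to_pred`, assuming a `Wav2Vec2ProcessorWithLM` instance `processor` and a `Wav2Vec2ForCTC` model `model` have already been loaded, could look like this:

```python
import torch

def map_to_pred(batch, pool):
    # featurize the raw speech arrays; padding makes the batch rectangular
    inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # pass the shared pool so the beam-search decoder reuses the LM across sub-processes
    batch["transcription"] = processor.batch_decode(logits.numpy(), pool).text
    return batch
```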
## Wav2Vec2 specific outputs
[[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput
## Wav2Vec2Model
[[autodoc]] Wav2Vec2Model
- forward
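For orientation, the bare `Wav2Vec2Model` maps raw waveforms to frame-level hidden states. A minimal usage sketch (the checkpoint name and the silent dummy audio are illustrative):

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2Model

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

# one second of silent 16 kHz mono audio stands in for real speech
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```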
## Wav2Vec2ForCTC
[[autodoc]] Wav2Vec2ForCTC
- forward
- load_adapter
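`load_adapter` is used with the multilingual MMS checkpoints to swap the language-specific adapter weights at runtime. A short sketch, assuming the `facebook/mms-1b-all` checkpoint:

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# switch both the tokenizer vocabulary and the adapter weights to French
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
```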
## Wav2Vec2ForSequenceClassification
[[autodoc]] Wav2Vec2ForSequenceClassification
- forward
## Wav2Vec2ForAudioFrameClassification
[[autodoc]] Wav2Vec2ForAudioFrameClassification
- forward
## Wav2Vec2ForXVector
[[autodoc]] Wav2Vec2ForXVector
- forward
## Wav2Vec2ForPreTraining
[[autodoc]] Wav2Vec2ForPreTraining
- forward
## TFWav2Vec2Model
[[autodoc]] TFWav2Vec2Model
- call
## TFWav2Vec2ForSequenceClassification
[[autodoc]] TFWav2Vec2ForSequenceClassification
- call
## TFWav2Vec2ForCTC
[[autodoc]] TFWav2Vec2ForCTC
- call
## FlaxWav2Vec2Model
[[autodoc]] FlaxWav2Vec2Model
- __call__
## FlaxWav2Vec2ForCTC
[[autodoc]] FlaxWav2Vec2ForCTC
- __call__
## FlaxWav2Vec2ForPreTraining
[[autodoc]] FlaxWav2Vec2ForPreTraining
- __call__