# whisper-large-v3-turbo-q4
This model was converted to MLX format from [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo).
## Use with mlx
```bash
pip install mlx-whisper
```
```python
import mlx_whisper

# Transcribe an audio file, loading the quantized model from the Hugging Face Hub.
result = mlx_whisper.transcribe(
    "FILE_NAME",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-q4",
)
```
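`transcribe` follows the result layout of the reference openai-whisper package, so the full transcription should be available under the `"text"` key (a minimal usage sketch; the key name is assumed from the upstream Whisper API):

```python
# Assumes the openai-whisper result format: "text" holds the full transcription.
print(result["text"])
```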