Model Memory Utility
Calculate model vRAM usage
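The core of such an estimate is simple arithmetic: parameter count times bytes per parameter for the chosen dtype. The sketch below is illustrative only (the helper name and the weights-only scope are my own assumptions, not the Space's actual code):

```python
# Rough vRAM estimate for holding model weights in memory.
# Hypothetical helper for illustration; not the Model Memory Utility's code.
DTYPE_BYTES = {"float32": 4, "float16": 2, "bfloat16": 2, "int8": 1}

def estimate_weight_vram_gb(n_params: float, dtype: str = "float16") -> float:
    """Memory for the weights alone. Real inference also needs activations
    and the CUDA context; training adds gradients and optimizer states."""
    return n_params * DTYPE_BYTES[dtype] / 1024**3

# A 7B-parameter model in float16 needs roughly 13 GiB just for weights.
print(round(estimate_weight_vram_gb(7e9, "float16"), 1))
```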
Official organization for the Hugging Face Accelerate library
pip install -U huggingface_hub[hf_xet]
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")

image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
SmolVLM-2 and SigLIP-2 are now available in transformers, in dedicated releases: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.
New command: huggingface-cli upload-large-folder. Designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡

pip install huggingface_hub==0.25.0
More updates from the huggingface_hub Python library:
- new ModelHubMixin integrations
- HfFileSystem
- PyTorchModelHubMixin now supports configs and safetensors!
- audio-to-audio is now supported in the InferenceClient!