gguf quantized version of wan2.2 models

  • drag wan (gguf) to > ./ComfyUI/models/diffusion_models
  • drag umt5xxl (text encoder) to > ./ComfyUI/models/text_encoders
  • drag pig (vae) to > ./ComfyUI/models/vae (a folder layout sketch follows below)
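For reference, a minimal Python sketch that checks the three folders above for the expected files, assuming a default ComfyUI install at ./ComfyUI; the glob patterns are placeholders, so adjust them to whichever quant you actually downloaded:

```python
# sketch: confirm the downloaded files sit in the expected ComfyUI folders
# assumes a default install at ./ComfyUI; the glob patterns are placeholders
from pathlib import Path

models = Path("./ComfyUI/models")

expected = {
    "diffusion_models": "*wan*.gguf",   # the wan2.2 diffusion model
    "text_encoders":    "*umt5*.gguf",  # the umt5xxl text encoder
    "vae":              "*pig*",        # the pig vae
}

for folder, pattern in expected.items():
    path = models / folder
    found = list(path.glob(pattern)) if path.is_dir() else []
    names = ", ".join(f.name for f in found) or "nothing found"
    print(f"{folder}: {names}")
```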
Prompt: a cute anime girl picking up a little pinky pig and moving quickly
Negative Prompt: blurry ugly bad
Prompt: drone shot of a volcano erupting with a pig walking on it
Negative Prompt: blurry ugly bad

screenshot tip: for 5b model, use pig-wan2-vae [1.41GB]

screenshot tip: for 14b model, use pig-wan-vae [254MB]
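Put differently, a small illustrative mapping of model variant to vae file, using the names from the two tips above (the helper itself is hypothetical, not part of any workflow):

```python
# illustrative only: which pig vae pairs with which wan2.2 variant
# names and sizes are taken from the tips above
VAE_FOR_VARIANT = {
    "5b":  "pig-wan2-vae",   # ~1.41 GB
    "14b": "pig-wan-vae",    # ~254 MB
}

def vae_for(variant: str) -> str:
    """Return the vae file name to load for a given wan2.2 variant."""
    return VAE_FOR_VARIANT[variant.lower()]

print(vae_for("14b"))  # pig-wan-vae
```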

update

  • upgrade your node (see the last item in the reference section) for new/full quant support; a quick way to check which quant a file uses is sketched below
  • get more umt5xxl gguf encoders either here or here
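
If you want to double-check which quant a downloaded file actually is (handy after upgrading the node), the gguf Python package can read the file header; a minimal sketch, assuming `pip install gguf` and a placeholder file name:

```python
# sketch: list the tensor quant types inside a gguf file
# requires the `gguf` package (pip install gguf); the path is a placeholder
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("wan2.2-q4_0.gguf")  # placeholder: point at your download

counts = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, n in counts.most_common():
    print(f"{qtype}: {n} tensors")
```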

reference
