---
base_model:
- QuantStack/Wan2.1_T2V_14B_FusionX_VACE
base_model_relation: quantized
library_name: gguf
quantized_by: lym00
tags:
- text-to-video
- image-to-video
- video-to-video
- quantized
language:
- en
license: apache-2.0
---
This is a GGUF conversion of [QuantStack/Wan2.1_T2V_14B_FusionX_VACE](https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE).
All quantized versions were created from the base FP16 model using city96's conversion scripts, available in the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) repository.
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | -------------------------------- | ------------------------------ | ---------------- |
| Main Model | Wan2.1_T2V_14B_FusionX_VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
[**ComfyUI example workflow**](https://docs.comfy.org/tutorials/video/wan/vace)
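To fetch the files into the folders above, here is a minimal sketch using `huggingface-cli`. The main-model repo ID (`lym00/Wan2.1_T2V_14B_FusionX_VACE-GGUF`), the `Q4_K_M` and `Q8_0` filename patterns, and the `COMFYUI_DIR` location are assumptions — check the actual file listings and your install path before running:

```shell
# Assumed ComfyUI install location; adjust to your setup.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
mkdir -p "$COMFYUI_DIR/models/unet" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/vae"

# Main model -- repo ID and quant pattern are assumptions; pick the
# quantization that fits your VRAM from this repo's file listing.
huggingface-cli download lym00/Wan2.1_T2V_14B_FusionX_VACE-GGUF \
  --include "*Q4_K_M*.gguf" --local-dir "$COMFYUI_DIR/models/unet"

# Text encoder (GGUF variant from the table above)
huggingface-cli download city96/umt5-xxl-encoder-gguf \
  --include "*Q8_0*.gguf" --local-dir "$COMFYUI_DIR/models/text_encoders"

# VAE (exact filename taken from the table above)
huggingface-cli download Kijai/WanVideo_comfy Wan2_1_VAE_bf16.safetensors \
  --local-dir "$COMFYUI_DIR/models/vae"
```

After downloading, load the main model with the `Unet Loader (GGUF)` node from ComfyUI-GGUF rather than the standard checkpoint loader.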
### Notes
*All original licenses and restrictions from the base models still apply.*
## Reference
- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).