This is a GGUF conversion of MAGREF-Video/MAGREF.

All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.

Usage

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
|------|------|----------|----------|
| Main Model | MAGREF_Wan2.1_I2V_14B-GGUF | ComfyUI/models/unet | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| CLIP Vision | clip_vision_h | ComfyUI/models/clip_vision | Safetensors |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
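
If you prefer to fetch the files from a script rather than the browser, a minimal sketch using the `huggingface_hub` Python package is shown below. The quant filename is a placeholder; pick the actual file you want from this repository's file list, and download the text encoder, CLIP vision model, and VAE from their respective repositories into the folders listed above.

```python
# Minimal sketch (assumes `pip install huggingface_hub`): download one GGUF
# quant of the main model into the ComfyUI folder shown in the table above.
from huggingface_hub import hf_hub_download

COMFYUI_DIR = "ComfyUI"  # adjust to your ComfyUI installation path

hf_hub_download(
    repo_id="QuantStack/MAGREF_Wan2.1_I2V_14B-GGUF",
    # Placeholder filename -- replace with the quantization level you want
    # (see this repository's file list for the exact names).
    filename="MAGREF_Wan2.1_I2V_14B-Q4_K_M.gguf",
    local_dir=f"{COMFYUI_DIR}/models/unet",
)
```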

ComfyUI example workflow

Demos

Inputs: two reference images. Output: generated video.

Prompt: "Two men taking a selfie together in an indoor setting. One of them, with a bright and expressive smile, holds the smartphone at arm's length to frame the shot. He has voluminous, natural-textured hair and appears enthusiastic and energetic. Standing beside him is another man with neatly styled hair and a composed expression, wearing a white athletic jersey with black accents."

Notes

All original licenses and restrictions from the base models still apply.

Reference

MAGREF-Video/MAGREF (original model)
city96/ComfyUI-GGUF (conversion scripts)
