wsbagnsv1 committed
Commit 73d3998 · verified · 1 Parent(s): a2a279b

Update README.md

Files changed (1)
  1. README.md +18 -3
README.md CHANGED
@@ -1,9 +1,24 @@
  ---
  license: apache-2.0
  base_model:
  - Wan-AI/Wan2.1-VACE-14B
  ---
- Work in progress Seems to work at least the v2v and start end frame part, ill test further though before making the modelcard.
-
- Please share your results with me on discord to check if everything is working correctly (;
- th3pun1sh3r

  ---
  license: apache-2.0
+ library_name: gguf
  base_model:
  - Wan-AI/Wan2.1-VACE-14B
+ tags:
+ - video
+ - video-generation
+ pipeline_tag: text-to-video
  ---
+
+ This is a direct GGUF conversion of [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B).
+
+ All quants are created from the FP32 base file, though I have only uploaded Q8_0 and below; if you want the F16 or BF16 version, I can upload it on request.
+
+ The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node.
+
+ Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.
+
+ The VAE can be downloaded from [this repository by Kijai](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors).
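
A minimal sketch of fetching the files into the right folders with `huggingface_hub`; the GGUF repo id and filename below are placeholders (check this repo's file list for the real names), and the VAE is placed in the usual `ComfyUI/models/vae` folder:

```python
# Sketch: download a quantized model and the VAE into the ComfyUI model folders.
# NOTE: the GGUF repo id and filename are assumptions - check the repository's
# "Files and versions" tab for the actual names before running.
from huggingface_hub import hf_hub_download

comfyui_root = "ComfyUI"  # adjust to your ComfyUI install path

# GGUF diffusion model -> ComfyUI/models/unet
hf_hub_download(
    repo_id="wsbagnsv1/Wan2.1-VACE-14B-GGUF",  # hypothetical repo id
    filename="Wan2.1-VACE-14B-Q8_0.gguf",      # hypothetical filename
    local_dir=f"{comfyui_root}/models/unet",
)

# VAE from Kijai's repository (linked above) -> ComfyUI/models/vae
hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Wan2_1_VAE_bf16.safetensors",
    local_dir=f"{comfyui_root}/models/vae",
)
```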
+
+ Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.
+
+ For conversion I used the conversion scripts from [city96](https://huggingface.co/city96).
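
A rough sketch of what such a conversion flow can look like with the ComfyUI-GGUF tooling: convert the source weights to an unquantized GGUF first, then quantize with a patched llama.cpp build. The script path, flags, quant list, and file names below are assumptions, not the exact commands used for this upload:

```python
# Rough sketch of a GGUF conversion/quantization flow (NOT the exact commands
# used here). Assumes city96's ComfyUI-GGUF "tools/convert.py" script and a
# llama.cpp quantize binary patched per that repo's instructions; all paths
# and file names are placeholders.
import subprocess

SRC = "Wan2.1-VACE-14B.safetensors"      # hypothetical FP32 source checkpoint
BASE_GGUF = "Wan2.1-VACE-14B-F16.gguf"   # hypothetical unquantized intermediate file

# Step 1: convert the source weights to an unquantized GGUF file.
subprocess.run(["python", "tools/convert.py", "--src", SRC], check=True)

# Step 2: quantize into the smaller variants with the patched quantize binary.
for qtype in ["Q8_0", "Q6_K", "Q5_K_M", "Q4_K_M"]:
    subprocess.run(
        ["./llama-quantize", BASE_GGUF, f"Wan2.1-VACE-14B-{qtype}.gguf", qtype],
        check=True,
    )
```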