Update README.md
README.md CHANGED
@@ -12,8 +12,16 @@ language:
 license: apache-2.0
 ---
 
+# vrgamedevgirl84/Wan14BT2VFusioniX GGUF Conversion
+
 This is a GGUF conversion of [vrgamedevgirl84/Wan14BT2VFusioniX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX) with additional VACE functionality.
 
+All quantized versions were created from the base FP16 model [Wan14BT2VFusioniX_fp16_.safetensors](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX/blob/main/Wan14BT2VFusioniX_fp16_.safetensors) using the conversion scripts provided by city96, available at the [ComfyUI-GGUF GitHub repository](https://github.com/city96/ComfyUI-GGUF/tree/main/tools).
+
+The process involved first patching and converting the safetensors model to an FP16 GGUF, then quantizing it, and finally applying the 5D fixes.
+
+## Usage
+
 The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
 
 | Type | Name | Location | Download |
@@ -26,4 +34,8 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 ### Notes
 
-*As this is a quantized model not a finetune, all the same restrictions/original license terms still apply.*
+*As this is a quantized model not a finetune, all the same restrictions/original license terms still apply.*
+
+## Reference
+
+- For an overview of quantization types, please see the [LLaMA 3 8B Scoreboard quantization chart](https://github.com/ggml-org/llama.cpp/blob/b3962/examples/perplexity/README.md#llama-3-8b-scoreboard).
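
The conversion pipeline summarized in the added README text (patch, convert to an FP16 GGUF, quantize, then re-apply the 5D fixes) could be scripted roughly as below. This is a sketch only: the `convert.py` script and lcpp patch live in the ComfyUI-GGUF tools directory, but the 5D-fix script name, all flags, and every filename here are assumptions; check that repository for the exact, current invocations.

```python
# Rough sketch of the pipeline described in the README above:
# safetensors -> FP16 GGUF -> quantized GGUF -> 5D fixes re-applied.
# Script names, flags, and output filenames are assumptions based on the
# ComfyUI-GGUF tools directory; consult that repo for the exact steps.
import subprocess
from pathlib import Path

SRC = Path("Wan14BT2VFusioniX_fp16_.safetensors")   # base FP16 checkpoint
FP16_GGUF = Path("Wan14BT2VFusioniX-F16.gguf")      # adjust to convert.py's actual output name
QUANT = "Q5_K_M"                                    # example quantization type
OUT = Path(f"Wan14BT2VFusioniX-{QUANT}.gguf")

def run(cmd):
    """Run one pipeline step and fail loudly if it errors."""
    print("->", " ".join(str(c) for c in cmd))
    subprocess.run([str(c) for c in cmd], check=True)

# 1. Convert the patched safetensors model to an FP16 GGUF (tools/convert.py).
run(["python", "convert.py", "--src", SRC])

# 2. Quantize with a llama-quantize binary built from llama.cpp patched with
#    the lcpp patch shipped alongside convert.py.
run(["./llama-quantize", FP16_GGUF, OUT, QUANT])

# 3. Re-apply the 5D tensors the quantizer cannot carry through (the "5D
#    fixes"); this script name and interface are assumed here.
run(["python", "fix_5d_tensors.py", "--src", OUT])
```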
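For the usage step, placement boils down to copying the chosen `.gguf` file into the folder the ComfyUI-GGUF loader scans. A minimal sketch follows, assuming a default ComfyUI install location and the `models/unet` folder used by ComfyUI-GGUF for GGUF diffusion models; the Location column of the README's table is authoritative for this repo's files.

```python
# Minimal sketch of placing a downloaded GGUF where the ComfyUI-GGUF custom
# node looks for it. The install path and the models/unet subfolder are
# assumptions; the Location column in the README's table is authoritative.
import shutil
from pathlib import Path

COMFYUI_DIR = Path.home() / "ComfyUI"                 # adjust to your install
downloaded = Path("Wan14BT2VFusioniX-Q5_K_M.gguf")    # hypothetical quant file

target_dir = COMFYUI_DIR / "models" / "unet"
target_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(downloaded, target_dir / downloaded.name)
print(f"placed {downloaded.name} in {target_dir}")
```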