---
base_model: Lightricks/LTX-Video
library_name: gguf
quantized_by: wsbagnsv1
tags:
- ltx-video
- text-to-video
- image-to-video
language:
- en
license: other
license_link: LICENSE.md
---

This is a direct GGUF conversion of the 13b-0.9.7-distilled variant from [Lightricks/LTX-Video](https://huggingface.co/Lightricks/LTX-Video).

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node.

Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | ------------------------ | --------------------------------- | -------- |
| Main Model | ltxv-13b-0.9.7-distilled | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | T5-V1.1-XXL-Encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main) / [GGUF](https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf) |
| VAE | ltxv-13b-0.9.7-vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF/blob/main/ltxv-13b-0.9.7-vae-BF16.safetensors) |

[**Example workflow**](https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/exampleworkflow.json) - based on the [official example workflow](https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows/)

### Notes

*As this is a quantized model, not a finetune, all the original restrictions and license terms still apply.*

*ComfyUI now supports these GGUFs natively, so you only need to update ComfyUI to the latest version; if issues persist, also update all the nodes in the workflow.*

*Other T5 clips will probably work as well; just use one that you like. They are available as Safetensors or GGUFs. The best one I tried was the T5 v1.1 XXL encoder.*

*LoRAs do work, but you need to follow the steps in the example workflow, and do not use torch.compile together with LoRAs!*

*TeaCache works with LTX, but not very well at the moment. In my testing, rel_l1_thresh only worked at 0.01, and even that caused noticeable quality drops, so it is best left disabled.*
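
If you prefer to fetch the files from a script instead of the browser, the sketch below uses `huggingface_hub` to place them into the folders from the table above. The quantization filenames for the main model and text encoder are placeholders (check each repo's file list for the variant you actually want), and the `ComfyUI` base path is an assumption about your local install.

```python
# Minimal sketch: download the GGUF model, text encoder and VAE into a ComfyUI install.
# Assumptions: the Q4_K_M / Q5_K_M filenames are placeholders -- pick the quantization
# you want from each repo's file list, and adjust COMFYUI to your installation path.
from pathlib import Path

from huggingface_hub import hf_hub_download

COMFYUI = Path("ComfyUI")  # path to your local ComfyUI installation (assumption)

downloads = [
    # (repo_id, filename, target folder)
    ("wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF",
     "ltxv-13b-0.9.7-distilled-Q4_K_M.gguf",        # placeholder quant name
     COMFYUI / "models" / "diffusion_models"),
    ("city96/t5-v1_1-xxl-encoder-gguf",
     "t5-v1_1-xxl-encoder-Q5_K_M.gguf",             # placeholder quant name
     COMFYUI / "models" / "text_encoders"),
    ("wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF",
     "ltxv-13b-0.9.7-vae-BF16.safetensors",         # VAE linked in the table above
     COMFYUI / "models" / "vae"),
]

for repo_id, filename, target in downloads:
    target.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
    print(f"Downloaded {filename} -> {path}")
```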