Converted with ComfyUI-GGUF/Tools

Shoutout to ostris, lodestone, city96, and others for being inspiring individuals.

Want other GGUF quantizations? Check out hum-ma's repo (Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_0, Q3_K_S, Q3_K_M).

Model details:
- Format: GGUF
- Model size: 8.16B params
- Architecture: flux
- Quantization: 4-bit


Model: Clybius/Flex.1-alpha-Q4_K_M-GGUF