Thank you

#10 opened by cloud19

The model looks excellent, thank you. I deployed it in ComfyUI and recreated your t2i workflow; it works with a light LoRA as before. On an H200, inference takes about 3–4 seconds per generation. This is likely the best open-source reference model available today.
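For anyone who wants to sanity-check a similar per-generation latency outside of ComfyUI, here is a minimal diffusers sketch of how I would time it. The model id, LoRA id, and step count are placeholders, not values from this repo:

```python
import time

import torch
from diffusers import DiffusionPipeline

# Placeholder model id -- substitute the checkpoint from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "org/model-name",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Placeholder LoRA id -- substitute the light LoRA you are using.
pipe.load_lora_weights("org/light-lora")

prompt = "a photo of a red fox in the snow"

# Warm-up run so compilation and caching do not skew the measurement.
pipe(prompt, num_inference_steps=8)

# Synchronize around the timed call so GPU work is fully counted.
torch.cuda.synchronize()
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=8).images[0]
torch.cuda.synchronize()

print(f"generation took {time.perf_counter() - start:.2f} s")
image.save("out.png")
```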
The only request, as mentioned earlier, is a lighter model that would make scaling easier. I'll wait for that.