How to use outside of ComfyUI?

#3
by gabdab - opened

Is there a tutorial somewhere that I can refer to for usage with python llama_cpp or ctransformers?

Owner

It seems you could find that in the OmniGen2 repo.

Copilot answer (GitHub code inspection): "Based on the code in your workspace, OmniGen2 expects models in HuggingFace or PyTorch checkpoint formats (see OmniGen2Pipeline.from_pretrained and OmniGen2Transformer2DModel.from_pretrained). There is no code or loader for the GGUF format (used by llama.cpp and similar projects) in the pipeline or inference scripts.

To use GGUF models with OmniGen2:

- You would need to convert your GGUF model to a format supported by HuggingFace Transformers or PyTorch (see the sketch after this quote for one rough approach).
- Alternatively, you would need to implement a GGUF loader and modify the pipeline to accept GGUF models, which is non-trivial and not currently supported.
Summary:
OmniGen2 does not natively support GGUF models. You must use models in HuggingFace or PyTorch formats. If you need GGUF support, you would have to add custom code for loading and inference.

For reference, see OmniGen2Pipeline.from_pretrained and OmniGen2Transformer2DModel.from_pretrained for supported formats."
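
For the conversion route in the first option above, here is a rough sketch: read the tensors out of the .gguf file and re-save them as a PyTorch state dict. This assumes a recent gguf-py (`pip install gguf`) that ships gguf.quants.dequantize; the file names are placeholders, and the tensor key names would almost certainly need remapping to whatever OmniGen2Transformer2DModel expects before the result is actually loadable.

```python
import torch
from gguf import GGUFReader
from gguf.quants import dequantize

reader = GGUFReader("omnigen2.gguf")  # placeholder path

state_dict = {}
for t in reader.tensors:
    # dequantize() yields plain float data for any supported quant type
    arr = dequantize(t.data, t.tensor_type)
    # GGUF records dims in reverse order relative to the PyTorch convention
    arr = arr.reshape(tuple(reversed(t.shape.tolist())))
    state_dict[t.name] = torch.from_numpy(arr.copy())

torch.save(state_dict, "omnigen2_dequantized.pt")
```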

Owner

What you need to do is follow the description/steps in the Model card; then you can probably run the model without problems.
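
For completeness, once you have weights in a supported HuggingFace/PyTorch format, loading would look roughly like the sketch below. Only OmniGen2Pipeline.from_pretrained is confirmed by the code inspection above; the import path, model id, and call arguments are assumptions to verify against the OmniGen2 repo.

```python
import torch
# Import path is an assumption -- check the OmniGen2 repo for the real module:
from omnigen2.pipelines.omnigen2.pipeline_omnigen2 import OmniGen2Pipeline

pipe = OmniGen2Pipeline.from_pretrained(
    "OmniGen2/OmniGen2",          # HF model id (assumption)
    torch_dtype=torch.bfloat16,   # assumption: half precision to save VRAM
)
pipe.to("cuda")

# Diffusers-style text-to-image call; argument names are guesses,
# not a verified OmniGen2 API
result = pipe(prompt="a photo of a red bicycle", num_inference_steps=50)
result.images[0].save("out.png")
```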
