---
library_name: sana
tags:
- text-to-image
- Sana
- 1024px_based_image_size
- Multi-language
language:
- en
- zh
base_model:
- Efficient-Large-Model/Sana_600M_1024px_diffusers
pipeline_tag: text-to-image
---
# Note
- **Weakness in Complex Scene Creation:** Due to data limitations, our model has **limited** capabilities in generating complex scenes, text, and human hands.
- **Enhancing Capabilities**: The model's performance can be improved by **increasing the complexity and length of prompts**. Below are some examples of **prompts and samples**.
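As a rough illustration (these prompts are hypothetical examples for demonstration, not the original samples), expanding a short prompt with subject, setting, lighting, and style details gives the model more to work with:

```python
# Hypothetical prompts for illustration only; the original sample prompts/images are not reproduced here.
short_prompt = "a portrait of a woman"

# A longer, more descriptive prompt typically yields richer, more coherent results.
detailed_prompt = (
    "a photorealistic portrait of a young woman with freckles, soft window light, "
    "shallow depth of field, 85mm lens, natural skin texture, neutral background"
)
```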
### Model Description
- **Developed by:** NVIDIA, Sana
- **Model type:** Linear-Diffusion-Transformer-based text-to-image generative model
- **Model size:** 590M parameters
- **Model resolution:** This model is developed to generate 1024px-based images with multi-scale height and width.
- **License:** [NSCL v2-custom](./LICENSE.txt). Governing Terms: NVIDIA License. Additional Information: [Gemma Terms of Use | Google AI for Developers](https://ai.google.dev/gemma/terms) for Gemma-2-2B-IT, [Gemma Prohibited Use Policy | Google AI for Developers](https://ai.google.dev/gemma/prohibited_use_policy).
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
It is a Linear Diffusion Transformer that uses a fixed, pretrained text encoder ([Gemma2-2B-IT](https://huggingface.co/google/gemma-2-2b-it))
and a 32x spatially compressed latent feature encoder ([DC-AE](https://hanlab.mit.edu/projects/dc-ae)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/NVlabs/Sana) and the [Sana report on arXiv](https://arxiv.org/abs/2410.10629).
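As a quick way to confirm the architecture described above, the components of the `diffusers` pipeline can be inspected once it is loaded (a minimal sketch; the printed class names come from `diffusers`, the comments map them to the description above):

```python
import torch
from diffusers import SanaPAGPipeline

# Load the pipeline and print the classes backing each component
# (text encoder, latent autoencoder, and the diffusion transformer).
pipe = SanaPAGPipeline.from_pretrained(
    "kpsss34/SANA600.fp16_Realistic_SFW_V1",
    torch_dtype=torch.float16,
)

print(type(pipe.text_encoder).__name__)  # Gemma2-2B-IT text encoder
print(type(pipe.vae).__name__)           # DC-AE 32x spatially compressed autoencoder
print(type(pipe.transformer).__name__)   # Linear Diffusion Transformer backbone
```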
### Model Sources
For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/NVlabs/Sana),
which is suited for both training and inference and integrates advanced diffusion samplers such as Flow-DPM-Solver.
[MIT Han-Lab](https://nv-sana.mit.edu/) provides free Sana inference.
```python
# pip install git+https://github.com/huggingface/diffusers
# pip install transformers
import torch
from diffusers import SanaPAGPipeline

# Load the pipeline weights in fp16; the text encoder and DC-AE are cast to bf16 below.
pipe = SanaPAGPipeline.from_pretrained(
    "kpsss34/SANA600.fp16_Realistic_SFW_V1",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.text_encoder.to(torch.bfloat16)
pipe.vae.to(torch.bfloat16)

prompt = 'A cute 🐼 eating 🎋, ink drawing style'

# Generate a 1024x1024 image with PAG guidance and a fixed seed for reproducibility.
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=5.0,
    pag_scale=2.0,
    num_inference_steps=20,
    generator=torch.Generator(device="cuda").manual_seed(42),
)[0]

image[0].save('sana.png')
```
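If GPU memory is tight, a common `diffusers` option is model CPU offloading, used in place of moving the whole pipeline to CUDA (an optional sketch, not part of the original example):

```python
# Optional, lower-VRAM variant: let diffusers shuttle idle sub-models between CPU and GPU.
# Use this instead of pipe.to("cuda") above; inference is slower but peak VRAM is lower.
pipe.enable_model_cpu_offload()
```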