flux-lora-manhawa

- Prompt
- in s456-style; an elegant ice elf sitting at a snowy café, drinking a warm cup of coffee, blushing softly, cozy winter vibes,

- Prompt
- in s456-style; a full-body shot of a powerful, lean young hooded hunter with a light gray cape, wielding twin red swords, one in each hand. the background is set in a fiery barren land with smoke and ashes

- Prompt
- in s456-style; a moss-covered train station in the middle of a forest, where glowing fireflies float lazily in the air. a lone traveler with an umbrella waits beside an ancient vending machine, as a silver train with paper lanterns for lights slowly glides in without making a sound.

- Prompt
- in s456-style; a smiling little girl in a raincoat dances barefoot in a puddle as soft rain falls. The puddle reflects not the sky but a starry night full of constellations.

- Prompt
- in s456-style; a field of giant blooming eyeball-flowers under a blood-red sky, strange shadows moving in the periphery, a lone girl in a vintage dress holding a glowing lantern
Trigger words
Please use `in s456-style;` at the start of your prompt to trigger image generation in the manhwa style.
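For example, a minimal sketch of building a prompt with the trigger phrase (the subject description below is a made-up placeholder, not taken from the training data):

```python
# Build a prompt by prepending the trigger phrase to a subject description.
# The subject text is a placeholder chosen for illustration only.
trigger = "in s456-style;"
subject = "a quiet rooftop garden at dusk, paper lanterns swaying in the wind"
prompt = f"{trigger} {subject}"
print(prompt)  # -> "in s456-style; a quiet rooftop garden at dusk, ..."
```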
Blog
Check out our blog post on Medium
Inference
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'Rachit22/simpletuner-flux-manhawa'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
pipeline.load_lora_weights(adapter_id)
prompt = "in s456-style; a powerful male hunter with grey armor and glowing blue eyes, shadow summons rising behind him, in a dark dungeon filled with broken statues, high detail"
## Optional: quantise the model to save on VRAM.
## Note: the model was quantised during training, so it is recommended to do the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
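# Optional (not part of the original example): if VRAM is tight, you could skip
# the .to(...) call above and let diffusers offload components to the CPU
# between forward passes instead (requires the `accelerate` package):
# pipeline.enable_model_cpu_offload()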
model_output = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=1024,
height=1024,
guidance_scale=3.0,
).images[0]
model_output.save("output.png", format="PNG")
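To render several of the example prompts above in one go, here is a minimal batch-generation sketch that reuses the `pipeline` object loaded above with the same sampler settings; the output filenames are arbitrary placeholders:

```python
# Render a few of the example prompts and save each result to its own file.
example_prompts = [
    "in s456-style; an elegant ice elf sitting at a snowy café, drinking a warm cup of coffee, blushing softly, cozy winter vibes",
    "in s456-style; a field of giant blooming eyeball-flowers under a blood-red sky, strange shadows moving in the periphery, a lone girl in a vintage dress holding a glowing lantern",
]
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
for i, p in enumerate(example_prompts):
    image = pipeline(
        prompt=p,
        num_inference_steps=20,
        guidance_scale=3.0,
        width=1024,
        height=1024,
        generator=torch.Generator(device=device).manual_seed(42),
    ).images[0]
    image.save(f"output_{i}.png", format="PNG")
```

The seed is fixed at 42 as in the example above; vary it per prompt if you want different compositions.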
Co-Author: Riya Ranjan
Model tree for Rachit22/flux-lora-manhawa-style
Base model: black-forest-labs/FLUX.1-dev