model_card:
  model_id: Rupsa
  description: |
    Rupsa is a LoRA (Low-Rank Adaptation) model fine-tuned from the Flux.1 Dev base model,
    designed for NSFW text-to-image generation. The weights are stored in the .safetensors
    format for efficient and secure storage.
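Because the adapter ships as a single .safetensors file, its LoRA tensors can be inspected before loading. A minimal sketch using the safetensors library; the file path "rupsa.safetensors" is a placeholder, not a path defined by this card:

```python
from safetensors import safe_open

# Lazily open the LoRA checkpoint; "rupsa.safetensors" is a placeholder path.
with safe_open("rupsa.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        # LoRA weights typically appear as paired low-rank matrices
        # (e.g. *.lora_A.weight / *.lora_B.weight or lora_up / lora_down).
        print(name, tuple(tensor.shape), tensor.dtype)
```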

model_details:
  developed_by: [More Information Needed]
  funded_by: [More Information Needed]
  shared_by: [More Information Needed]
  model_type: LoRA (Low-Rank Adaptation) adapter for text-to-image generation
  languages: Not applicable
  license: Apache-2.0
  finetuned_from: Flux.1 Dev
  version: 1.0
  date: 2025-06-24

model_sources:
  repository: https://huggingface.co/rstudioModel/rupsa_model_ai_flux1d
  paper: None
  demo: [More Information Needed]

uses:
  direct_use: |
    The model can be used directly to generate NSFW images from text prompts by applying the
    LoRA weights to the Flux.1 Dev pipeline. Suitable for creative applications or prototyping.
  downstream_use: |
    The model can be further fine-tuned or integrated into creative platforms and design tools
    for NSFW content generation; a sketch of controlling the adapter's strength in such an
    integration follows this section.
  out_of_scope_use: |
    - Generating harmful, offensive, or misleading content.
    - Real-time applications without optimized hardware, due to potential latency.
    - Tasks outside the scope of the Flux.1 Dev base model's capabilities, such as text generation.
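When integrating the adapter downstream, it is often useful to control how strongly the LoRA influences generation. A minimal sketch, assuming a recent diffusers release with the PEFT backend installed; the weight path is a placeholder and the adapter name "rupsa" is arbitrary:

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model in bfloat16 and attach the LoRA under a named adapter.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/rupsa.safetensors", adapter_name="rupsa")

# Dial the adapter's influence up or down (1.0 = full strength).
pipe.set_adapters(["rupsa"], adapter_weights=[0.8])

pipe.to("cuda")
image = pipe("your prompt here").images[0]
image.save("output_scaled.png")
```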

bias_risks_limitations:
  bias: |
    The model may inherit biases from the Flux.1 Dev base model or from the fine-tuning dataset,
    potentially affecting output fairness or quality.
  risks: |
    Improper use could lead to inappropriate or offensive content being generated. Users must
    validate outputs for sensitive applications.
  limitations: |
    - Performance depends on prompt quality and relevance.
    - High computational requirements for inference (recommended: 8 GB+ VRAM).
    - Limited testing in edge cases or specific domains.
  recommendations: |
    Users should evaluate outputs for bias and appropriateness. For sensitive applications,
    implement additional filtering or validation (a sketch follows this section). More
    information is needed to provide specific mitigation strategies.
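As one example of the post-generation filtering recommended above, outputs can be screened with an image classifier before they are shown or stored. A minimal sketch only: the classifier checkpoint, labels, and threshold are placeholders chosen for illustration, not models or values referenced by this card:

```python
from transformers import pipeline

# Placeholder checkpoint: substitute an image-safety classifier of your choice.
classifier = pipeline("image-classification", model="path/to/your-safety-classifier")

def is_acceptable(image, blocked_labels=("unsafe",), threshold=0.5):
    """Return False if any blocked label scores above the threshold."""
    for prediction in classifier(image):
        if prediction["label"] in blocked_labels and prediction["score"] >= threshold:
            return False
    return True

# Usage: only keep outputs that pass the check.
# if is_acceptable(output):
#     output.save("output.png")
```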

how_to_get_started:
  code: |
    ```python
    from diffusers import DiffusionPipeline
    import torch

    # Load base model
    base_model = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev")

    # Load LoRA weights
    base_model.load_lora_weights("path/to/rupsa.safetensors")

    # Move to GPU if available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    base_model.to(device)

    # Example inference
    output = base_model("your prompt here").images[0]
    output.save("output.png")
    ```
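If VRAM is limited (the limitations above recommend 8 GB+), the pipeline can be loaded in bfloat16 and offloaded to CPU RAM between steps. A minimal sketch, assuming the accelerate package is installed and using the same placeholder weight path; actual memory savings depend on your hardware:

```python
import torch
from diffusers import DiffusionPipeline

# Load in bfloat16 to roughly halve memory relative to float32.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/rupsa.safetensors")

# Keep only the active sub-model on the GPU; the rest stays in CPU RAM.
pipe.enable_model_cpu_offload()

image = pipe("your prompt here").images[0]
image.save("output.png")
```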