
T5-Base AI Art Prompt Generator

A fine-tuned T5-base model for bidirectional AI art prompt transformation, trained on 48K+ high-quality prompts with advanced bias protection.

🚀 Quick Start

# Test the model interactively
python3 ../test_model.py --model fine_tuned_t5_base --interactive

# Run batch tests
python3 ../test_model.py --model fine_tuned_t5_base --batch

# Single transformations
python3 ../test_model.py --model fine_tuned_t5_base --elaborate "cat sitting"
python3 ../test_model.py --model fine_tuned_t5_base --simplify "detailed cat prompt..."
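The test_model.py CLI wraps a standard seq2seq generate call. A minimal programmatic sketch is below; note that the `elaborate:` / `simplify:` task prefixes and the `transform` helper are assumptions for illustration, not an interface confirmed by this card (check MODEL_CARD.md for the actual prompt format):

```python
def build_input(mode: str, prompt: str) -> str:
    """Prefix a prompt with a task tag (assumed convention, not confirmed by the card)."""
    if mode not in ("elaborate", "simplify"):
        raise ValueError(f"unknown mode: {mode}")
    return f"{mode}: {prompt}"


def transform(mode: str, prompt: str, model_dir: str = "fine_tuned_t5_base") -> str:
    """Run one transformation with the fine-tuned checkpoint."""
    # Imported lazily so build_input() stays usable without torch/transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_dir)
    model = T5ForConditionalGeneration.from_pretrained(model_dir)

    inputs = tokenizer(build_input(mode, prompt), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=96, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Usage mirrors the CLI examples above, e.g. `transform("elaborate", "cat sitting")`.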

📊 Model Stats

  • Parameters: 220M nominal (223M in the safetensors checkpoint, T5-base)
  • Training Samples: 48,034
  • Validation Loss: 0.4293
  • Training Epochs: 5
  • Bias Protection: ✅ Active

🎯 Capabilities

  • Simple → Elaborate: Transform basic descriptions into detailed art prompts
  • Elaborate → Simple: Extract core concepts from complex prompts
  • Multi-Platform: Trained on NightCafe, Civitai, and community data
  • Quality Filtered: Advanced filtering for high-quality outputs

πŸ“ Files

  • MODEL_CARD.md - Comprehensive model documentation
  • config.json - Model architecture configuration
  • model.safetensors - Model weights (850MB)
  • training_info.txt - Training metrics and parameters
  • *.json - Tokenizer and generation configurations

🔧 Requirements

  • Python 3.8+
  • Transformers 4.20.0+
  • PyTorch 1.10.0+
  • 2GB+ RAM (4GB GPU recommended)
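The version floors above can be captured in a pin file (the filename `requirements.txt` is a suggestion, not part of this repo's listed files):

```
transformers>=4.20.0
torch>=1.10.0
```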

📈 Performance

Compared to T5-small:

  • Quality: Significantly improved detail and coherence
  • Speed: ~2.3x slower inference
  • Memory: ~850MB GPU memory usage
  • Bias: Better diversity and reduced oversaturation

See MODEL_CARD.md for detailed performance analysis and examples.
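The ~850MB memory figure follows directly from the checkpoint stats: roughly 223M parameters stored as F32 (4 bytes each). A quick sanity check:

```python
params = 223_000_000            # parameter count reported for the checkpoint
bytes_per_param = 4             # F32 tensor type
total_mib = params * bytes_per_param / (1024 ** 2)
print(f"{total_mib:.0f} MiB")   # ≈ 851 MiB, matching the ~850MB weights file
```

Actual GPU usage during inference will be somewhat higher than the weights alone, since activations and the KV/decoder state also consume memory.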
