Bria AI model weights are open source for non-commercial use only, per the provided license.


BRIA 2.3 FAST: Text-to-Image Model for Commercial Licensing

Note: The Bria 3 generation is now available: Bria 3.1

Introducing BRIA 2.3 FAST, the LCM version of BRIA 2.3 and the best combination of quality and latency in the 2.X family. The model was trained exclusively on licensed data, uniquely combining technological innovation with ethical responsibility and legal security. To obtain legal liability coverage, install the Bria Agent.

Get Access

BRIA 2.3 FAST is available everywhere you build: as source code and weights, ComfyUI nodes, or API endpoints.

For Commercial Use

  • Purchase: to obtain a commercial license, simply click here.

For more information, please visit our website.

Join our Discord community for more information, tutorials, tools, and to connect with other users!

What's New

BRIA 2.3 FAST is a speedy version of BRIA 2.3 that provides an optimal balance between speed and accuracy. Engineered for efficiency, it takes only 1.64 seconds to generate an image on a standard NVIDIA A10 GPU, achieving excellent image quality with an 80% reduction in inference time.
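As a rough sanity check (an inference from the quoted figures, not an official BRIA benchmark), a 1.64-second latency that represents an 80% reduction implies the teacher model needs roughly 8.2 seconds per image on the same hardware:

```python
# Back-of-the-envelope check on the quoted speedup (illustrative only)
fast_latency = 1.64   # seconds per image on an NVIDIA A10, as stated in the card
reduction = 0.80      # quoted reduction in inference time

# If fast = teacher * (1 - reduction), then:
teacher_latency = fast_latency / (1 - reduction)
print(f"Implied teacher latency: {teacher_latency:.2f} s")        # ~8.20 s
print(f"Speedup factor: {teacher_latency / fast_latency:.1f}x")   # ~5.0x
```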

The model was distilled using the LCM technique and supports multiple aspect ratios, with a default resolution of 1024x1024. Like BRIA 2.3, it delivers improved realism and aesthetics.

Our evaluations show that the model achieves image quality comparable to its teacher, BRIA 2.3, and outperforms SDXL LCM. While SDXL Turbo is faster, our model produces significantly better human faces, as it supports higher resolutions. These assessments were conducted by measuring human preferences.

Click here for a demo

Key Features

  • Legally Compliant: Offers full legal liability coverage for copyright and privacy infringements. Thanks to training on 100% licensed data from leading data partners, we ensure the ethical use of content.

  • Patented Attribution Engine: Our attribution engine, powered by proprietary, patented algorithms, is how we compensate our data partners.

  • Enterprise-Ready: Specifically designed for business applications, Bria AI 2.3 delivers high-quality, compliant imagery for a variety of commercial needs.

  • Customizable Technology: Provides access to source code and weights for extensive customization, catering to specific business requirements.

Model Description

  • Developed by: BRIA AI
  • Model type: Text-to-Image model
  • License: BRIA 2.3 FAST Licensing terms & conditions.
  • Purchase is required to license and access the model.
  • Model Description: BRIA 2.3 Fast is an efficient text-to-image model trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage.
  • Resources for more information: BRIA AI

Code example using Diffusers

pip install diffusers

from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler
import torch

# Load the distilled LCM UNet and plug it into the BRIA 2.3 pipeline
unet = UNet2DConditionModel.from_pretrained("briaai/BRIA-2.3-FAST", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("briaai/BRIA-2.3-BETA", unet=unet, torch_dtype=torch.float16)

# Required for BRIA models (see the tips below)
pipe.force_zeros_for_empty_prompt = False

# Use the LCM scheduler to match the distillation setup
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background"

# 8 steps with guidance_scale=1.0 are the recommended settings for the Fast model
image = pipe(prompt, num_inference_steps=8, guidance_scale=1.0).images[0]

Some tips for using our text-to-image model at inference:

  1. You must set pipe.force_zeros_for_empty_prompt = False.
  2. Using a negative prompt is recommended.
  3. Multiple aspect ratios are supported, but the total resolution should be approximately 1024*1024 = 1M pixels, for example: (1024, 1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960, 1088).
  4. The Fast model works well with just 8 steps.
  5. For the Fast model, use a guidance_scale of 1.0 or 0.0; note that with these values the negative prompt is not relevant.
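The pixel-budget rule in tip 3 can be used to pick a resolution programmatically. The helper below is a hypothetical utility, not part of the diffusers API; the resolution list is taken from the tips above:

```python
# Supported resolutions from the card; each targets roughly 1024*1024 = 1M pixels
SUPPORTED_RESOLUTIONS = [
    (1024, 1024), (1280, 768), (1344, 768), (832, 1216),
    (1152, 832), (1216, 832), (960, 1088),
]

def closest_resolution(aspect_ratio: float) -> tuple:
    """Pick the supported (width, height) whose aspect ratio is nearest the request."""
    return min(SUPPORTED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - aspect_ratio))

# Every listed resolution stays within ~10% of the 1M-pixel budget
for w, h in SUPPORTED_RESOLUTIONS:
    assert abs(w * h - 1024 * 1024) / (1024 * 1024) < 0.10

print(closest_resolution(16 / 9))  # → (1344, 768)
```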