This repository provides a streamlined inference codebase for running text-to-image and image-to-image generation using our FLUX latent rectified flow transformers.
## Inference Partners
We are excited to collaborate with Replicate, FAL, Mystic, and Together, allowing you to sample our models through their platforms. Below are the relevant links:

- Replicate:
- FAL:
- Mystic:
- Together:
## Local Installation
To set up the models locally, run the following commands:

```shell
cd $HOME && git clone https://github.com/black-forest-labs/flux
cd $HOME/flux
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e ".[all]"
```
## Available Models
We provide four models with varying features:

- FLUX1.1 [pro]: API access only.
- FLUX.1 [pro]: API access only.
- FLUX.1 [dev]: a guidance-distilled variant.
- FLUX.1 [schnell]: a guidance- and step-distilled variant.
| Name | HuggingFace Repo | License | md5sum |
|------|------------------|---------|--------|
| FLUX.1 [schnell] | HuggingFace Link | Apache-2.0 | a9e1e277b9b16add186f38e3f5a34044 |
| FLUX.1 [dev] | HuggingFace Link | FLUX.1-dev Non-Commercial License | a6bd8c16dfc23db6aee2f63a2eba78c0 |
| FLUX.1 [pro] | Only available via API. | | |
| FLUX1.1 [pro] | Only available via API. | | |
The autoencoder weights are available under Apache-2.0 and can be accessed in the HuggingFace repositories above.
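If you download weights manually, you can check them against the md5sums in the table above. A minimal sketch using only Python's standard library (the file path and the commented-out comparison are illustrative placeholders, not part of the flux codebase):

```python
import hashlib


def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the md5 hex digest of a file, reading it in chunks
    so large checkpoint files do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Example: compare against the published checksum for FLUX.1 [schnell]
# expected = "a9e1e277b9b16add186f38e3f5a34044"
# assert md5sum("flux1-schnell.safetensors") == expected
```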
## Usage
The model weights will be automatically downloaded from HuggingFace when starting a demo. To use FLUX.1 [dev], make sure to log in to HuggingFace. If you download the weights manually, you can set their paths using environment variables:
```shell
export FLUX_SCHNELL=<path_to_flux_schnell_sft_file>
export FLUX_DEV=<path_to_flux_dev_sft_file>
export AE=<path_to_ae_sft_file>
```
To run interactive sampling:

```shell
python -m flux --name <name> --loop
```
For generating a single sample:

```shell
python -m flux --name <name> --height <height> --width <width> --prompt "<prompt>"
```
We also provide demos for an interactive experience:

- Streamlit (supports text-to-image and image-to-image):

  ```shell
  streamlit run demo_st.py
  ```

- Gradio (web-based interface):

  ```shell
  python demo_gr.py --name flux-schnell --device cuda
  ```
## Diffusers Integration
FLUX.1 [schnell] and FLUX.1 [dev] are compatible with the 🧨 diffusers library. To use them, install diffusers:

```shell
pip install git+https://github.com/huggingface/diffusers.git
```
Then, you can run the model using the `FluxPipeline`:
```python
import torch
from diffusers import FluxPipeline

model_id = "black-forest-labs/FLUX.1-schnell"
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # saves VRAM by offloading to CPU; remove if you have enough GPU memory

prompt = "A cat holding a sign that says hello world"
seed = 42
image = pipe(
    prompt,
    output_type="pil",
    num_inference_steps=4,  # [schnell] is step-distilled and needs only a few steps
    generator=torch.Generator("cpu").manual_seed(seed),
).images[0]
image.save("flux-schnell.png")
```
For more details, see the diffusers documentation.
## API Usage
Our API provides direct access to all FLUX models. Documentation can be found here. To use the API, register at api.bfl.ml and generate an API key.
You can use the API in Python:

```python
from flux.api import ImageRequest

request = ImageRequest("A beautiful beach", name="flux.1.1-pro")

# access the generated image
print(request.url)
request.save("outputs/api.jpg")
```
Or from the command line:

```shell
python -m flux.api --prompt="A beautiful beach" url
python -m flux.api --prompt="A beautiful beach" save outputs/api
python -m flux.api --prompt="A beautiful beach" image show
```
## Try FLUX on Poe
FLUX is also available on Poe, providing an easy-to-use platform for exploring its capabilities. Give it a try and experience the power of FLUX models directly from your browser.