Quick Guide: Running the FLUX.1 Schnell Model on Habana Gaudi
Step 1: Pull and run the Habana Gaudi PyTorch Docker image with the runtime flags needed for HPU access
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all --ipc=host --cap-add=sys_nice --ulimit memlock=-1:-1 --security-opt seccomp=unconfined \
-v /home/ubuntu/workspace:/workspace \
-v ~/.cache/huggingface:/root/.cache/huggingface \
vault.habana.ai/gaudi-docker/1.18.0/ubuntu24.04/habanalabs/pytorch-installer-2.4.0:latest /bin/bash
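Optional check: confirm the Gaudi devices are visible inside the container. This assumes the hl-smi utility is available in the image, as it typically is in Habana's Gaudi PyTorch containers; it lists the HPUs much like nvidia-smi lists GPUs.
hl-smi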
Step 2: Clone the optimum-habana fork (flux branch) inside the container
git clone https://github.com/dsocek/optimum-habana.git -b flux /workspace/optimum-habana
Step 3: Navigate to the optimum-habana repository
cd /workspace/optimum-habana
Step 4: Install the optimum-habana package
pip install .
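Optional check: verify that the package installed from the local clone. This is a quick sanity check, assuming the standard optimum.habana import path and the GaudiConfig class it exposes.
pip show optimum-habana
python3 -c "from optimum.habana import GaudiConfig; print('optimum-habana import OK')"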
Step 5: Navigate to the stable-diffusion example directory
cd examples/stable-diffusion/
Step 6: Install the required Python packages for the stable-diffusion example and pin diffusers to a compatible release
pip install -r requirements.txt
pip install diffusers==0.30.3
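Optional check: confirm the pinned diffusers version is the one in use, since the example script relies on APIs that can change between diffusers releases.
python3 -c "import diffusers; print(diffusers.__version__)"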
Step 7: Run the FLUX Schnell model
python3 text_to_image_generation.py \
--model_name_or_path black-forest-labs/FLUX.1-schnell \
--prompts "A cat holding a sign that says hello world" \
--num_images_per_prompt 10 \
--batch_size 1 \
--num_inference_steps 4 \
--image_save_dir /tmp/flux_1_images \
--scheduler flow_match_euler_discrete \
--use_habana \
--use_hpu_graphs \
--gaudi_config Habana/stable-diffusion \
--bf16
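The generated images are written to the directory passed via --image_save_dir. A minimal sketch for inspecting them afterward, assuming Pillow is available (it is pulled in with diffusers) and that the script saves the outputs as PNG files in /tmp/flux_1_images:
ls -lh /tmp/flux_1_images
python3 -c "from PIL import Image; import glob; [print(p, Image.open(p).size) for p in sorted(glob.glob('/tmp/flux_1_images/*.png'))]"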