MLX native implementations of state-of-the-art generative image models

Run the latest state-of-the-art generative image models locally on your Mac in native MLX!
MFLUX is a line-by-line MLX port of several state-of-the-art generative image models from the Huggingface Diffusers and Huggingface Transformers libraries. All models are implemented from scratch in MLX, using only tokenizers from the Huggingface Transformers library. MFLUX is purposefully kept minimal and explicit, @karpathy style.
If you haven't already, install uv, then run:
```sh
uv tool install --upgrade mflux
```
After installation, the following command shows all available MFLUX CLI commands:
```sh
uv tool list
```
To generate your first image using, for example, the z-image-turbo model, run:
```sh
mflux-generate-z-image-turbo \
  --prompt "A puffin standing on a cliff" \
  --width 1280 \
  --height 500 \
  --seed 42 \
  --steps 9 \
  -q 8
```

The first time you run this, the model weights are downloaded automatically, which can take some time. See the model section for the different options and features, and the common README for shared CLI patterns and examples.
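If you want to check what has been downloaded and how much disk space it uses, the weights land in the Hugging Face Hub client's cache (the path below is the Hub client's default, an assumption rather than something this README documents; it can be moved with `HF_HOME`):

```sh
# Assumed default Hugging Face cache location; override with HF_HOME
CACHE="${HF_HOME:-$HOME/.cache/huggingface}/hub"
du -sh "$CACHE" 2>/dev/null || echo "nothing downloaded yet to $CACHE"
```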
Create a standalone generate.py script with inline uv dependencies:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "mflux",
# ]
# ///
from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)
image = model.generate_image(
    prompt="A puffin standing on a cliff",
    seed=42,
    num_inference_steps=9,
    width=1280,
    height=500,
)
image.save("puffin.png")
```
Run it with:
```sh
uv run generate.py
```
For more Python API inspiration, look at the CLI entry points for the respective models.
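As one sketch of what that API allows, the calls from the script above can be reused to sweep several seeds in a single run; only `ZImageTurbo`, `generate_image`, and `image.save` are taken from the example above, so treat any usage beyond that as an assumption:

```python
# Seed sweep sketch, reusing the ZImageTurbo API from the script above.
from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)  # load (and quantize) the model once
for seed in (1, 2, 3):
    image = model.generate_image(
        prompt="A puffin standing on a cliff",
        seed=seed,
        num_inference_steps=9,
        width=1280,
        height=500,
    )
    # Save each variation under a distinct, seed-stamped filename
    image.save(f"puffin_seed{seed}.png")
```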
If you encounter a `ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available` error, you can install MFLUX with the `hf_transfer` package included:
```sh
uv tool install --upgrade mflux --with hf_transfer
```
This will enable faster model downloads from Hugging Face.
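Alternatively, if you would rather not install the extra package, you can switch off the fast-download path for the current shell session; this relies on the `HF_HUB_ENABLE_HF_TRANSFER` variable named in the error message, not on anything this README documents:

```sh
# Disable hf_transfer-based fast downloads for this shell session
export HF_HUB_ENABLE_HF_TRANSFER=0
```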
To install MFLUX against a specific Python version, pass `--python` to `uv tool install`:
```sh
uv tool install --python 3.13 mflux
```
MFLUX supports the following model families. They have different strengths and weaknesses; see each model’s README for full usage details.
| Model | Release date | Size | Type | Training | Description |
|---|---|---|---|---|---|
| Z-Image | Nov 2025 | 6B | Distilled & Base | Yes | Fast, small, very good quality and realism. |
| FLUX.2 | Jan 2026 | 4B & 9B | Distilled & Base | Yes | Fastest and smallest, with very good quality and edit capabilities. |
| FIBO | Oct 2025+ | 8B | Distilled & Base | No | Very good JSON-based prompt understanding. Has edit capabilities. |
| SeedVR2 | Jun 2025 | 3B & 7B | — | No | Best upscaling model. |
| Qwen Image | Aug 2025+ | 20B | Base | No | Large model (slower); strong prompt understanding and world knowledge. Has edit capabilities. |
| Depth Pro | Oct 2024 | — | — | No | Very fast and accurate depth estimation model from Apple. |
| FLUX.1 | Aug 2024 | 12B | Distilled & Base | No (legacy) | Legacy option with decent quality. Has edit capabilities with the 'Kontext' model and upscaling support via ControlNet. |
See the common README for detailed usage and examples, and use the model section above to browse specific models and capabilities.
> [!NOTE]
> As MFLUX supports a wide variety of CLI tools and options, the easiest way to navigate the CLI in 2026 is to use a coding agent (like Cursor, Claude Code, or similar). Ask questions like: “Can you help me generate an image using z-image?”
MFLUX would not be possible without the great work of:
This project is licensed under the MIT License.