Quick Facts
- Category: Cloud Computing
- Published: 2026-05-05 22:38:34
Introduction
Imagine having the power of a DALL-E-like image generator running entirely on your own machine—no cloud fees, no privacy worries, no annoying content filters. With Docker Model Runner and Open WebUI, this is not just a dream but a simple setup you can complete in minutes. This guide walks you through pulling an image generation model, connecting it to a polished chat interface, and generating images locally. You'll gain full control over your AI workflows while keeping your data private. Let's get started.

What You Need
- Docker Desktop (macOS or Windows) or Docker Engine (Linux) – latest version installed
- At least 8 GB of free RAM for a small image model; 16 GB or more recommended for better performance
- A GPU is optional but highly recommended: NVIDIA (CUDA) on Windows/Linux, Apple Silicon (MPS) on Mac, or CPU fallback (slower)
- Basic command-line familiarity – you should be comfortable running terminal commands
- Internet connection – needed only for initial model download (after that, everything runs offline)
To verify Docker is ready, run: docker model version. If it returns version info without errors, you're set.
How This All Connects: The Big Picture
Before diving into steps, understand the architecture: Docker Model Runner acts as a control plane that downloads image generation models (packaged in DDUF format), manages inference backends, and exposes a fully OpenAI-compatible API—including the critical POST /v1/images/generations endpoint. Open WebUI, a feature-rich chat interface, is pre-configured to talk to that endpoint. The result: you type a prompt in a beautiful chat window, and images appear as if by magic, all running locally.
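Because the API is OpenAI-compatible, anything that can speak that protocol can drive it – not just Open WebUI. As a minimal Python sketch of the request/response shape (the URL and model name here are assumptions; match them to the endpoint your own Model Runner reports):

```python
import base64
import json
from urllib import request

# Assumed local endpoint – check your Model Runner logs for the real host/port.
API_URL = "http://localhost:12434/engines/v1/images/generations"

def build_payload(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Assemble an OpenAI-style images/generations request body."""
    return {
        "model": "stable-diffusion",   # illustrative model name
        "prompt": prompt,
        "size": size,
        "n": n,
        "response_format": "b64_json",  # ask for base64 instead of URLs
    }

def save_images(response_body: dict, prefix: str = "out") -> list[str]:
    """Decode base64 image payloads from an OpenAI-style response into PNG files."""
    paths = []
    for i, item in enumerate(response_body.get("data", [])):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths

# Live call (requires Model Runner to be running):
# req = request.Request(API_URL,
#                       data=json.dumps(build_payload("a cyberpunk cat")).encode(),
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(save_images(json.load(resp)))
```

The same two helpers work unchanged against the real endpoint once it is up; only the URL needs to match your setup.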
Step-by-Step Guide
Step 1: Pull an Image Generation Model
Docker Model Runner uses the DDUF (Diffusers Unified Format) to package diffusion models as OCI artifacts on Docker Hub. This single-file format bundles the text encoder, VAE, UNet/DiT, and scheduler config into one portable artifact.
- Open a terminal and run:
docker model pull stable-diffusion
- Wait for the download to complete – the model size is around 7 GB, so grab a coffee.
- Confirm the model is ready by inspecting it:
docker model inspect stable-diffusion
You should see output similar to:
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
Tip: If you have limited disk space, you can specify a different model using docker model pull <model-name>. Check Docker Hub for available alternatives.
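Since docker model inspect emits JSON, the check is easy to script. A small Python sketch, with the sample embedded inline (field names are taken from the output above; in practice you would pipe the real command output in, e.g. via subprocess):

```python
import json

# Abridged sample of `docker model inspect stable-diffusion` output, from above.
SAMPLE = '''
{ "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": { "format": "diffusers", "size": "6.94GB",
              "diffusers": { "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf" } } }
'''

def summarize(inspect_json: str) -> dict:
    """Pull the fields you usually care about out of the inspect JSON."""
    info = json.loads(inspect_json)
    cfg = info["config"]
    return {
        "tag": info["tags"][0],
        "size": cfg["size"],
        "dduf_file": cfg["diffusers"]["dduf_file"],
    }

print(summarize(SAMPLE))
```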
Step 2: Launch Open WebUI
Here's the magic part: Docker Model Runner has a built-in launch command that automatically wires up Open WebUI against your local inference endpoint. No manual configuration needed.
- Run this single command:
docker model launch openwebui
- Wait for the container to start – you'll see logs indicating the web interface is ready at http://localhost:8080 (or a different port if 8080 is busy).
- Open your browser and navigate to that URL. You should see the Open WebUI login/registration page.
- Create an account (local, no email needed) and log in.
That's it! Open WebUI is now connected to Docker Model Runner's API. You can start a new chat and use the image generation feature by typing your prompt (e.g., "a dragon wearing a business suit in a corporate boardroom").
Step 3: Generate Your First Image
Once Open WebUI is running, image generation is as simple as typing a prompt.
- In the chat interface, select the image generation mode (usually a toggle or button in the input area).
- Enter a descriptive prompt – be creative! For example: "a cyberpunk cat riding a hoverboard through a neon-lit city, photorealistic".
- Adjust optional parameters like image size (e.g., 1024x1024), number of images, negative prompt, etc., if the interface exposes them.
- Click the generate button and watch the magic happen. The model runs locally, so no data leaves your machine.
- View and download the generated images directly from the chat. Each image is stored locally in your Docker volume.
Note: The first generation may be slower because the model loads into memory. Subsequent generations will be faster.

Step 4: Manage Your Models and Resources
You can pull additional models and switch between them easily.
- List all locally available models:
docker model list
- Switch to another model (e.g., a faster version or a different style) by using the Open WebUI settings or by restarting the launch command with a different model name:
docker model launch openwebui --model <model-name>
(check the CLI documentation for exact syntax).
- Remove an unused model to free up disk space:
docker model rm <model-name>
- Monitor GPU usage with nvidia-smi (Linux/Windows) or powermetrics (macOS) to ensure your system isn't overwhelmed.
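If you prefer scripting to the UI, the OpenAI-compatible side of Model Runner should also answer a models-listing request in the standard GET /v1/models shape. A sketch (the URL is an assumption – match it to the endpoint your launch logs report):

```python
import json
from urllib import request

# Assumed local endpoint – adjust to your own Model Runner setup.
MODELS_URL = "http://localhost:12434/engines/v1/models"

def model_ids(models_json: str) -> list[str]:
    """Extract model ids from an OpenAI-style GET /v1/models response body."""
    return [m["id"] for m in json.loads(models_json)["data"]]

# Live check (requires Model Runner to be running):
# with request.urlopen(MODELS_URL) as resp:
#     print(model_ids(resp.read().decode()))
```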
Step 5: Customize the Experience
Open WebUI offers many settings to tailor the interface and generation behaviour.
- Change the theme under Settings – Appearance. Choose between light, dark, or any custom color.
- Enable conversation history to revisit past prompts and generations.
- Set default generation parameters (for example, always using a negative prompt of “ugly, blurry, low quality”).
- Integrate with other Docker services – since Open WebUI runs in a container, you can add it to a Docker Compose stack with other AI tools.
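As a sketch of that last point, a minimal Compose fragment might look like the following – the service name, image tag, and volume path are illustrative, so check the official Open WebUI documentation for the canonical version:

```yaml
# Hypothetical docker-compose.yaml sketch – names and paths are illustrative.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    volumes:
      - open-webui:/app/backend/data   # persist chats and generated images
    restart: unless-stopped
volumes:
  open-webui:
```

A named volume like this also addresses the backup tip below: your generations survive container restarts and can be bind-mounted to a host directory instead if you want them directly on disk.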
Tips & Troubleshooting
- Performance: If generations are slow, close other memory-hungry apps. Use docker stats to see container resource usage.
- GPU not detected? Ensure Docker is configured to use your GPU. For NVIDIA, install the NVIDIA Container Toolkit. For Apple Silicon, Docker Desktop should automatically use Metal.
- Storage space: DDUF models can be several GB. Regularly clean unused models with docker model prune (removes all cached models you no longer need).
- Safety filters: By default, some models come with built-in NSFW filters. You can disable them (at your own risk) by passing environment variables during launch – check the model's documentation.
- Backup your creations: Generated images are stored inside the Open WebUI container volume. To persist them outside, configure a bind mount or copy them out manually.
- Update: Keep Docker Model Runner and Open WebUI up to date with docker model update and docker pull ghcr.io/open-webui/open-webui:main (or use the launch command, which always pulls the latest).
- Community models: Explore custom models on Docker Hub or create your own DDUF packages from Hugging Face checkpoints.
- Remember: All processing is local – no internet required after the initial download. Perfect for sensitive projects or offline tinkering.
Enjoy your private, uncensored, always-available image generator. You've just built your own AI art studio, and it's all yours.