Run Your Own Private AI Image Generator on Your Machine with Docker and Open WebUI

From Moocchen, the free encyclopedia of technology

Introduction

Imagine needing a few custom images for a project. Instead of uploading prompts to a cloud service and worrying about privacy, credit limits, or content filters that block a perfectly reasonable request for a dragon in a business suit, you can now generate them right on your own computer, complete with a sleek chat interface, thanks to Docker Model Runner. Combine it with Open WebUI and you get a fully local, private, and unlimited image generation setup that rivals cloud-based tools like DALL·E, no subscription required.

[Image source: www.docker.com]

In this guide, we’ll walk you through pulling an image generation model, launching Open WebUI, and creating images from a chat window — all running on your own hardware.

What You’ll Need

  • Docker Desktop (macOS or Windows) or Docker Engine (Linux)
  • Around 8 GB of free RAM for a smaller model (more RAM recommended)
  • A GPU is optional but highly recommended: NVIDIA (CUDA), Apple Silicon (MPS), or CPU fallback

If you can run docker model version without errors, you’re ready to proceed.
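If you'd rather script this readiness check (for example, as part of a setup script), here's a minimal Python sketch. The exact output of docker model version varies by release, so it only tests for the command's presence and exit code:

```python
import shutil
import subprocess

def model_runner_available():
    """Return True if the docker CLI exists and `docker model version` succeeds."""
    if shutil.which("docker") is None:
        return False
    result = subprocess.run(
        ["docker", "model", "version"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

ready = model_runner_available()
print("Docker Model Runner ready:", ready)
```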

How Docker Model Runner Works with Open WebUI

Before diving into the steps, here’s the high-level architecture:

Docker Model Runner acts as a control plane. It downloads the model, manages the inference backend lifecycle, and exposes a 100% OpenAI-compatible API — including the POST /v1/images/generations endpoint that Open WebUI already knows how to call. This means you can use a familiar chat interface while everything runs locally.
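Because the API is OpenAI-compatible, you can also hit the endpoint directly, without Open WebUI in the middle. The sketch below builds the same request body Open WebUI sends; note that the URL (host, port, and base path) is an assumption that depends on how your Docker Model Runner installation exposes its TCP endpoint, so adjust it before sending:

```python
import json
import urllib.request

# OpenAI-compatible request body for POST /v1/images/generations.
# The model name matches the one pulled later in this guide.
body = {
    "model": "stable-diffusion",
    "prompt": "a dragon wearing a business suit, digital art style",
    "n": 1,
    "size": "512x512",
}

# Hypothetical local endpoint; check your own setup for the real address.
req = urllib.request.Request(
    "http://localhost:12434/engines/v1/images/generations",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment once the server is running
```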

Step 1: Pull an Image Generation Model

Docker Model Runner uses a compact packaging format called DDUF (a recursive acronym: DDUF Diffusion Unified Format) to distribute image generation models through Docker Hub, just like any other OCI artifact. This single-file format bundles all diffusion model components, including the text encoder, VAE, UNet or DiT backbone, and scheduler configuration, into one portable artifact.

Pull a model with the following command:

docker model pull stable-diffusion

Once downloaded, confirm it’s ready:

docker model inspect stable-diffusion

This will return details like the model ID, tags, creation date, and configuration — including the DDUF file inside. The model is stored locally as a DDUF file that Docker Model Runner knows how to unpack at runtime.

Step 2: Launch Open WebUI

Here’s the magic part. Docker Model Runner includes a built-in launch command that automatically wires up Open WebUI against your local inference endpoint. Just run:

docker model launch openwebui

That single command sets up everything — the model serving endpoint, the Open WebUI container, and the network connection between them. You don’t need to manually configure any environment variables or ports.

[Image source: www.docker.com]

After a few seconds, Open WebUI will be accessible at http://localhost:8080 (or a similar address). Open it in your browser and you’ll see a clean chat interface ready for image generation.

Step 3: Generate Your First Image

In the Open WebUI chat, simply type a prompt like “a dragon wearing a business suit, digital art style” and hit send. The request goes to the locally running Docker Model Runner, which processes it through the Stable Diffusion model and returns an image right in the chat window. No credits, no filters, no cloud.

You can adjust parameters like image size, number of images, or negative prompts by modifying the API call configuration (advanced users can customize the default settings in Open WebUI’s admin panel).
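If you call the API directly instead of using the chat window, responses follow the OpenAI images schema, with each generated image typically returned as base64 (some servers return a url field instead). Here's a sketch of decoding one image to disk; a dummy payload stands in for a real server reply:

```python
import base64
import json

# Dummy stand-in for a real API response; an actual reply carries a full PNG.
sample_response = json.dumps({
    "created": 1700000000,
    "data": [{"b64_json": base64.b64encode(b"\x89PNG...").decode()}],
})

payload = json.loads(sample_response)
image_bytes = base64.b64decode(payload["data"][0]["b64_json"])
with open("dragon.png", "wb") as f:  # writes whatever bytes the server returned
    f.write(image_bytes)
```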

Tips and Troubleshooting

  • GPU performance: If you have a compatible GPU, make sure Docker is configured to use it (e.g., with NVIDIA Container Toolkit or Apple Silicon’s Metal Performance Shaders). Without a GPU, generation will be slower but still functional.
  • RAM usage: The stable-diffusion model uses about 6.94 GB of disk space and requires several GB of RAM at runtime. Close other heavy applications to avoid out-of-memory issues.
  • Multiple models: You can pull additional models (e.g., for different styles or resolutions) and switch between them by modifying the launch command or default model in Open WebUI.
  • Storage: Downloaded models are stored in Docker’s data directory. If you need to free up space, remove unused models with docker model rm.

Conclusion

With Docker Model Runner and Open WebUI, you now have a fully private, local alternative to cloud-based image generation services. No subscription fees, no data leaving your machine, and no arbitrary content restrictions. You can generate as many images as you want, for any purpose, right from a chat interface. Give it a try — your own personal DALL·E is just a few commands away.