
Docker image for ComfyUI

How it works

  1. On first start, the entrypoint script will copy the ComfyUI instance bundled in the image to a local storage directory, and run the copied local instance.

  2. The entire ComfyUI installation is stored in a local folder (./storage/ComfyUI).

  3. If you already have an existing ComfyUI bundle, place it in the directory above, and the entrypoint script will skip the copy step.

  4. Use ComfyUI-Manager (in the ComfyUI web UI) to update ComfyUI, manage custom nodes, and download models.

  5. Models and user files are mounted separately (storage-models and storage-user).

    • These mounts are optional; if not provided, all files are stored in storage. This design keeps backward compatibility with previous ComfyUI-Docker images.
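Under the defaults above, the on-disk layout after first start looks roughly like this (a sketch; exact contents depend on your ComfyUI version):

```
storage/
└── ComfyUI/            # copied from the image on first start
storage-models/
├── models/             # mounted at /root/ComfyUI/models
├── hf-hub/             # HuggingFace Hub cache
└── torch-hub/          # torch.hub cache
storage-user/
├── input/
├── output/
└── workflows/
```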

Prerequisites

  • NVIDIA GPU with latest driver

    • Either Game or Studio edition will work.

    • You don’t need to install drivers inside containers; just make sure the driver works on your host OS.

  • Docker or Podman installed

    • Linux users may need to install the NVIDIA Container Toolkit (on the host OS only); it enables GPU access for containers.

    • Windows users can use Docker Desktop with WSL2 enabled, or Podman Desktop with WSL2 and GPU support enabled.

    • WSL2 users: note that NTFS ⇆ ext4 file-system translation is very slow (often below 100 MiB/s), so consider storing ComfyUI in a folder inside WSL (or in a Docker volume).
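Before running the image, you can verify that containers can actually see the GPU with a quick smoke test (the CUDA image tag below is just an example; any recent tag works):

```shell
# Should print your GPU name and driver version if passthrough works
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Podman equivalent (uses CDI device naming)
podman run --rm --device nvidia.com/gpu=all \
  docker.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this fails on Linux, the NVIDIA Container Toolkit is usually the missing piece.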

Usage

Run with Docker

mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

docker run -it --rm \
  --name comfyui-cu130 \
  --runtime nvidia \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast" \
  yanwk/comfyui-boot:cu130-slim
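If you prefer Docker Compose, the same run command can be sketched as a compose.yaml (an untested sketch; adjust paths and the image tag to your setup):

```yaml
services:
  comfyui:
    image: yanwk/comfyui-boot:cu130-slim
    container_name: comfyui-cu130
    ports:
      - "8188:8188"
    environment:
      CLI_ARGS: "--fast"
    volumes:
      - ./storage:/root
      - ./storage-models/models:/root/ComfyUI/models
      - ./storage-models/hf-hub:/root/.cache/huggingface/hub
      - ./storage-models/torch-hub:/root/.cache/torch/hub
      - ./storage-user/input:/root/ComfyUI/input
      - ./storage-user/output:/root/ComfyUI/output
      - ./storage-user/workflows:/root/ComfyUI/user/default/workflows
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with docker compose up; the mkdir step above still applies.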

Run with Podman

mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

podman run -it --rm \
  --name comfyui-cu130 \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast" \
  docker.io/yanwk/comfyui-boot:cu130-slim

Note the CLI_ARGS variable; see the CLI_ARGS Reference below for details.

Once the app is loaded, visit http://localhost:8188/
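You can also confirm the server is reachable from the command line (requires curl; the container must already be running):

```shell
# Expect an HTTP 200 status line once ComfyUI has finished loading
curl -sI http://localhost:8188/ | head -n 1
```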

Tips and Tricks

Pre-start scripts

The entrypoint script creates two example user script files on first start:

./storage/user-scripts/set-proxy.sh
./storage/user-scripts/pre-start.sh

The set-proxy.sh script sets up a proxy; it runs before everything else.

The pre-start.sh script is for user operations; it runs just before ComfyUI starts.

Note
To speed up startup, the entrypoint script no longer downloads anything from the Internet; set-proxy.sh is retained for backward compatibility.
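As a sketch, a pre-start.sh might look like the following (the directory and package names are purely illustrative; inside the container, $HOME resolves to /root, so the paths land in the storage mount described above):

```shell
#!/usr/bin/env bash
# Hypothetical pre-start.sh sketch -- runs just before ComfyUI starts.
set -eu

echo "[pre-start] running user setup"

# Ensure an extra model sub-directory exists (name is illustrative)
mkdir -p "$HOME/ComfyUI/models/upscale_models"

# Install a Python package a custom node might need (uncomment to use):
# pip install --no-cache-dir opencv-python-headless
```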

Major Update

You can perform a major update (e.g. to a new PyTorch version) by swapping the Docker image:

docker pull yanwk/comfyui-boot:cu130-slim

# remove the container if not using an ephemeral one
docker rm comfyui-cu130

# Then 'docker run' again

CLI_ARGS Reference

args description

--fast

Enable experimental optimizations (e.g. float8_e4m3fn matrix multiplication on Ada Lovelace and later GPUs). May lower image quality; turn it off if you prefer stability over speed.

--disable-smart-memory

Force ComfyUI to offload models from VRAM to RAM more frequently. Slows performance but reduces memory leaks.

--lowvram

Force ComfyUI to split the model (UNET) into parts to use less VRAM, at the cost of speed. Use only if your GPU has less than 6 GB of VRAM.

--novram

Use system RAM only, no VRAM at all. Very slow.

--cpu

Run on CPU. Very slow. Used for testing purposes.

--disable-xformers

Disable xFormers. xFormers is not installed in this image by default.

--use-pytorch-cross-attention

Use PyTorch’s built-in cross-attention. Disables xFormers, FlashAttention, and SageAttention.

More CLI_ARGS are available in ComfyUI’s cli_args.py.

Environment Variables Reference

Variable Example Value Memo

HTTP_PROXY
HTTPS_PROXY

http://localhost:1081
http://localhost:1081

Set HTTP proxy. Works the same as set-proxy.sh.

PIP_INDEX_URL

'https://pypi.org/simple'

Set mirror site for Python Package Index.

HF_ENDPOINT

'https://huggingface.co'

Set mirror site for HuggingFace Hub.

HF_TOKEN

'hf_your_token'

Set HuggingFace Access Token. More info

HF_XET_HIGH_PERFORMANCE

1

Enable HuggingFace Hub’s high-performance mode. Only makes sense if you have a >5 Gbps and VERY STABLE connection (e.g. a cloud server). More info
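These variables are passed with -e, the same way CLI_ARGS is set above. For example, to route downloads through a local proxy and authenticate to HuggingFace (proxy address and token are placeholders; volume mounts omitted for brevity, see Usage above):

```shell
docker run -it --rm \
  --name comfyui-cu130 \
  --gpus all \
  -p 8188:8188 \
  -e CLI_ARGS="--fast" \
  -e HTTP_PROXY="http://localhost:1081" \
  -e HTTPS_PROXY="http://localhost:1081" \
  -e HF_TOKEN="hf_your_token" \
  yanwk/comfyui-boot:cu130-slim
```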