- On first start, the entrypoint script copies the ComfyUI instance bundled in the image to a local storage directory and runs that local copy.
- The whole ComfyUI install lives in a local folder (`./storage/ComfyUI`).
- If you already have an existing ComfyUI bundle, place it in the directory above and the entrypoint script will skip the copy step.
- Use ComfyUI-Manager (in the ComfyUI web page) to update ComfyUI, manage custom nodes, and download models.
- Models and user files are mounted separately (`storage-models` and `storage-user`).
- These mounts are optional; if not provided, all files are stored in `storage`. This is designed to be backward-compatible with previous versions of ComfyUI-Docker images.
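The copy-or-skip behavior can be sketched roughly as follows. This is an assumed reconstruction for illustration only; the real entrypoint script's checks may differ:

```shell
# Rough sketch of the entrypoint's first-start decision (assumed logic):
mkdir -p ./storage
if [ -d ./storage/ComfyUI ]; then
    echo "existing ComfyUI found - copy step will be skipped"
else
    echo "no local ComfyUI yet - bundled instance will be copied here"
fi
```

In other words, pre-placing your own bundle at `./storage/ComfyUI` before the first run is all it takes to reuse an existing install.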
- NVIDIA GPU with the latest driver
  - Either Game Ready or Studio edition will work.
  - You don't need to install drivers inside containers; just make sure the driver works on your host OS.
- Docker/Podman installed
  - Linux users may need to install the NVIDIA Container Toolkit (on the host OS only); it enables GPU access for containers.
  - Windows users can use Docker Desktop with WSL2 enabled, or Podman Desktop with WSL2 and GPU enabled.
  - WSL2 users, please note: NTFS ⇆ ext4 file translation is very slow (down to <100 MiB/s), so you may want to store ComfyUI in an in-WSL folder (or a Docker volume).
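Before pulling the ComfyUI image, you can verify that containers actually see your GPU. The CUDA image tag below is only an example; substitute any recent tag from Docker Hub:

```shell
# Docker: should print your GPU name and driver version (image tag is an example)
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi

# Podman (CDI device from NVIDIA Container Toolkit): same check
podman run --rm --device nvidia.com/gpu=all \
  docker.io/nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` fails here, fix the host driver / Container Toolkit setup first; nothing inside the ComfyUI container will help.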
-
With Docker:

```shell
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

docker run -it --rm \
  --name comfyui-cu128 \
  --runtime nvidia \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast --disable-xformers" \
  yanwk/comfyui-boot:cu128-slim
```

With Podman:

```shell
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

podman run -it --rm \
  --name comfyui-cu128 \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast --disable-xformers" \
  docker.io/yanwk/comfyui-boot:cu128-slim
```

Note the `CLI_ARGS`; see the CLI_ARGS Reference below for details.
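If you prefer Docker Compose, the Docker command above can be translated into a `compose.yaml` roughly as follows. This is an untested sketch of the same flags; the GPU `deploy` stanza requires Docker Compose v2 with the NVIDIA runtime configured:

```yaml
services:
  comfyui:
    image: yanwk/comfyui-boot:cu128-slim
    container_name: comfyui-cu128
    ports:
      - "8188:8188"
    environment:
      CLI_ARGS: "--fast --disable-xformers"
    volumes:
      - ./storage:/root
      - ./storage-models/models:/root/ComfyUI/models
      - ./storage-models/hf-hub:/root/.cache/huggingface/hub
      - ./storage-models/torch-hub:/root/.cache/torch/hub
      - ./storage-user/input:/root/ComfyUI/input
      - ./storage-user/output:/root/ComfyUI/output
      - ./storage-user/workflows:/root/ComfyUI/user/default/workflows
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Run with `docker compose up` from the directory containing the file; the host folders are created relative to it.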
Once the app has loaded, visit http://localhost:8188/

On first start, the entrypoint script creates two example user script files:

- `./storage/user-scripts/set-proxy.sh`
- `./storage/user-scripts/pre-start.sh`

`set-proxy.sh` is for setting up a proxy; it runs before everything else.
`pre-start.sh` is for user operations; it runs just before ComfyUI starts.
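As an illustration, a `pre-start.sh` might prepare folders or do other setup before launch. Everything below is an example, not a default shipped by the image:

```shell
#!/bin/sh
# Illustrative ./storage/user-scripts/pre-start.sh - contents are examples only.
set -eu

# Anything placed here runs just before ComfyUI starts, e.g. making sure
# a custom-nodes folder exists:
mkdir -p "${HOME}/ComfyUI/custom_nodes"
echo "pre-start: done"
```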
> **Note:** To speed up startup, the entrypoint script no longer downloads anything from the Internet. `set-proxy.sh` is retained for backward compatibility.
| args | description |
|---|---|
| `--disable-xformers` | Disable xFormers. Enabling xFormers can cause issues on Blackwell GPUs, but may be required for some video workflows (e.g. SVD). |
| `--use-pytorch-cross-attention` | Use PyTorch's built-in cross-attention. Works the same as … |
| `--fast` | Enable experimental optimizations (e.g. `float8_e4m3fn` matrix multiplication on Ada Lovelace and later GPUs). Might lower image quality. |
| `--disable-smart-memory` | Force ComfyUI to offload models from VRAM to RAM more frequently. Slows performance but reduces memory leaks. |
| `--lowvram` | Force ComfyUI to split the model (UNet) into parts to use less VRAM, at the cost of speed. Use only if your GPU has less than 6 GB of VRAM. |
| `--novram` | Use system RAM only, no VRAM at all. Very slow. |
| `--cpu` | Run on the CPU. Very slow. For testing purposes. |

More CLI_ARGS are available in ComfyUI's `cli_args.py`.
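For example, to trade speed for lower VRAM use on a small GPU, you might override `CLI_ARGS` in the run command. A minimal sketch, with the model/user volume mounts trimmed for brevity:

```shell
docker run -it --rm --gpus all -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--lowvram --disable-smart-memory" \
  yanwk/comfyui-boot:cu128-slim
```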
| Variable | Example Value | Memo |
|---|---|---|
| `HTTP_PROXY` | | Set HTTP proxy. Works the same as … |
| `PIP_INDEX_URL` | | Set a mirror site for the Python Package Index. |
| `HF_ENDPOINT` | | Set a mirror site for HuggingFace Hub. |
| `HF_TOKEN` | `'hf_your_token'` | Set a HuggingFace access token. More info |
| `HF_XET_HIGH_PERFORMANCE` | `1` | Enable HuggingFace Hub's high-performance mode. Only makes sense with a >5 Gbps and VERY STABLE connection (e.g. a cloud server). More info |
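These variables are passed into the container the same way as `CLI_ARGS`, via `-e` flags. The values below are placeholders; mounts are trimmed for brevity:

```shell
docker run -it --rm --gpus all -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e HF_TOKEN="hf_your_token" \
  -e PIP_INDEX_URL="https://pypi.org/simple" \
  yanwk/comfyui-boot:cu128-slim
```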