MEGAPAK uses the same base mechanism as the slim images. The key differences are:

- Includes 40+ custom nodes. See the full list.
- Includes the CUDA development kit for compiling PyTorch C++ extensions, `.cu` files, etc.
- Includes performance optimization libraries such as Nunchaku and SageAttention (powerful, but they may have compatibility issues).
- Includes additional tools and dependencies.
- As a result, the image is larger and receives updates later than the slim images, especially during PyTorch version changes, since some packages are specific to particular PyTorch versions.

Make sure you can run the slim image successfully before attempting the megapak image. The prerequisites/setup sections are omitted from this document.
```shell
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows
```
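The same directory layout can be created more compactly with bash brace expansion and then verified in one pass. This is just an equivalent sketch of the `mkdir` above:

```shell
# Equivalent to the mkdir above, using bash brace expansion,
# then confirm that every mount point exists.
mkdir -p storage storage-models/{models,hf-hub,torch-hub} storage-user/{input,output,workflows}
for d in storage storage-models/models storage-models/hf-hub storage-models/torch-hub \
         storage-user/input storage-user/output storage-user/workflows; do
  [ -d "$d" ] && echo "ok: $d"
done
```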
```shell
docker run -it --rm \
  --name comfyui-megapak \
  --runtime nvidia \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast --disable-xformers" \
  yanwk/comfyui-boot:cu128-megapak
```

Note the `--fast` in `CLI_ARGS`. Remove it if you encounter quality issues.
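Before pulling the multi-gigabyte megapak image, it can be worth confirming that GPU passthrough works at all. A minimal sanity check, assuming the NVIDIA Container Toolkit is installed (the CUDA base image tag here is only an example):

```shell
# Should print your GPU table; if this fails, fix the container toolkit
# setup before debugging ComfyUI itself.
docker run --rm --runtime nvidia --gpus all \
  nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi
```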
```shell
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows
```
```shell
podman run -it --rm \
  --name comfyui-megapak \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e CLI_ARGS="--fast --disable-xformers" \
  docker.io/yanwk/comfyui-boot:cu128-megapak
```

Note the `--fast` in `CLI_ARGS`. Remove it if you encounter quality issues.
| args | description |
|---|---|
| `--disable-xformers` | Disable xFormers. Enabling xFormers can cause issues on Blackwell GPUs, but it may be required for some video workflows (e.g. SVD). |
| `--use-pytorch-cross-attention` | Use PyTorch's built-in cross-attention. Disables xFormers, FlashAttention and SageAttention. |
| `--fast` | Enable experimental optimizations (e.g. `float8_e4m3fn` matrix multiplication on Ada Lovelace and later GPUs). Might lower image quality. |
| `--disable-smart-memory` | Force ComfyUI to offload models from VRAM to RAM more frequently. Slows performance but reduces memory leaks. |
| `--lowvram` | Force ComfyUI to split the model (UNET) into parts to use less VRAM, at the cost of speed. Use only if your GPU has less than 6 GB of VRAM. |
| `--novram` | Use system RAM only, no VRAM at all. Very slow. |
| `--cpu` | Run on CPU. Very slow. Used for testing purposes. |

More CLI_ARGS are available in ComfyUI's cli_args.py.
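`CLI_ARGS` is passed into the container as a single string, and multiple flags are simply space-separated; the startup script presumably relies on ordinary shell word splitting to turn it into separate arguments for ComfyUI's `main.py`. A small sketch of that splitting:

```shell
# One string holds all flags; unquoted expansion splits it on whitespace.
CLI_ARGS="--fast --disable-xformers"
set -- $CLI_ARGS   # deliberately unquoted: word splitting is the point
echo "argc=$#"
echo "first=$1 second=$2"
```

If `--fast` causes quality problems, dropping it from the string is enough; no other change to the `docker run` command is needed.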
| Variable | Example Value | Memo |
|---|---|---|
| `HTTP_PROXY` | | Set HTTP proxy. Works the same as |
| `PIP_INDEX_URL` | | Set mirror site for the Python Package Index. |
| `HF_ENDPOINT` | | Set mirror site for HuggingFace Hub. |
| `HF_TOKEN` | `'hf_your_token'` | Set HuggingFace Access Token. More info |
| `HF_XET_HIGH_PERFORMANCE` | `1` | Enable HuggingFace Hub's high-performance mode. Only makes sense if you have a >5 Gbps and VERY STABLE connection (e.g. a cloud server). More info |
| `TORCH_CUDA_ARCH_LIST` | `7.5` | Build target for PyTorch and its extensions. For most users no setup is needed, as it is selected automatically on Linux. When needed, set just one build target matching your GPU. More info |
| `CMAKE_ARGS` | `'-DBUILD_opencv_world=ON -DWITH_CUDA=ON -DCUDA_FAST_MATH=ON -DWITH_CUBLAS=ON -DWITH_NVCUVID=ON'` | Build options for CMake projects using CUDA. |
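These are plain environment variables inside the container, passed with `-e` just like `CLI_ARGS`. As a quick illustration (the values below are placeholders, not recommendations), you can print what the container would see:

```shell
# Placeholder values for illustration only; unset variables print empty.
export TORCH_CUDA_ARCH_LIST="7.5"
export HF_XET_HIGH_PERFORMANCE=1
for v in HTTP_PROXY PIP_INDEX_URL HF_ENDPOINT HF_TOKEN \
         HF_XET_HIGH_PERFORMANCE TORCH_CUDA_ARCH_LIST; do
  printf '%s=%s\n' "$v" "$(printenv "$v")"
done
```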