6 changes: 5 additions & 1 deletion docs/getting_started/installation/gpu.md
@@ -37,4 +37,8 @@ vLLM-Omni is a Python library that supports the following GPU variants. The libr
## Set up using Docker

### Pre-built images
To be released... Stay tuned!

=== "NVIDIA CUDA"

--8<-- "docs/getting_started/installation/gpu/cuda.inc.md:pre-built-images"

19 changes: 19 additions & 0 deletions docs/getting_started/installation/gpu/cuda.inc.md
@@ -73,3 +73,22 @@ uv pip install --no-build-isolation --editable .
</details>

# --8<-- [end:build-wheel-from-source]


# --8<-- [start:pre-built-images]

The Docker image is available at [vllm-omni_v0.11.0rc1](#link-to-be-added).

```bash
docker load -i vllm-omni_v0.11.0rc1.tar.gz
docker run -itd \
    --name vllm-omni \
    --shm-size=64g \
    --privileged=true \
    --restart=always \
    --gpus all \
    --net=host \
    vllm-omni:v0.11.0rc1
docker exec -it vllm-omni bash
```
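The tag passed to `docker run` must match the tag of the image loaded by `docker load`, or the run step fails with an "image not found" error. One way to keep the two steps in sync is to parameterize the tag in a single variable; this is an illustrative sketch (the `TAG` variable and the reduced flag set are not part of the official instructions):

```shell
# Define the release tag once so the load and run steps cannot drift apart.
TAG="v0.11.0rc1"
IMAGE="vllm-omni:${TAG}"

# Build the run command against the same tag that was loaded.
RUN_CMD="docker run -itd --name vllm-omni --shm-size=64g --gpus all --net=host ${IMAGE}"
echo "${RUN_CMD}"
```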

# --8<-- [end:pre-built-images]