This repository showcases an enhanced version of Kitware's VolView, extended with cutting-edge healthcare foundation models from NVIDIA Clara.
This special build integrates three powerful AI capabilities into the VolView interface, each running on a scalable, independent backend server. For general VolView features, please see the official repository.
This version of VolView adds three new tabs: Segment, Reason, and Generate.
The Segment tab uses the NVIDIA Clara NV-Curate-CTMR-v2 model to perform automatic 3D segmentation of anatomical structures.
How to Use:

1. Load a compatible 3D CT or MR dataset.
2. Navigate to the Segment tab.
3. Click Run Segmentation.

Output: A new segmentation layer is automatically added to the scene.
The Reason tab integrates a multimodal chatbot powered by the NVIDIA Clara NV-Reason-CXR-3B model, allowing you to have a text-based conversation about the loaded medical image.
How to Use:

1. Load a medical image.
2. Navigate to the Reason tab and select Clara NV-Reason-CXR-3B.
3. Type your question (e.g., "Are there any visible fractures?") into the chat box.

Output: The model's text response appears directly in the chat window.
The Generate tab uses the NVIDIA Clara NV-Generate-CTMR-v2 model to create synthetic 3D CT scans based on your specifications.
How to Use:

1. Navigate to the Generate tab.
2. Configure the desired parameters (body region, anatomy, resolution, etc.).
3. Click Generate CT Scan.

Output: A new, realistic 3D volume is generated and loaded into VolView.
Before getting started, ensure your system meets the following requirements:
- Operating System: Linux (Ubuntu 18.04+ recommended), macOS, or Windows with WSL2
- GPU: NVIDIA GPU with at least 24GB VRAM (RTX 3090, RTX 4090, or equivalent)
- CUDA: CUDA 11.8+ or CUDA 12.x
- System RAM: At least 32GB recommended
- Storage: At least 50GB free space for models and data
- Node.js: Version 18.0+
- npm: Version 8.0+ (comes with Node.js)
- Python: Version 3.11-3.13
- Poetry: Latest version for Python dependency management
- PyTorch: 2.0+ with CUDA support
- MONAI: For medical AI model execution
Install Node.js and npm:

```bash
# Ubuntu/Debian
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

# macOS with Homebrew
brew install node

# Windows: download from https://nodejs.org
```

Install Python 3.11+:

```bash
# Ubuntu/Debian
sudo apt update
sudo apt install python3.11 python3.11-venv

# macOS with Homebrew
brew install python@3.11
```

Install Poetry:

```bash
curl -sSL https://install.python-poetry.org | python3 -
```

Verify CUDA Installation:

```bash
nvidia-smi
nvcc --version
```
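Once PyTorch and MONAI are installed (they are pulled in by the server's `poetry install` step below), a quick Python check along the following lines can confirm that the GPU is visible to the ML stack. This is only a sketch; the script name is illustrative and not part of this repository:

```python
# gpu_check.py -- optional sanity check that the Python ML stack can see the GPU
import torch
import monai

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # The Segment/Generate models expect roughly 24 GB of VRAM (see the prerequisites above).
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
print("MONAI:", monai.__version__)
```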
Follow these steps to set up the front-end and back-end services.
The VolView interface is a Node.js web app. From the project root, run:
```bash
npm install
npm run build
npm run serve
```

You can now access the VolView interface at http://localhost:5173.

Tip: Use `npm run serve --host` to make the app accessible from other devices on your local network.
Each NVIDIA model runs in its own Python server. From the server directory,
install dependencies and launch each service in a separate terminal.
```bash
cd server
poetry install
```

Curate (Segmentation):

```bash
poetry run python -m volview_server -P 4014 -H 0.0.0.0 2025_nvidiagtcdc/nv_segment.py
```

Reason (Chat):

```bash
poetry run python -m volview_server -P 4015 -H 0.0.0.0 2025_nvidiagtcdc/chat.py
```

Generate (Synthetic Data):

```bash
poetry run python -m volview_server -P 4016 -H 0.0.0.0 2025_nvidiagtcdc/nv_generate.py
```
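For orientation, each of these scripts is a VolView server-side module. The sketch below shows the rough shape such a module takes, based on the upstream VolView Python server API; the file name and method are hypothetical, and the bundled scripts under 2025_nvidiagtcdc/ are substantially more involved (model loading, image transfer, etc.):

```python
# hypothetical_service.py -- illustrative only; not one of the bundled model servers.
from volview_server import VolViewApi

volview = VolViewApi()

@volview.expose  # registers the function as an RPC method callable from the VolView UI
def echo(message: str) -> str:
    # A real service (e.g. nv_segment.py) would invoke a Clara model here and
    # return image data or text back to the front-end.
    return f"server received: {message}"
```

Such a module would be launched the same way as the bundled services, e.g. `poetry run python -m volview_server -P 4017 -H 0.0.0.0 hypothetical_service.py`.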
Finally, connect the VolView front-end to your running model servers.
1. In the VolView UI, click the Settings (gear) icon to open the server configuration panel.
2. Update the URL for each service to point to the IP address and port where your Python servers are running. The panel shows a "Connected" status for each successful connection.
To avoid re-entering the addresses of the three backend servers in the VolView Settings panel, you can pre-configure them in the .env file.

Copy the local configuration file from the template:

```bash
cp .env.example .env
```

Then edit the .env file and update the following variables with the IP addresses (and ports) of your running back-end servers:

```
VITE_REMOTE_SERVER_1_URL=
VITE_REMOTE_SERVER_2_URL=
VITE_REMOTE_SERVER_3_URL=
```

Example:

```
VITE_REMOTE_SERVER_1_URL=http://localhost:4014
VITE_REMOTE_SERVER_2_URL=http://localhost:4015
VITE_REMOTE_SERVER_3_URL=http://10.50.56.30:9003
```
That's it! You are now ready to use the integrated NVIDIA models within VolView.
This software is provided solely for research and educational purposes. It is a research platform and is not intended for clinical use.
- This software has not been reviewed or approved by the U.S. Food and Drug Administration (FDA) or any other regulatory authority.
- It must not be used for diagnosis, treatment, or any clinical decision-making.
- No warranties or guarantees of performance, safety, or fitness for medical purposes are provided.
By using this software, you acknowledge that it is for non-clinical, investigational research only.
This repository (volview-gtc2025-demo) is released under the Apache License 2.0.
You are free to use, modify, and distribute this code, provided you comply with the terms of the license.
This demo integrates external AI models that are not part of this repository.
Each model has its own license and usage terms. You are responsible for reviewing and complying with these terms when downloading or using the models.
- This repository does not redistribute model weights. Instead, it provides integration points to download and use them directly from their official sources.
- Model license terms vary. Some models are Apache 2.0, while others (e.g., NVIDIA-hosted) may have research-only or commercial-use restrictions. Always check the model card or repository for the current license.
- Attribution: If you use this demo in your work, please attribute the external models according to their license terms (e.g., Apache NOTICE requirements, CC-BY citation, NVIDIA terms of use).




