## ❓ Question

## What you have already tried

## Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, libtorch, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
As the title says, I have a machine with multiple GPUs, and I would like to know whether there is any way to evenly distribute a model across them.
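To make the question concrete, below is a minimal sketch (plain Python, no Torch-TensorRT calls) of the kind of even, pipeline-style placement I am asking about: consecutive layers assigned to GPUs in contiguous, balanced chunks. `split_layers` is a hypothetical helper for illustration, not an existing API, and the layer/GPU counts are made up.

```python
# Hypothetical sketch: compute a balanced, contiguous assignment of
# layers to GPU indices (pipeline-style model parallelism).
# Not a real Torch-TensorRT API; numbers are for illustration only.

def split_layers(num_layers: int, num_gpus: int) -> list[int]:
    """Return the device index for each layer, keeping chunks contiguous
    and layer counts per GPU as even as possible."""
    base, extra = divmod(num_layers, num_gpus)
    placement = []
    for gpu in range(num_gpus):
        # The first `extra` GPUs take one additional layer each.
        count = base + (1 if gpu < extra else 0)
        placement.extend([gpu] * count)
    return placement

# 10 layers over 4 GPUs -> [0, 0, 0, 1, 1, 1, 2, 2, 3, 3]
print(split_layers(10, 4))
```

Is there a supported way to get this kind of split (or something equivalent) when compiling or running a model with Torch-TensorRT?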