Commit 12b3d18

[Fix] fix torch allocator resource releasing (#1708)

* delete root logger and add condition before calling caching_allocator_delete
* fix lint error
* use torch._C._cuda_cudaCachingAllocator_raw_delete
1 parent b85f341 commit 12b3d18

File tree: 1 file changed (+2 −3 lines)


mmdeploy/backend/tensorrt/torch_allocator.py

Lines changed: 2 additions & 3 deletions
@@ -13,6 +13,7 @@ def __init__(self, device_id: int = 0) -> None:

         self.device_id = device_id
         self.mems = set()
+        self.caching_delete = torch._C._cuda_cudaCachingAllocator_raw_delete

     def __del__(self):
         """destructor."""
@@ -53,11 +54,9 @@ def deallocate(self: trt.IGpuAllocator, memory: int) -> bool:

         Returns:
             bool: deallocate success.
         """
-        logger = get_root_logger()
-        logger.debug(f'deallocate {memory} with TorchAllocator.')
         if memory not in self.mems:
             return False

-        torch.cuda.caching_allocator_delete(memory)
+        self.caching_delete(memory)
         self.mems.discard(memory)
         return True
