I ran the TUM sequence with the command `./build/mono_tum Vocabulary/ORBvoc.txt configs/Monocular/TUM/freiburg3_office.yaml ./rgbd_dataset_freiburg3_long_office_household/` in Docker on a GTX 1660 GPU, and I get an error.

Copilot gave the following advice:
"Reduce the `n_neurons` parameter in the FullyFusedMLP model configuration. This will decrease the memory requirement of the model.
Alternatively, use CutlassMLP, which might offer better compatibility but could be slower."
I wonder whether either of these two suggestions will work. I also noticed that my GPU memory usage is about 6 GB, while the usage reported in the paper and by other people is about 9 GB. Can I work with my GPU, or do I have to change it?
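For reference, if the model is built with a tiny-cuda-nn-style network config, Copilot's two suggestions would look roughly like the sketch below. This is a hypothetical fragment: the exact file, key names, and where this block lives depend on the repo's own config layout, so check your YAML/JSON before copying. Note that `FullyFusedMLP` only supports widths of 16, 32, 64, or 128 neurons, while `CutlassMLP` accepts arbitrary widths at the cost of speed.

```json
{
  "otype": "CutlassMLP",        // swapped in for "FullyFusedMLP" (slower, but less restrictive)
  "activation": "ReLU",
  "output_activation": "None",
  "n_neurons": 64,              // reduced from e.g. 128 to cut per-layer memory
  "n_hidden_layers": 2
}
```

Either change alone may be enough: keeping `FullyFusedMLP` but dropping `n_neurons` from 128 to 64 roughly halves the layer width, so it is worth trying that first before switching to `CutlassMLP`.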