Why does GPU memory fluctuate when using negative sampling in PyG DataLoader (with a fixed batch size and only the training process running)? #10446
Unanswered
wilmerwang asked this question in Q&A

Hi, I am using HAN for heterogeneous network link prediction. During training, I observed that the GPU memory usage fluctuates. I have the following questions:

1. Does using a negative sampler inside the DataLoader inherently cause GPU memory usage to fluctuate, even if batch_size is fixed?
2. Is there any way to force PyG / PyTorch to allocate the maximum GPU memory upfront, so the usage stays stable (close to the maximum) instead of fluctuating?

Thanks.
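Since the post does not include a script, the following is a minimal sketch of how one might log PyTorch's memory counters at every step to see what is actually moving. The names `loader`, `model`, `optimizer`, and `compute_loss` are placeholders for the (unshown) HAN link-prediction setup, not the asker's actual code.

```python
import torch

def log_gpu_memory(step: int) -> None:
    # memory_allocated: bytes currently held by live tensors (varies per batch);
    # memory_reserved: bytes the caching allocator has claimed from the driver;
    # max_memory_allocated: peak allocation seen so far.
    alloc = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    peak = torch.cuda.max_memory_allocated() / 1024**2
    print(f"step {step:4d} | allocated {alloc:8.1f} MiB | "
          f"reserved {reserved:8.1f} MiB | peak {peak:8.1f} MiB")

# `loader`, `model`, `optimizer`, and `compute_loss` are placeholders for the
# HAN training setup described in the question; they are not shown in the post.
for step, batch in enumerate(loader):
    batch = batch.to("cuda")
    optimizer.zero_grad()
    pred = model(batch.x_dict, batch.edge_index_dict)
    loss = compute_loss(pred, batch)
    loss.backward()
    optimizer.step()
    log_gpu_memory(step)
```

Comparing the allocated and reserved numbers per step is one way to tell whether the fluctuation reflects per-batch tensor sizes or the allocator returning memory.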
Replies: 1 comment 1 reply
@wilmerwang I'm not sure what your data look like, but in graph learning it's common for input sizes to vary because each (sub)graph generally has a different number of nodes and edges. I'd suggest providing a script to reproduce the behaviour you're seeing so that someone in the PyG community can help you.
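One way to check this point is to log the size of each sampled batch next to the allocated memory and see whether the two track each other. This is only a sketch under the assumption of a PyG-style loader; `loader` below stands in for the asker's DataLoader with negative sampling, which is not shown in the post.

```python
import torch

# `loader` is a placeholder for the DataLoader with negative sampling from the
# question; the actual setup is not shown in the original post.
for step, batch in enumerate(loader):
    batch = batch.to("cuda")
    # Even with a fixed batch_size, the sampled (sub)graph can contain a
    # different number of nodes and edges at every step.
    print(f"step {step:4d} | nodes {batch.num_nodes:7d} | "
          f"edges {batch.num_edges:8d} | "
          f"allocated {torch.cuda.memory_allocated() / 1024**2:8.1f} MiB")
    if step == 50:  # a few dozen steps are enough to see the variation
        break
```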