Replacing or updating distributed matrix on GPUs #1804
mawxcarroll started this conversation in General
Replies: 1 comment
Yes, that approach will work.
Hello,
I have a calculation that works very well using Ginkgo's distributed matrix to execute a Runge-Kutta algorithm across 4 GPUs. It's really just a series of SpMV operations, and it's incredibly fast; it has greatly extended the region of parameter space I can explore in my research, so thanks for that!
My question is about updating or replacing the matrix on the GPU. The first time I fill the matrix, I set up the host-side data structure and then add the nonzero values with Hdata.emplace_back(). Once the matrix is filled, I move it over to the GPU.
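For context, a minimal sketch of this assemble-and-upload pattern, loosely following Ginkgo's distributed-solver example, might look like the following. The executor setup, the stand-in tridiagonal stencil, and the names `Hdata`, `H`, and `partition` are assumptions made for the sketch, not the code from the post.

```cpp
#include <mpi.h>
#include <ginkgo/ginkgo.hpp>

int main(int argc, char* argv[])
{
    using ValueType = double;
    using LocalIndexType = gko::int32;
    using GlobalIndexType = gko::int64;
    using dist_mtx = gko::experimental::distributed::Matrix<
        ValueType, LocalIndexType, GlobalIndexType>;
    using part_type = gko::experimental::distributed::Partition<
        LocalIndexType, GlobalIndexType>;

    // Initialize MPI; one rank per GPU is assumed here.
    const gko::experimental::mpi::environment env(argc, argv);
    gko::experimental::mpi::communicator comm(MPI_COMM_WORLD);
    auto exec = gko::CudaExecutor::create(comm.rank(),
                                          gko::ReferenceExecutor::create());

    // Host-side assembly: collect the nonzeros in a gko::matrix_data object.
    // A 1D Laplacian stencil stands in for the actual operator.
    const GlobalIndexType n = 1000;
    gko::matrix_data<ValueType, GlobalIndexType> Hdata{gko::dim<2>{
        static_cast<gko::size_type>(n), static_cast<gko::size_type>(n)}};
    for (GlobalIndexType i = 0; i < n; ++i) {
        if (i > 0) { Hdata.nonzeros.emplace_back(i, i - 1, -1.0); }
        Hdata.nonzeros.emplace_back(i, i, 2.0);
        if (i < n - 1) { Hdata.nonzeros.emplace_back(i, i + 1, -1.0); }
    }

    // Partition the rows uniformly across the ranks and create the
    // distributed matrix on the GPU executor.
    auto partition = gko::share(part_type::build_from_global_size_uniform(
        exec->get_master(), comm.size(), n));
    auto H = gko::share(dist_mtx::create(exec, comm));

    // read_distributed copies each rank's locally owned rows to its GPU.
    H->read_distributed(Hdata, partition);
}
```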
As I said, this is working beautifully. Now, to replace the matrix on the GPU with the new values, will the same approach work, i.e. refilling the host data and moving it over to the GPU again? Or do I need to somehow free the memory on the GPUs first?
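Continuing the hypothetical sketch above, the update being asked about would amount to refilling the host-side data and calling read_distributed on the same matrix object again; reusing `Hdata` and keeping the sparsity structure unchanged are assumptions of the sketch.

```cpp
// Later, when the operator changes: refill the host-side data with the
// new values (same global size assumed here) ...
Hdata.nonzeros.clear();
for (GlobalIndexType i = 0; i < n; ++i) {
    if (i > 0) { Hdata.nonzeros.emplace_back(i, i - 1, -2.0); }
    Hdata.nonzeros.emplace_back(i, i, 4.0);
    if (i < n - 1) { Hdata.nonzeros.emplace_back(i, i + 1, -2.0); }
}

// ... and read it into the same distributed matrix object. The storage
// previously allocated on the GPUs is owned by Ginkgo and released through
// its executors, so no explicit free appears in this sketch.
H->read_distributed(Hdata, partition);
```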
Thanks for the help!
Cheers,
tom