Performance benchmarks comparison with scipy.sparse #331
Tl;dr: It's bad. Sometimes 10 times worse, even for element-wise operations. We're looking at approaches that essentially guarantee optimal performance and GPU support in all cases (#326). However, we don't have an ETA for this.
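The element-wise gap mentioned above can be measured directly. Below is a minimal, hedged sketch of such a micro-benchmark; the matrix size, density, and repetition count are arbitrary choices, and the pydata/sparse timing is attempted only if the `sparse` package happens to be installed.

```python
# Micro-benchmark sketch: one element-wise operation in scipy.sparse,
# optionally compared against pydata/sparse. Sizes/densities are
# illustrative, not from the thread.
import timeit
import numpy as np
import scipy.sparse as sp

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
B = sp.random(1000, 1000, density=0.01, format="csr", random_state=1)

# Element-wise multiply in scipy.sparse.
t_scipy = timeit.timeit(lambda: A.multiply(B), number=20)
print(f"scipy.sparse element-wise multiply: {t_scipy:.4f}s / 20 runs")

try:
    import sparse  # pydata/sparse, may not be installed
    Ac = sparse.COO.from_scipy_sparse(A)
    Bc = sparse.COO.from_scipy_sparse(B)
    t_pydata = timeit.timeit(lambda: Ac * Bc, number=20)
    print(f"pydata/sparse element-wise multiply: {t_pydata:.4f}s / 20 runs")
except ImportError:
    print("pydata/sparse not installed; skipping its timing")
```

Absolute numbers depend heavily on hardware and versions, so a ratio from one machine should be read as indicative only.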
Thanks for the explanations @hameerabbasi !
It seems like …
Not that I'm suggesting to optimize that right now; I just want to say there's no fundamental reason why it's slow, just that no one has gotten around to it since this package is still new.
(Code reference: line 1150 in 98a39cd.)
Perhaps the memoization isn't working.
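For context on the memoization remark: the idea is to cache an expensive derived value on the instance so repeated accesses are cheap, and a cache that is never hit would explain the slowdown. Here is a minimal sketch using a hypothetical toy class (`COOSketch` and its `sorted_coords` property are illustrative names, not the library's actual API):

```python
class COOSketch:
    """Toy stand-in for a sparse array class that memoizes an
    expensive derived property. Hypothetical, for illustration only."""

    def __init__(self, coords):
        self.coords = tuple(coords)
        self._cache = {}  # per-instance memoization store

    @property
    def sorted_coords(self):
        # Memoize: compute once on first access, reuse afterwards.
        # If this key were rebuilt wrongly each call (e.g. keyed on an
        # unhashable or changing value), every access would recompute,
        # which is the kind of "memoization isn't working" failure mode.
        if "sorted" not in self._cache:
            self._cache["sorted"] = tuple(sorted(self.coords))
        return self._cache["sorted"]


a = COOSketch([3, 1, 2])
print(a.sorted_coords)  # computed and cached on first access
print(a.sorted_coords)  # served from the cache
```

One way to verify a cache like this in practice is to count how often the expensive branch runs across repeated accesses; it should be exactly once.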
#350 makes this a bit better.
I see there is an asv setup in the repo, but I couldn't find any results.
How does pydata/sparse currently compare to scipy.sparse (e.g. for the dot product)? Or are most linalg operations expected to use scipy.sparse.linalg in the long run, so that performance would be equivalent?
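Absent published asv results, the dot-product question can be probed locally. A minimal sketch, assuming only numpy and scipy (matrix size and density are arbitrary choices):

```python
# Timing sketch for a sparse matrix-vector product in scipy.sparse,
# as a baseline for the dot-product comparison asked about above.
import timeit
import numpy as np
import scipy.sparse as sp

n, density = 1000, 0.01
A = sp.random(n, n, density=density, format="csr", random_state=0)
x = np.random.default_rng(0).standard_normal(n)

t = timeit.timeit(lambda: A @ x, number=100)
print(f"scipy CSR mat-vec ({n}x{n}, density {density}): {t:.4f}s / 100 runs")
```

Swapping `A` for a pydata/sparse array in the same harness (e.g. via `sparse.COO.from_scipy_sparse(A)`) would give the side-by-side comparison the question asks for; the asv suite in the repo is the more systematic way to track this over time.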