Simplify show implementation #46

Merged: 5 commits merged from the show branch into main on Mar 3, 2025
Conversation

@lkdvos (Contributor) commented Feb 13, 2025

This adds a simplified implementation of show, as in ITensor/SparseArraysBase.jl#31.

In order to make this work, I also added an isstored implementation for blocksparsearrays.
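
For illustration, a minimal sketch of what such an isstored overload could look like (not the PR's actual code; findblockindex, block, blockindex, and blocks are BlockArrays.jl helpers, isstored is assumed from SparseArraysBase.jl, and the Int(::Block{1}) conversion is an assumption):

```julia
using BlockArrays: findblockindex, block, blockindex, blocks
using SparseArraysBase: isstored

# Sketch: an entry is stored iff its containing block is stored and the
# corresponding entry within that block is stored.
function isstored_sketch(a::AbstractMatrix, i::Int, j::Int)
    # split each global index into (block number, index within the block)
    bi = findblockindex(axes(a, 1), i)
    bj = findblockindex(axes(a, 2), j)
    I, J = Int(block(bi)), Int(block(bj))  # block coordinates (assumed conversion)
    B = blocks(a)                          # the underlying array of blocks
    isstored(B, I, J) || return false      # block not stored => entry not stored
    return isstored(B[I, J], blockindex(bi), blockindex(bj))
end
```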

Note that the alignment again changes slightly, and apparently Base uses a centered dot, which I hadn't realized (spotting why the tests were failing was a fun exercise 😀).

Side note: This also unintentionally removes a @allowscalar before the call to print_array, since Base doesn't automatically put that there. Do you know if there is a good way to hook into that?

codecov bot commented Feb 13, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 75.71%. Comparing base (e4c91aa) to head (62b4971).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #46      +/-   ##
==========================================
- Coverage   75.89%   75.71%   -0.18%     
==========================================
  Files          29       29              
  Lines        1095     1083      -12     
==========================================
- Hits          831      820      -11     
+ Misses        264      263       -1     
Flag   Coverage Δ
docs   25.34% <0.00%> (+0.21%) ⬆️

Flags with carried forward coverage won't be shown.


@mtfishman (Member) commented Feb 13, 2025

> In order to make this work, I also added an isstored implementation for blocksparsearrays.

That will be good to have anyway!

> Note that the alignment again changes slightly, and apparently Base uses a centered dot, which I hadn't realized (spotting why the tests were failing was a fun exercise 😀).

Oh interesting, I don't think I appreciated that there were two different kinds of dots... The fun of Unicode.
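
For reference, the two look-alike characters involved (illustrative; which one a given printer emits is an assumption worth verifying):

```julia
# Two visually similar "dot" characters that are easy to confuse in tests:
'·'  # U+00B7 MIDDLE DOT
'⋅'  # U+22C5 DOT OPERATOR (LaTeX \cdot); presumably the "centered dot" in question
```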

> Side note: This also unintentionally removes a @allowscalar before the call to print_array, since Base doesn't automatically put that there. Do you know if there is a good way to hook into that?

Good question. I see GPUArrays.jl handles it this way: https://github.com/JuliaGPU/GPUArrays.jl/blob/v11.2.2/src/host/abstractarray.jl#L139-L149, which seems like a good approach. We could define something similar for AnyAbstractBlockSparseArray, so it would work as long as adapt is defined properly for the AbstractBlockSparseArray. EDIT: Though we probably want something more nuanced than adapting to Array, so that we preserve the sparsity of the blocks, etc. We could define a "CPU adapter".

@mtfishman (Member) commented Feb 13, 2025

Following up on the last comment, it seems like we could either do something like:

Base.print_array(io::IO, X::AnyAbstractBlockSparseArray) = Base.print_array(io, adapt(ToCPU(), X))

or:

Base.print_array(io::IO, X::AnyAbstractBlockSparseArray) = @allowscalar Base.print_array(io, X)

The issue is that either one can lead to infinite recursion, but that can be fixed with invoke:

Base.print_array(io::IO, X::AnyAbstractBlockSparseArray) = invoke(Base.print_array, Tuple{IO,AbstractVecOrMat}, io, adapt(ToCPU(), X))

or:

Base.print_array(io::IO, X::AnyAbstractBlockSparseArray) = @allowscalar invoke(Base.print_array, Tuple{IO,AbstractVecOrMat}, io, X)
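
Note that ToCPU() in the snippets above is hypothetical at this point. A minimal sketch of such an adapter via Adapt.jl's adapt_storage hook, assuming adapt is defined to recurse through the block sparse wrapper (as discussed above), could be:

```julia
using Adapt: Adapt

# Hypothetical "CPU adapter": Adapt.jl recurses through wrapper structure
# (given suitable adapt_structure definitions), so only the leaf storage is
# converted to Array and the block sparsity is preserved.
struct ToCPU end
Adapt.adapt_storage(::ToCPU, x::AbstractArray) = convert(Array, x)
```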

@lkdvos (Contributor, Author) commented Feb 13, 2025

I guess what bothers me about this is that invoke would now discard any specialized implementation that was previously there and simply call the AbstractArray one instead.

In principle, converting to CPU is definitely the better option, since that pulls in the entire matrix in one go instead of one element at a time. The annoying thing is mostly that AnyAbstractBlockSparseArray, or even AbstractBlockSparseArray, does not know about the blocktype, and hence we cannot specialize on GPU arrays only.

Would you be okay with only defining this for BlockSparseArray and wrappers thereof? For that type, I can specialize on the blocks being of GPUArray type and manually fix that.
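
A hedged sketch of that kind of specialization (the blocktype parameter position in BlockSparseArray is a guess here, and AbstractGPUArray from GPUArraysCore.jl is used to match GPU-backed blocks):

```julia
using Adapt: adapt
using GPUArraysCore: AbstractGPUArray

# Hypothetical: only when the blocks are GPU arrays, move everything to host
# first (assuming adapt recurses through the block sparse wrapper), then use
# Base's specialized vector/matrix printing.
function Base.print_array(io::IO, X::BlockSparseArray{<:Any,2,<:AbstractGPUArray})
    return invoke(Base.print_array, Tuple{IO,AbstractVecOrMat}, io, adapt(Array, X))
end
```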

@mtfishman (Member) commented
> I guess what bothers me about this is that invoke would now discard any specialized implementation that was previously there and simply call the AbstractArray one instead.

I'm a bit confused about that point: this could just act as a fallback definition, and if there is a more specialized overload of Base.print_array for some AbstractBlockSparseArray, it wouldn't reach the invoke call in the first place.
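
In miniature, the dispatch behavior being described (toy methods, purely illustrative):

```julia
# A more specific method always wins, so an invoke-based definition acts as a
# pure fallback and is never reached when a specialized overload exists.
print_sketch(x::AbstractArray) = "fallback (this is where invoke would go)"
print_sketch(x::Matrix) = "specialized overload"

print_sketch(rand(2, 2))     # "specialized overload"
print_sketch(rand(2, 2, 2))  # "fallback (this is where invoke would go)"
```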

@lkdvos (Contributor, Author) commented Feb 13, 2025

Oh right, I guess that's true. invoke always trips me up somehow.
What I was thinking of is that print_array(io::IO, X::AbstractVector), print_array(io::IO, X::AbstractMatrix), and print_array(::IO, ::AbstractArray) all exist, so simply invoking with AbstractArray does not work, since you then don't get the specialized matrix printing.

@lkdvos mentioned this pull request Feb 13, 2025
@mtfishman (Member) commented
@lkdvos I forgot about this PR; is it ready to merge? Looks like this will allow us to upgrade to SparseArraysBase.jl v0.3 (#64).

@mtfishman merged commit 7c3d86a into main Mar 3, 2025 (12 checks passed)
@mtfishman deleted the show branch March 3, 2025 13:25