Some more small changes for GPU support #48
Conversation
Codecov Report ❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main      #48   +/- ##
=======================================
  Coverage   56.27%   56.27%
=======================================
  Files          19       20      +1
  Lines        1427     1434      +7
=======================================
+ Hits          803      807      +4
- Misses        624      627      +3
```

☔ View full report in Codecov by Sentry.
The failure here would be fixed in QuantumKitHub/TensorKit.jl#375, annoyingly circular...
```julia
        A::AbstractTensorMap, pA::Index2Tuple, conjA::Bool,
        B::AbstractBlockTensorMap, pB::Index2Tuple, conjB::Bool,
        pAB::Index2Tuple{N₁, N₂},
    ) where {N₁, N₂} = TO.tensorcontract_type(TC, B, pB, conjB, A, pA, conjA, pAB)
```
I think this isn't entirely correct and only works because our implementation doesn't depend on too many details: the resulting pB, pA, pAB is not necessarily a valid contraction specification, and I would like to keep specifications valid.
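To make the point concrete, here is a small illustration based on my reading of the TensorOperations.jl index conventions; the tuples and the remapped pAB′ below are written out only for this example and are not part of the PR:

```julia
# Index conventions assumed here (my reading of TensorOperations.jl):
#   pA  splits the indices of the first operand into (open, contracted),
#   pB  splits the indices of the second operand into (contracted, open),
#   pAB permutes (open indices of the first operand..., open indices of the second...)
#   into the output.
# For C[i, j] = A[i, k] * B[k, j]:
pA  = ((1,), (2,))
pB  = ((1,), (2,))
pAB = ((1, 2), ())

# Swapping only the operands, tensorcontract_type(TC, B, pB, conjB, A, pA, conjA, pAB),
# reuses these tuples in the wrong roles: pB is now read as (open, contracted) for B,
# pA as (contracted, open) for A, and pAB still numbers A's open indices first.  As a
# contraction specification this need not even be consistent (the "contracted"
# dimensions i and j generally differ); it only happens to work because
# tensorcontract_type merely uses it to pick the output type.  A role-preserving
# forward would look more like
#   tensorcontract_type(TC, B, reverse(pB), conjB, A, reverse(pA), conjA, pAB′)
# with pAB′ remapped so that B's open indices are numbered first, here ((2, 1), ()).
```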
Cool, I'll try to rework this then :)
```julia
Base.getindex(iter::TK.BlockIterator{<:AbstractBlockTensorMap}, c::Sector) = block(iter.t, c)

# old one-line definition, removed in this diff:
TensorKit.storagetype(::Type{TT}) where {TT <: AbstractBlockTensorMap} = storagetype(eltype(TT))
# replaced by the start of a multi-line definition (body not shown in this excerpt):
function TensorKit.storagetype(::Type{TT}) where {TT <: AbstractBlockTensorMap}
```
I'm slightly confused about why we need this; isn't this already included in the base definition? https://github.com/QuantumKitHub/TensorKit.jl/blob/3dfc99f6ea79ba0fc90b959c76314d869a23a17b/src/tensors/abstracttensor.jl#L53-L59
No, I don't think so? Or at least it was causing errors. The base definition covers defining something like `const MyTensorMap = Union{TensorMap, SomeCrap}`, but a BlockTensorMap has such a Union as its eltype, which is not captured by the method in TensorKit. So what happens is that the logic in base TensorKit just falls through to the `similarstoragetype(scalartype(T))` line, I think.
Sorry, I meant that `storagetype(::Type{TT}) where {TT <: AbstractBlockTensorMap} = storagetype(eltype(TT))` should already work, which is what is confusing me: if eltype(TT) isa Union, that case should then be handled by the base definition. I might also be missing something, though.
Let me try killing this here and seeing what happens
OK, this generates new problems: I think it is hitting the `similarstoragetype(scalartype(T))` line, though why it does so I'm not sure...
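If a dedicated method here turns out to be necessary, a minimal sketch of one way to handle the Union eltype is below. `_member_storagetype` is a hypothetical helper (not TensorKit or BlockTensorKit API), and `promote_type` only stands in for whatever rule should actually combine the members' storage types:

```julia
# Hypothetical helper: recurse through the Union in eltype(TT) and combine the
# storage types of its members.  promote_type is just a stand-in here for the
# combination rule one would really want.
_member_storagetype(::Type{T}) where {T} = TensorKit.storagetype(T)
_member_storagetype(U::Union) = promote_type(_member_storagetype(U.a), _member_storagetype(U.b))

TensorKit.storagetype(::Type{TT}) where {TT <: AbstractBlockTensorMap} =
    _member_storagetype(eltype(TT))
```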
```julia
function BlockTensorKit._full(A::BM) where {T <: Number, TA <: AnyGPUMatrix{T}, BM <: BlockMatrix{T, Matrix{TA}}}
    arr = similar(first(A.blocks), size(A))
    # TODO -- should we use Threads here to parallelize these
    # transfers in streams if possible?
    for block_index in Iterators.product(blockaxes(A)...)
        indices = getindex.(axes(A), block_index)
        arr[indices...] = @view A[block_index...]
    end
    return arr
end
```
How would you feel about just putting this code in the main package and calling it for the CPU arrays as well? I'm not entirely sure about the Threads comment, which I would leave out for now, but otherwise it seems like we might as well just "vendor" this.
We currently have the CPU one as `_full(A) = Array(A)`, which I think calls back into BlockArrays? I don't have a strong opinion about this, so I can move it back over into src if that's your preference.
I would keep the Threads comment, tbh, because doing a series of small transfers like this is absolute kryptonite to GPU performance.
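For what it's worth, a rough sketch of what that TODO could look like is below. This is not part of the PR: it assumes CUDA.jl's task-local streams let the per-block copies overlap, and whether that actually pays off would need benchmarking.

```julia
# Hypothetical threaded variant of _full: one task per block, so each copy is issued
# on its own stream and the transfers can overlap.  Blocks write to disjoint regions
# of arr, so the writes do not race; a final device synchronization may still be
# needed depending on how the result is used next.
function _full_threaded(A)
    arr = similar(first(A.blocks), size(A))
    @sync for block_index in Iterators.product(blockaxes(A)...)
        Threads.@spawn begin
            indices = getindex.(axes(A), block_index)
            arr[indices...] = @view A[block_index...]
        end
    end
    return arr
end
```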
Indeed, the CPU-side one falls back to a very similar function in BlockArrays.jl.
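For reference, a hypothetical usage sketch of the blocked-to-dense assembly above; the block sizes are arbitrary, CUDA.jl and the GPU extension are assumed to be loaded, and `_full` is an internal helper called here only for illustration:

```julia
using CUDA, BlockArrays, BlockTensorKit

# A 6×4 BlockMatrix whose four blocks live on the GPU (row heights 2 and 4,
# column widths 3 and 1); reshape is column-major, so the order is b11, b21, b12, b22.
blocks = reshape([CUDA.rand(Float32, 2, 3), CUDA.rand(Float32, 4, 3),
                  CUDA.rand(Float32, 2, 1), CUDA.rand(Float32, 4, 1)], 2, 2)
A = mortar(blocks)

dense = BlockTensorKit._full(A)   # one dense CuMatrix assembled block by block
```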
Force-pushed from ddb285b to 4eacaa1
Sorry for hijacking it into much better shape? I’ve heard worse apologies
…On Fri, Apr 24, 2026 at 8:09 PM Lukas Devos wrote:

Lukas Devos approved this pull request.

@kshyatt sorry for hijacking your PR, going to try and include this and release already. We can rediscuss the tensorcontract_type implementation and the storagetype one separately maybe?
No description provided.