Conversation
force-pushed from 3bed38d to 8665c4a
force-pushed from eabfce9 to 0c903ac
Let's make this a draft too to cut down on CI thrash
force-pushed from f5857b3 to 32e182d
force-pushed from f5faaf6 to 2359d28
lkdvos left a comment:

Left some comments throughout. There are some things that I am not entirely convinced by, but the rest looks great, thanks for working through all of this!
For the `similarstoragetype(tensor, storagetype)` calls that you added, this seems like something we should probably discuss in a separate PR, and it would be great if we could consolidate this one to get the remainder of the fixes in.
Would you be up for splitting these two things, and then getting this merged?
The same holds for some of the comments I made too: if we can postpone the things that are not obvious but already get the other parts in, that would probably be helpful.
(Note that I am very much aware that none of this is your fault; this PR has lived for too long, so the design has shifted a bit, for which I do apologize!)
Review comment on this line of the diff:

```julia
        twistB = false
    end

TTC = storagetype(C)
```
I guess this effectively means that we are deciding to promote inputs to the `storagetype` of the output. I'm not sure I am fully convinced that we should solve this automatically at all, since I think that is also inconsistent with how regular matrices work (same for adding):

```julia
julia> CUDA.rand(2, 2) * rand(Float32, 2, 2)
ERROR: Scalar indexing is disallowed.
```

I do think that this might be the right approach: requiring explicit conversions in the case of mixed inputs seems like the right call to me. (Even though I can see how that is annoying for MPSKit 😉)
No, it's for

```julia
TTC = storagetype(C)
# Bring A in the correct form for BLAS contraction
if copyA
    Anew = TO.tensoralloc_add(TTC, A, pA, false, Val(true), allocator)
    Anew = TO.tensoradd!(Anew, A, pA, false, One(), Zero(), backend, allocator)
    twistA && twist!(Anew, filter(!isdual ∘ Base.Fix1(space, Anew), domainind(Anew)))
else
    Anew = permute(A, pA)
end
```

Without this change, `Anew` will always have `Vector{scalartype(T)}` storage even if `A` was a `BraidingTensor` or some other object that only gets instantiated here. With the changes in #393 this won't be necessary. It's more than "annoying": without this change or #393, you have to define new `tensoralloc` methods for the mixed case, which is quite painful 😭
It's completely fine!! This has stayed open as I work through adding more tests for MPSKit, so I think we can pare off the simpler stuff we agree on, and then discuss things that are more contentious.
Your PR requires formatting changes to meet the project's style guidelines. Suggested changes:

```diff
diff --git a/test/cuda/tensors.jl b/test/cuda/tensors.jl
index 16d1030..0bdeb60 100644
--- a/test/cuda/tensors.jl
+++ b/test/cuda/tensors.jl
@@ -568,8 +568,8 @@ end
         d4 = dim(domain(t2))
         At = convert(Array, adapt(Vector{T}, t))
         @test reshape(At, (d1, d2, d3, d4)) ≈
-            reshape(convert(Array, adapt(Vector{T}, t1)), (d1, 1, d3, 1)) .*
-            reshape(convert(Array, adapt(Vector{T}, t2)), (1, d2, 1, d4))
+              reshape(convert(Array, adapt(Vector{T}, t1)), (d1, 1, d3, 1)) .*
+              reshape(convert(Array, adapt(Vector{T}, t2)), (1, d2, 1, d4))
     end
 end
 end
```
force-pushed from 8a12178 to ad62dad
force-pushed from aa8ed5c to fecb81d
Needed to get more MPSKit examples working