2 changes: 1 addition & 1 deletion Project.toml
@@ -55,7 +55,7 @@ Random = "1"
ScopedValues = "1.3.0"
Strided = "2"
TensorKitSectors = "0.3.7"
TensorOperations = "5.1"
TensorOperations = "5.5"
TupleTools = "1.5"
VectorInterface = "0.4.8, 0.5"
cuTENSOR = "6"
12 changes: 9 additions & 3 deletions docs/make.jl
@@ -9,7 +9,7 @@ end
using Documenter
using Random
using TensorKit
using TensorKit: FusionTreePair, FusionTreeBlock, Index2Tuple
using TensorKit: FusionTreePair, FusionTreeBlock, Index2Tuple, IndexTuple
using TensorKit.TensorKitSectors
using TensorKit.MatrixAlgebraKit
using DocumenterInterLinks
@@ -26,8 +26,14 @@ pages = [
"man/intro.md", "man/tutorial.md",
"man/spaces.md", "man/symmetries.md",
"man/sectors.md", "man/gradedspaces.md",
"man/fusiontrees.md", "man/tensors.md",
"man/tensormanipulations.md",
"man/fusiontrees.md",
"Tensors" => [
"man/tensors.md",
"man/linearalgebra.md",
"man/indexmanipulations.md",
"man/factorizations.md",
"man/contractions.md",
],
],
"Library" => [
"lib/sectors.md", "lib/fusiontrees.md",
1 change: 1 addition & 0 deletions docs/src/lib/fusiontrees.md
@@ -2,6 +2,7 @@

```@meta
CurrentModule = TensorKit
CollapsedDocStrings = true
```

# Type hierarchy
1 change: 1 addition & 0 deletions docs/src/lib/sectors.md
@@ -2,6 +2,7 @@

```@meta
CurrentModule = TensorKit
CollapsedDocStrings = true
```

## Type hierarchy
1 change: 1 addition & 0 deletions docs/src/lib/spaces.md
@@ -2,6 +2,7 @@

```@meta
CurrentModule = TensorKit
CollapsedDocStrings = true
```

## Type hierarchy
7 changes: 1 addition & 6 deletions docs/src/lib/tensors.md
@@ -2,6 +2,7 @@

```@meta
CurrentModule = TensorKit
CollapsedDocStrings = true
```

## Type hierarchy
@@ -184,12 +185,6 @@ repartition!
twist!
```

```@docs
TensorKit.add_permute!
TensorKit.add_braid!
TensorKit.add_transpose!
```

### Tensor map composition, traces, contractions and tensor products

```@docs
106 changes: 106 additions & 0 deletions docs/src/man/contractions.md
@@ -0,0 +1,106 @@
# [Tensor contractions and tensor networks](@id ss_tensor_contraction)

One of the most important operations on tensor maps is composing them, more generally known as contracting them.
As mentioned in the section on [category theory](@ref s_categories), a typical composition of maps in a ribbon category can graphically be represented as a planar arrangement of the morphisms (i.e. tensor maps, drawn as boxes with lines emanating from top and bottom, corresponding to source and target, i.e. domain and codomain). The lines connecting the sources and targets of the different morphisms should be thought of as ribbons, which can braid over or underneath each other, and which can twist.
Technically, we can embed this diagram in ``ℝ × [0,1]`` and attach all the unconnected line endings corresponding to objects in the source at some position ``(x,0)`` for ``x∈ℝ``, and all line endings corresponding to objects in the target at some position ``(x,1)``.
The resulting morphism is then invariant under what is known as *framed three-dimensional isotopy*, i.e. three-dimensional rearrangements of the morphism that respect the rules of boxes connected by ribbons whose open endings are kept fixed.
Such a two-dimensional diagram cannot easily be encoded in a single line of code.

However, things simplify when the braiding is symmetric (such that over- and under- crossings become equivalent, i.e. just crossings), and when twists, i.e. self-crossings in this case, are trivial.
This amounts to `BraidingStyle(I) == Bosonic()` in the language of TensorKit.jl, and is true for any subcategory of ``\mathbf{Vect}``, i.e. ordinary tensors, possibly with some symmetry constraint.
The case of ``\mathbf{SVect}`` and its subcategories, and more general categories, are discussed below.

In the case of trivial twists, we can deform the diagram such that we first combine every morphism with a number of coevaluations ``η`` so as to represent it as a tensor, i.e. with a trivial domain.
We can then rearrange the morphisms to be all lined up horizontally, where the original morphism compositions are now performed by evaluations ``ϵ``.
This process will generate a number of crossings and twists, where the latter can be omitted because they act trivially.
Similarly, double crossings can also be omitted.
As a consequence, the diagram, or the morphism it represents, is completely specified by the tensors it is composed of, by which indices between the different tensors are connected via the evaluation ``ϵ``, and by which indices make up the source and target of the resulting morphism.
If we also compose the resulting morphism with coevaluations so that it has a trivial domain, we are left with only one type of unconnected lines, henceforth called open indices.
We sketch such a rearrangement in the following picture

```@raw html
<img src="../img/tensor-bosoniccontraction.svg" alt="tensor unitary" class="color-invertible"/>
```

Hence, we can now specify such a tensor diagram, henceforth called a tensor contraction or tensor network, using a one-dimensional syntax that mimics [abstract index notation](https://en.wikipedia.org/wiki/Abstract_index_notation) and specifies which indices are connected by the evaluation map using Einstein's summation convention.
Indeed, for `BraidingStyle(I) == Bosonic()`, such a tensor contraction can take the same format as if all tensors were just multi-dimensional arrays.
For this, we rely on the interface provided by the package [TensorOperations.jl](https://github.com/QuantumKitHub/TensorOperations.jl).

The above picture would be encoded as
```julia
@tensor E[a, b, c, d, e] := A[v, w, d, x] * B[y, z, c, x] * C[v, e, y, b] * D[a, w, z]
```
or
```julia
@tensor E[:] := A[1, 2, -4, 3] * B[4, 5, -3, 3] * C[1, -5, 4, -2] * D[-1, 2, 5]
```
where the latter syntax is known as NCON-style, and labels the unconnected or outgoing indices with negative integers, and the contracted indices with positive integers.

A number of remarks are in order.
TensorOperations.jl accepts both integers and any valid variable name as dummy labels for indices; everything in between `[ ]` is not resolved in the current context but is interpreted as a dummy label.
Here, we label the indices of a `TensorMap`, like `A::TensorMap{T, S, N₁, N₂}`, in a linear fashion, where the first position corresponds to the first space in `codomain(A)`, and so forth, up to position `N₁`.
Index `N₁ + 1` then corresponds to the first space in `domain(A)`.
However, because we have applied the coevaluation ``η``, it actually corresponds to the dual of that space, in accordance with the interface of [`space(A, i)`](@ref) that we introduced [above](@ref ss_tensor_properties), and as indicated by the dotted box around ``A`` in the above picture.
The same holds for the other tensor maps.
Note that our convention also requires that we braid indices that we brought from the domain to the codomain, and so this is only unambiguous for a symmetric braiding, where there is a unique way to permute the indices.
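As a small illustration of this linear index convention (the space `V` is chosen arbitrarily for the sake of the example), consider a map with two codomain indices and one domain index; querying the index beyond `N₁` returns the dual of the corresponding domain space:

```julia
using TensorKit

V = ℂ^2
A = randn(V ⊗ V, V)      # N₁ = 2 codomain indices, N₂ = 1 domain index
space(A, 1) == V         # positions 1 and 2 run over the codomain
space(A, 3) == dual(V)   # position 3 is the domain index, returned as its dual
```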

With the current syntax, we create a new object `E` because we use the definition operator `:=`.
Furthermore, with the current syntax, it will be a `Tensor`, i.e. it will have a trivial domain, and correspond to the dotted box in the picture above, rather than the actual morphism `E`.
We can also directly define `E` with the correct codomain and domain by rather using
```julia
@tensor E[a b c;d e] := A[v, w, d, x] * B[y, z, c, x] * C[v, e, y, b] * D[a, w, z]
```
or
```julia
@tensor E[(a, b, c);(d, e)] := A[v, w, d, x] * B[y, z, c, x] * C[v, e, y, b] * D[a, w, z]
```
where the latter syntax can also be used when the codomain is empty.
When using the assignment operator `=`, the `TensorMap` `E` is assumed to exist and the contents will be written to the currently allocated memory.
Note that for existing tensors, both on the left hand side and on the right hand side, trying to specify the indices in the domain and the codomain separately using the above syntax has no effect, as the bipartition of the indices is already fixed by the existing object.
Hence, if `E` has been created by the previous line of code, all of the following lines are now equivalent
```julia
@tensor E[(a, b, c);(d, e)] = A[v, w, d, x] * B[y, z, c, x] * C[v, e, y, b] * D[a, w, z]
@tensor E[a, b, c, d, e] = A[v w d; x] * B[(y, z, c); (x, )] * C[v e y; b] * D[a, w, z]
@tensor E[a b; c d e] = A[v; w d x] * B[y, z, c, x] * C[v, e, y, b] * D[a w; z]
```
and none of those will or can change the partition of the indices of `E` into its codomain and its domain.

Two final remarks are in order.
Firstly, the order of the tensors appearing on the right hand side is irrelevant, as we can reorder them by using the allowed moves of the Penrose graphical calculus, which yields some crossings and a twist.
As the latter is trivial, it can be omitted, and we just use the same rules to evaluate the newly ordered tensor network.
For the particular case of matrix-matrix multiplication, which also captures more general settings by appropriately combining spaces into a single line, we indeed find

```@raw html
<img src="../img/tensor-contractionreorder.svg" alt="tensor contraction reorder" class="color-invertible"/>
```

Thus, the following two lines of code yield the same result:
```julia
@tensor C[i, j] := B[i, k] * A[k, j]
@tensor C[i, j] := A[k, j] * B[i, k]
```
Reordering of tensors can be used internally by the `@tensor` macro to evaluate the contraction in a more efficient manner.
In particular, the NCON-style of specifying the contraction gives the user control over the order, and there are other macros, such as `@tensoropt`, that try to automate this process.
There is also an `@ncon` macro and an `ncon` function, and we recommend reading the [manual of TensorOperations.jl](https://quantumkithub.github.io/TensorOperations.jl/stable/) to learn more about the possibilities and how they work.
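As a hedged sketch of these alternatives (spaces and index labels chosen arbitrarily; `ncon` lives in TensorOperations.jl), the same matrix chain can be written with an optimized contraction order or in NCON style:

```julia
using TensorKit, TensorOperations

V = ℂ^2
A, B, C = randn(V, V), randn(V, V), randn(V, V)

# let the macro search for a good pairwise contraction order
@tensoropt D[i, l] := A[i, j] * B[j, k] * C[k, l]

# NCON style: positive labels are contracted, negative labels are open
D′ = ncon([A, B, C], [[-1, 1], [1, 2], [2, -2]])
D ≈ D′
```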

A final remark involves the use of adjoints of tensors.
The current framework is such that the user should not be too worried about the actual bipartition into codomain and domain of a given `TensorMap` instance.
Indeed, for tensor contractions the `@tensor` macro figures out the correct manipulations automatically.
However, when wanting to use the `adjoint` of an instance `t::TensorMap{T, S, N₁, N₂}`, the resulting `adjoint(t)` is an `AbstractTensorMap{T, S, N₂, N₁}` and one needs to know the values of `N₁` and `N₂` to know exactly where the `i`th index of `t` will end up in `adjoint(t)`, and hence the index order of `t'`.
Within the `@tensor` macro, one can instead use `conj()` on the whole index expression so as to be able to use the original index ordering of `t`.
For example, for `TensorMap{T, S, 1, 1}` instances, this yields exactly the equivalence one expects, namely one between the following two expressions:

```julia
@tensor C[i, j] := B'[i, k] * A[k, j]
@tensor C[i, j] := conj(B[k, i]) * A[k, j]
```

For e.g. an instance `A::TensorMap{T, S, 3, 2}`, the following two syntaxes have the same effect within an `@tensor` expression: `conj(A[a, b, c, d, e])` and `A'[d, e, a, b, c]`.
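To make this equivalence concrete, the following sketch (using an arbitrarily chosen real space, so that duality plays no role in the bookkeeping) checks both spellings of the conjugate against each other for a `(3, 2)` tensor map:

```julia
using TensorKit

V = ℝ^2
t = randn(V ⊗ V ⊗ V, V ⊗ V)    # N₁ = 3, N₂ = 2, so t' has N₁ = 2, N₂ = 3
@tensor x[a, b, c, d, e] := t'[d, e, a, b, c]
@tensor y[a, b, c, d, e] := conj(t[a, b, c, d, e])
x ≈ y
```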

## Fermionic tensor contractions

TODO

## Anyonic tensor contractions

TODO
46 changes: 46 additions & 0 deletions docs/src/man/factorizations.md
@@ -0,0 +1,46 @@
# [Tensor factorizations](@id ss_tensor_factorization)

```@setup tensors
using TensorKit
using LinearAlgebra
```

As tensors are linear maps, they support various kinds of factorizations.
These functions all interpret the provided `AbstractTensorMap` instances as a map from `domain` to `codomain`, which can be thought of as reshaping the tensor into a matrix according to the current bipartition of the indices.

TensorKit's factorizations are provided by [MatrixAlgebraKit.jl](https://github.com/QuantumKitHub/MatrixAlgebraKit.jl), which supplies both the interface and the implementation of the various operations on the blocks of data.
For specific details on the provided functionality, we refer to its [documentation page](https://quantumkithub.github.io/MatrixAlgebraKit.jl/stable/user_interface/decompositions/).

Finally, note that each factorization uses the current bipartition into `domain` and `codomain` as the axis along which to matricize the tensor and perform the factorization.
In order to obtain factorizations according to a different bipartition of the indices, we can use any of the previously mentioned [index manipulations](@ref s_indexmanipulations) before the factorization.
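For instance, the following minimal sketch (spaces chosen arbitrarily) factorizes a three-index tensor across a different bipartition by permuting first:

```julia
using TensorKit

V = ℂ^2
t = randn(V ⊗ V, V)              # current bipartition: (1, 2) ← (3,)
t′ = permute(t, ((1,), (2, 3)))  # rebipartition to (1,) ← (2, 3)
U, S, Vh = svd_compact(t′)       # SVD across the new bipartition
t′ ≈ U * S * Vh
```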

Some examples to conclude this section:
```@repl tensors
V1 = SU₂Space(0 => 2, 1/2 => 1)
V2 = SU₂Space(0 => 1, 1/2 => 1, 1 => 1)

t = randn(V1 ⊗ V1, V2);
U, S, Vh = svd_compact(t);
t ≈ U * S * Vh
D, V = eigh_full(t' * t);
D ≈ S * S
U' * U ≈ id(domain(U))
S

Q, R = left_orth(t; alg = :svd);
Q' * Q ≈ id(domain(Q))
t ≈ Q * R

U2, S2, Vh2, ε = svd_trunc(t; trunc = truncspace(V1));
Vh2 * Vh2' ≈ id(codomain(Vh2))
S2
ε ≈ norm(block(S, Irrep[SU₂](1))) * sqrt(dim(Irrep[SU₂](1)))

L, Q = right_orth(permute(t, ((1,), (2, 3))));
codomain(L), domain(L), domain(Q)
Q * Q'
P = Q' * Q;
P ≈ P * P
t′ = permute(t, ((1,), (2, 3)));
t′ ≈ t′ * P
```
114 changes: 114 additions & 0 deletions docs/src/man/indexmanipulations.md
@@ -0,0 +1,114 @@
# [Index manipulations](@id s_indexmanipulations)

```@meta
CollapsedDocStrings = true
```

```@setup indexmanip
using TensorKit
using LinearAlgebra
```

A `TensorMap{T, S, N₁, N₂}` is a linear map from a domain (a `ProductSpace{S, N₂}`) to a codomain (a `ProductSpace{S, N₁}`).
In practice, the bipartition of the `N₁ + N₂` indices between domain and codomain is often not fixed: algorithms typically need to reshuffle indices between the two sides, reorder them, or change the arrow direction on individual indices before passing a tensor to a factorization or contraction.

Index manipulations cover all such operations.
They act on the structure of the tensor data in a way that is fully determined by the categorical data of the `sectortype`, such that TensorKit automatically manipulates the tensor entries accordingly.
The operations fall into three groups, which mirror the structure of the source file:

* **Reweighting**: [`flip`](@ref) and [`twist`](@ref) apply local isomorphisms to individual indices without changing the index structure.
* **Space insertion/removal**: [`insertleftunit`](@ref), [`insertrightunit`](@ref) and [`removeunit`](@ref) add or remove trivial (scalar) index factors.
* **Index rearrangements**: [`permute`](@ref), [`braid`](@ref), [`transpose`](@ref) and [`repartition`](@ref) reorder indices and/or move them between domain and codomain.

Throughout this page, new index positions are specified using `Index2Tuple{N₁, N₂}`, i.e. a pair `(p₁, p₂)` of index tuples.
The indices listed in `p₁` form the new codomain and those in `p₂` form the new domain.
The following helpers retrieve the current index structure of a tensor:

```@docs; canonical=false
numout
numin
numind
codomainind
domainind
allind
```

## Reweighting

Reweighting operations modify the entries of a tensor by applying local isomorphisms to individual indices, without changing the number of indices or their partition between domain and codomain.

[`flip`](@ref) changes the arrow direction on selected indices by applying the corresponding isomorphism between a space and its dual.
[`twist`](@ref) applies the topological spin (monoidal twist) to selected indices; for `BraidingStyle(I) == Bosonic()` this is always trivial.

```@docs; canonical=false
flip(t::AbstractTensorMap, I)
twist(::AbstractTensorMap, ::Int)
twist!
```
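A minimal sketch of `flip` (space chosen arbitrarily): the arrow on the flipped index is reversed, while the other indices are untouched.

```julia
using TensorKit

V = ℂ^2
t = randn(V ⊗ V, V)
tf = flip(t, 1)                    # reverse the arrow on index 1
space(tf, 1) == dual(space(t, 1))
space(tf, 2) == space(t, 2)
```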

## Inserting and removing unit spaces

These functions add or remove a trivial tensor product factor at a specified index position, without affecting any other indices.
[`insertleftunit`](@ref) inserts before position `i` and [`insertrightunit`](@ref) inserts after position `i`; [`removeunit`](@ref) undoes either insertion.
Passing `Val(i)` instead of an `Int` for the position may improve type stability.

```@docs; canonical=false
insertleftunit(::AbstractTensorMap, ::Val{i}) where {i}
insertrightunit(::AbstractTensorMap, ::Val{i}) where {i}
removeunit(::AbstractTensorMap, ::Val{i}) where {i}
```
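A minimal sketch (space chosen arbitrarily): inserting a unit after index 2 raises the index count by one, and `removeunit` at that position undoes it.

```julia
using TensorKit

V = ℂ^2
t = randn(V ⊗ V, V)
t1 = insertrightunit(t, 2)   # trivial one-dimensional factor after position 2
numind(t1) == numind(t) + 1
removeunit(t1, 3) ≈ t        # removing it restores the original tensor
```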

## Index rearrangements

These operations reorder indices and/or move them between domain and codomain by applying the transposing or braiding isomorphisms of the underlying category.
They form a hierarchy from most general to most restricted:

- [`braid`](@ref) is the most general: it accepts any permutation and requires a `levels` argument — a tuple of heights, one per index — that determines whether each index crosses over or under the others it has to pass.
- [`permute`](@ref) is a simpler interface for sector types with a symmetric braiding (`BraidingStyle(I) isa SymmetricBraiding`), where over- and under-crossings are equivalent and `levels` is therefore not needed.
- [`transpose`](@ref) is restricted to *cyclic* permutations (indices do not cross).
- [`repartition`](@ref) only moves the codomain/domain boundary without reordering the indices at all.

For plain tensors (`sectortype(t) == Trivial`), `permute` and `braid` act like `permutedims` on the underlying array:

```@repl indexmanip
V = ℂ^2;
t = randn(V ⊗ V ← V ⊗ V);
ta = convert(Array, t);
t′ = permute(t, ((4, 2, 3), (1,)));
convert(Array, t′) ≈ permutedims(ta, (4, 2, 3, 1))
```

```@docs; canonical=false
braid(::AbstractTensorMap, ::Index2Tuple, ::IndexTuple)
braid!
permute(::AbstractTensorMap, ::Index2Tuple)
permute!(::AbstractTensorMap, ::AbstractTensorMap, ::Index2Tuple)
transpose(::AbstractTensorMap, ::Index2Tuple)
transpose!
repartition(::AbstractTensorMap, ::Int, ::Int)
repartition!
```
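A minimal sketch of `repartition` (space chosen arbitrarily): only the codomain/domain boundary moves; the linear index order is unchanged.

```julia
using TensorKit

V = ℂ^2
t = randn(V ⊗ V, V ⊗ V)      # 2 codomain and 2 domain indices
t13 = repartition(t, 1, 3)   # 1 codomain and 3 domain indices
(numout(t13), numin(t13)) == (1, 3)
```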

## Fusing and splitting indices

There is no dedicated function for fusing or splitting indices.
For a plain tensor (`sectortype(t) == Trivial`), this is equivalent to `reshape` on the underlying array.

In the general case there is no canonical embedding of `V1 ⊗ V2` into the fused space `V = fuse(V1 ⊗ V2)`: any two such embeddings differ by a basis transform, i.e. there is a gauge freedom.
TensorKit resolves this by requiring the user to construct an explicit isomorphism — the *fuser* — and contract it with the tensor:

```julia
f = unitary(fuse(space(t, i) ⊗ space(t, j)), space(t, i) ⊗ space(t, j))
@tensor t_fused[…, a, …] := f[a, i, j] * t[…, i, j, …]
```

Splitting is then the adjoint of the same map:

```julia
@tensor t_split[…, i, j, …] := f'[i, j, a] * t_fused[…, a, …]
```

Using `f'` as the splitter guarantees that the round-trip is the identity, i.e. `t_split == t`.
Using a *different* isomorphism to split would give a physically equivalent but numerically different tensor, so it is important to keep `f` and its adjoint consistent throughout a calculation.
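A concrete version of this round trip (spaces chosen arbitrarily), fusing the two codomain indices of a three-index tensor and splitting them again with the same fuser:

```julia
using TensorKit

V1, V2 = ℂ^2, ℂ^3
t = randn(V1 ⊗ V2, V1)
f = unitary(fuse(V1 ⊗ V2), V1 ⊗ V2)  # the fuser, constructed once and reused
@tensor tf[a; c] := f[a, i, j] * t[i, j, c]
@tensor ts[i j; c] := f'[i, j, a] * tf[a, c]
ts ≈ t                               # round trip with f and f' is the identity
```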

Note that tensor factorizations (SVD, QR, etc.) can be applied directly to any index bipartition without needing to fuse indices first; see [Tensor factorizations](@ref ss_tensor_factorization).