In the last part of this chapter, we will briefly introduce the idea of the *tensor*.
If you look at some articles online, a tensor is often defined simply as an n-dimensional array.
However, mathematically, there are differences between the two.
In an n-dimensional space, a tensor that contains $$m$$ indices is a mathematical object that obeys certain transformation rules.
For example, in a three-dimensional space, the value `A = [0, 1, 2]` indicates a vector in this space.
We can find each element in this vector by a single index $$i$$, e.g. $$A_1 = 1$$.
This vector is an object in this space, and it stays the same even if we change the standard Cartesian coordinate system to another system.
But if we do so, the content of $$A$$ needs to be updated accordingly.
Therefore we say that a tensor can normally be expressed in the form of an ndarray, but it is not the same as an ndarray.
That's why we keep using the term "ndarray" in this chapter and throughout the book.
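
To make this concrete, here is a minimal sketch (using the vector `A` above) of storing such a vector as a one-dimensional ndarray in Owl and reading an element by a single index:

```ocaml
open Owl

(* the vector A = [0, 1, 2] stored as a 1-d ndarray *)
let a = Arr.of_array [|0.; 1.; 2.|] [|3|]

(* a single index locates an element, e.g. A_1 = 1 *)
let a1 = Arr.get a [|1|]
```
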

The basic idea about tensors is that, since the object itself stays the same, when the coordinate basis changes in one direction, the components of the vector have to change in the opposite direction.
Consider a single vector $$v$$ in a coordinate system with basis $$e$$.
We can change the coordinate basis to $$\tilde{e}$$ with a linear transformation $$\tilde{e} = Ae$$, where $$A$$ is a matrix. For any vector in this space expressed in the basis $$e$$, its components will be transformed as $$\tilde{v} = A^{-1}v$$, or we can write it as:

$$\tilde{v}^i = \sum_j~B_j^i~v^j.$$

Here $$B=A^{-1}$$.
We call such a vector a *contravector* because its components change in the opposite way to the basis.
Note that we use superscripts to denote the elements of contravectors.
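
As a minimal numerical sketch of this rule (the matrix and vector below are arbitrary values chosen for illustration, not taken from the text), scaling the basis by $$A$$ shrinks the components by $$B = A^{-1}$$:

```ocaml
open Owl

(* basis change e~ = A e: here each basis vector is simply doubled *)
let a = Mat.of_array [|2.; 0.; 0.; 2.|] 2 2

(* components transform with the inverse: B = A^(-1) *)
let b = Linalg.D.inv a

(* components of v in the old basis e *)
let v = Mat.of_array [|4.; 6.|] 2 1

(* components of the same vector in the new basis e~: [2; 3] *)
let v' = Mat.dot b v
```
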

As a comparison, think about the matrix multiplication $$\alpha~v$$. The $$\alpha$$ itself forms a different vector space, whose basis is related to the basis of $$v$$'s vector space.
It turns out that $$\alpha$$ changes in the same direction as $$e$$: when $$v$$ uses the new basis $$\tilde{e} = Ae$$, the components of $$\alpha$$ change in the same way:

$$\tilde{\alpha}_j = \sum_i~A_j^i~\alpha_i.$$

It is called a *covector*, and its elements are denoted with subscripts.
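
To contrast the two rules, here is a similar sketch (again with arbitrary illustrative values) showing that a covector's components transform with $$A$$ itself rather than with its inverse:

```ocaml
open Owl

(* the same kind of basis change as before: e~ = A e *)
let a = Mat.of_array [|2.; 0.; 0.; 2.|] 2 2

(* covector components written as a row vector *)
let alpha = Mat.of_array [|5.; 7.|] 1 2

(* in the new basis the components change in the same direction
   as the basis: alpha~ = alpha A = [10; 14] *)
let alpha' = Mat.dot alpha a
```
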
We can further extend this idea to matrices. Think about a linear map $$L$$. It can be represented as a matrix, so that we can apply it to any vector using matrix multiplication.
With a change of the coordinate system, it can be shown that the content of the linear map $$L$$ itself is updated to:

$$\tilde{L}_j^i = \sum_{kl}~B_k^i~L_l^k~A_j^l.$$

Again, note that we use both a superscript and a subscript for the linear map $$L$$, since it contains one covariant component and one contravariant component.
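
The following sketch (with arbitrary illustrative values) checks this rule numerically: applying the transformed map $$\tilde{L} = BLA$$ to the transformed components of a vector gives the same result as transforming the output of the original map.

```ocaml
open Owl

let a = Mat.of_array [|1.; 1.; 0.; 2.|] 2 2   (* basis change A *)
let b = Linalg.D.inv a                        (* B = A^(-1) *)
let l = Mat.of_array [|3.; 0.; 1.; 2.|] 2 2   (* a linear map in the old basis *)
let v = Mat.of_array [|1.; 4.|] 2 1           (* a vector's components in the old basis *)

let l' = Mat.(dot (dot b l) a)                (* L~ = B L A *)
let v' = Mat.dot b v                          (* v~ = B v *)

(* the two results below are equal: L~ v~ = B (L v) *)
let lhs = Mat.dot l' v'
let rhs = Mat.dot b (Mat.dot l v)
```
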
Furthermore, we can extend this process and define the tensor.
A tensor $$T$$ is an object that is invariant under a change of coordinates, and whose components change in a special way with that change of coordinates.

We can use the `contract2` function in the `Ndarray` module. It takes an array of `int * int` tuples that specifies the pairs of indices to be contracted in the two input ndarrays. Here is the code:

```ocaml
let x = Arr.sequential [|3;4;5|]
let y = Arr.sequential [|4;3;2|]

let z1 = Arr.contract2 [|(0, 1); (1, 0)|] x y
```

The indices mean that, in the contraction, the 0th dimension of `x` corresponds with the 1st dimension of `y`, and the 1st dimension of `x` corresponds with the 0th dimension of `y`.
We can verify the result against a naive implementation:
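
A minimal sketch of such a check is given below. It assumes the `x`, `y` and `z1` defined above, and that the result keeps the free dimension of `x` first; it simply loops over the free indices while summing over the two contracted pairs (the index names are only illustrative).

```ocaml
(* z1.(i,j) should equal the sum over p and q of x.(p,q,i) *. y.(q,p,j),
   where p pairs dimension 0 of x with dimension 1 of y,
   and q pairs dimension 1 of x with dimension 0 of y *)
let z2 = Arr.init_nd (Arr.shape z1) (fun idx ->
  let i = idx.(0) and j = idx.(1) in
  let sum = ref 0. in
  for p = 0 to 2 do
    for q = 0 to 3 do
      sum := !sum +. Arr.get x [|p; q; i|] *. Arr.get y [|q; p; j|]
    done
  done;
  !sum)

let () = assert (Arr.approx_equal z1 z2)
```
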
Actually, many tensor operations involve summation over particular indices.
Therefore, when using tensors in applications such as linear algebra and physics, the *Einstein notation* is used to simplify the notation.
It removes the explicit summation sign: any twice-repeated index in a term is implicitly summed over (no index is allowed to occur three or more times in a term).
For example, the matrix multiplication notation $$C_{ij} = \sum_{k}A_{ik}B_{kj}$$ can be simplified to $$C_{ij} = A_{ik}B_{kj}$$.
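
As a small sketch of how this repeated-index summation maps back to the `contract2` function from earlier (the shapes below are arbitrary illustrative choices), the matrix product $$C_{ij} = A_{ik}B_{kj}$$ is a single contraction pairing dimension 1 of the first ndarray with dimension 0 of the second:

```ocaml
open Owl

let a = Arr.sequential [|2; 3|]
let b = Arr.sequential [|3; 4|]

(* C_ij = A_ik B_kj: contract dimension 1 of a with dimension 0 of b *)
let c = Arr.contract2 [|(1, 0)|] a b
```
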

Tensor calculus is of great importance in disciplines such as geometry and physics.
More details about tensor calculus are beyond the scope of this book.

## Summary

The N-dimensional array is the fundamental data type in Owl, as well as in many other numerical libraries such as NumPy.
This chapter explains the Ndarray module in detail, including its creation, properties, manipulation, serialization, etc.
Besides, we also discuss the subtle difference between a tensor and an ndarray in this chapter.
This chapter is easy to follow, and can serve as a reference whenever users need a quick check of the functions they need.