Commit 3c15ab1 (parent dd91be3)

Improved README

1 file changed (+10, -9 lines)

README.md: 10 additions, 9 deletions
```diff
@@ -1,6 +1,12 @@
-# Trainax - Learning Methodologies for Autoregressive Neural Emulators
+# Trainax
 
-![](https://ceyron.github.io/predictor-learning-setups/sup-3-none-true-full_gradient.svg)
+<p align="center">
+  <b>Learning Methodologies for Autoregressive Neural Emulators</b>
+</p>
+
+<p align="center">
+  <img src="https://ceyron.github.io/predictor-learning-setups/sup-3-none-true-full_gradient.svg" width="400">
+</p>
 
 After the discretization of space and time, the simulation of a transient
 partial differential equation amounts to the repeated application of a
```
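The context lines above state the core abstraction of the README: simulation as the repeated application of a time stepper. As a minimal, hypothetical JAX sketch (not Trainax's actual API; `stepper` and `rollout` are stand-in names, and the toy stepper just damps the state), such an autoregressive rollout could look like:

```python
import jax
import jax.numpy as jnp

# Stand-in time stepper P: u^t -> u^{t+1}. In practice this would be a
# numerical simulator or a neural emulator; this toy version damps the state.
def stepper(u):
    return 0.99 * u

def rollout(stepper, u_0, num_steps):
    """Autoregressively apply `stepper` and stack the produced time levels."""
    def body(u, _):
        u_next = stepper(u)
        return u_next, u_next  # carry the state forward, record it

    _, trajectory = jax.lax.scan(body, u_0, None, length=num_steps)
    return trajectory  # shape: (num_steps, *u_0.shape)

u_0 = jnp.ones((64,))  # some spatially discretized initial condition
traj = rollout(stepper, u_0, 10)
```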
```diff
@@ -58,11 +64,6 @@ $$
 
 where $l$ is a **time-level loss**. In the easiest case $l = \text{MSE}$.
 
-### More
-
-Focus is clearly on the number of update steps, not on the number of epochs
-
-
 ### A taxonomy of learning setups
 
 The major axes that need to be chosen are:
```
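For the rollout loss referenced in this hunk, the simplest time-level choice $l = \text{MSE}$ could be realized as below. This is a hedged sketch, not Trainax's implementation: it reuses the stand-in `rollout` helper from the previous snippet and assumes a supervised setup with a precomputed reference trajectory.

```python
import jax.numpy as jnp

def supervised_rollout_loss(stepper, u_0, ref_trajectory):
    """Sum of time-level MSE losses over an autoregressive rollout.

    `ref_trajectory` has shape (num_steps, *u_0.shape) and holds the
    ground-truth states u^1, ..., u^T. Uses the `rollout` helper above.
    """
    pred_trajectory = rollout(stepper, u_0, ref_trajectory.shape[0])
    # l = MSE at each time level (mean over all spatial axes) ...
    time_level_mse = jnp.mean(
        (pred_trajectory - ref_trajectory) ** 2,
        axis=tuple(range(1, pred_trajectory.ndim)),
    )
    # ... then summed over the T time levels
    return jnp.sum(time_level_mse)
```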
```diff
@@ -108,7 +109,7 @@ There are three levels of hierarchy:
 diverted-chain, mix-chain, residuum). The most general diverted chain
 implementation contains supervised and branch-one diverted chain as special
 cases. See the section "Relation between Diverted Chain and Residuum
-Training" for details how residuum training fits into the picture. All
+Training" (TODO) for details how residuum training fits into the picture. All
 configurations allow setting additional constructor arguments to, e.g., cut
 the backpropagation through time (sparsely) or to supply time-level
 weightings (for example to exponentially discount contributions over long
```
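The constructor arguments mentioned in this hunk belong to Trainax's configuration classes and are not spelled out here. The sketch below only illustrates the two underlying mechanics in generic JAX, with all names hypothetical: `jax.lax.stop_gradient` to sparsely cut backpropagation through time, and a per-time-level weight that exponentially discounts later contributions.

```python
import jax
import jax.numpy as jnp

def weighted_rollout_loss(stepper, u_0, ref_trajectory, cut_every=2, decay=0.9):
    """Rollout MSE with (a) sparsely cut BPTT and (b) exponential weights.

    Hypothetical illustration of the mechanics, not Trainax's implementation.
    """
    num_steps = ref_trajectory.shape[0]
    u = u_0
    loss = 0.0
    for t in range(num_steps):
        u = stepper(u)
        weight = decay**t  # exponentially discount later time levels
        loss = loss + weight * jnp.mean((u - ref_trajectory[t]) ** 2)
        if (t + 1) % cut_every == 0:
            # Sparsely cut backpropagation through time: gradients still reach
            # this time level's loss, but do not flow past this point into
            # earlier applications of the stepper.
            u = jax.lax.stop_gradient(u)
    return loss
```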
```diff
@@ -119,4 +120,4 @@ There are three levels of hierarchy:
 combining the relevant configuration with the `GeneralTrainer` and a
 trajectory substacker.
 
-### Relation between Diverted Chain and Residuum Training
+You can find an overview of predictor learning setups [here](https://fkoehler.site/predictor-learning-setups/).
```
