@@ -50,7 +50,7 @@ \section{Runge-Kutta Methods}
\end{align}
\end{definition}

- We observe that when $a_{ij} = 0$ for each $i \ge j$, \cref{eq:rk_lin} can be
+ We observe that when $a_{ij} = 0$ for each $j \ge i$, \cref{eq:rk_lin} can be
computed without solving a system of algebraic equations, and we call methods
with this property \emph{explicit}, and \emph{implicit} otherwise.
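
For intuition, suppose \cref{eq:rk_lin} has the usual stage form (an
illustrative aside; the notation here is an assumption, not quoted from the
definition above): a strictly lower triangular $A$ lets each stage be computed
from the earlier ones alone,
\begin{equation*}
z_i = h \sum_{j=1}^{i-1} a_{ij} f(t_n + c_j h, y_n + z_j),
\qquad i = 1, \dots, s,
\end{equation*}
so the stages form a forward recurrence with no algebraic solve, whereas a
full $A$ couples all $s$ stages into one nonlinear system.
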
@@ -115,8 +115,8 @@ \subsection{Change of Basis}
\begin{equation}
V^{-1}A^{-1}V = \Lambda,
\end{equation}
- to decouple the $sm \times sm$ system. To transforming \cref{eq:newton1} to
- the eigenbasis of $A^{-1}$, notice
+ to decouple the $sm \times sm$ system. To transform \cref{eq:newton1} to the
+ eigenbasis of $A^{-1}$, notice
\begin{equation}
A^{-1}x = b \implies V^{-1}A^{-1}x = V^{-1}b \implies \Lambda V^{-1}x =
V^{-1}b.
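
A quick numerical check of this decoupling (an illustrative sketch only; the
random matrix stands in for an invertible, diagonalizable RK coefficient
matrix $A$):

import numpy as np

s = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((s, s)) + s * np.eye(s)  # generic invertible matrix
b = rng.standard_normal(s)

# Eigendecomposition of A^{-1}: A^{-1} V = V Lambda.
lam, V = np.linalg.eig(np.linalg.inv(A))

# Direct solve of A^{-1} x = b ...
x_direct = np.linalg.solve(np.linalg.inv(A), b)

# ... versus the eigenbasis route: Lambda (V^{-1} x) = V^{-1} b is diagonal,
# so each component reduces to a scalar division.
x_eigen = V @ (np.linalg.solve(V, b) / lam)

assert np.allclose(x_direct, x_eigen)
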
@@ -128,7 +128,7 @@ \subsection{Change of Basis}
\bm{z}^k + \tilde{\bm{f}}(\bm{z}^k)).
\end{align}
We can introduce the transformed variable $\bm{w} = (V^{-1}\otimes I_m) \bm{z}$
- to further reduce computation, so \cref{eq:newton1} and \cref{eq:newton2} is now
+ to further reduce computation, so \cref{eq:newton1} and \cref{eq:newton2} are now
\begin{align} \label{eq:newton2}
(h^{-1} \Lambda \otimes M - I_s\otimes J) \Delta\bm{w}^k
&= -(h^{-1} \Lambda \otimes M) \bm{w}^k +
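
Because $\Lambda$ is diagonal, this system splits into $s$ independent
$m \times m$ solves. A sketch of that blockwise solve (illustrative only;
h, M, J, and rhs are placeholder names, and complex arithmetic covers
complex-conjugate eigenvalue pairs):

import numpy as np

def solve_transformed(lam, h, M, J, rhs):
    """Solve (h^{-1} Lambda (x) M - I_s (x) J) dw = rhs block by block."""
    s, m = len(lam), M.shape[0]
    dw = np.empty((s, m), dtype=complex)
    for i in range(s):
        W_i = (lam[i] / h) * M - J        # m x m block for eigenvalue lam[i]
        dw[i] = np.linalg.solve(W_i, rhs[i])
    return dw
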
@@ -140,16 +140,15 @@ \subsection{Change of Basis}

\subsection{Stopping Criteria}
Note that throughout this subsection, $\norm{\cdot}$ denotes the norm that is
- used by the time-stepping error estimate. We are using this choice because we
- need to make sure that convergent results from a nonlinear solver does not
- introduce step rejections.
+ used by the time-stepping error estimate. By doing so, we can be confident that
+ convergent results from a nonlinear solver do not introduce step rejections.

There are two approaches to estimate the error of a nonlinear solver: by the
displacement $\Delta \bm{z}^k$ or by the residual $G(\bm{z}^k)$. The residual
behaves like the error scaled by the Lipschitz constant of $G$. Stiff equations
- have a large Lipschitz constant. However, this constant is not known a prior.
- This makes the residual test unreliable. Hence, we are going to focus on the
- analysis of the displacement.
+ have a large Lipschitz constant; furthermore, this constant is not known a
+ priori. This makes the residual test unreliable. Hence, we are going to focus on
+ the analysis of the displacement.

Simplified Newton iteration converges linearly, so we can model the convergence
process as
@@ -171,16 +170,16 @@ \subsection{Stopping Criteria}
\norm{\Delta\bm{z}^{k}}\sum_{i=0}^\infty \theta^i = \frac{\theta}{1-\theta}
\norm{\Delta\bm{z}^{k}}.
\end{equation}
- To ensure nonlinear solver error does not cause that step rejection, we need a
+ To ensure the nonlinear solver error does not cause step rejections, we need a
safety factor $\kappa = 1/10$. Our first convergence criterion is
\begin{equation}
\eta_k \norm{\Delta\bm{z}^k} \le \kappa, \qq{if}
k \ge 1 \text{ and } \theta_k < 1, ~ \eta_k=\frac{\theta_k}{1-\theta_k}.
\end{equation}
- One major drawback with this convergence criterion is that we can only check it
- after two iterations. To cover the case of convergence in the first iteration,
- we need to define $\eta_0$. It is reasonable to believe that the convergence
- rate remains relatively constant with the same $W$-matrix, so if $W$ is reused
+ One major drawback with this criterion is that we cannot check it until the
+ second iteration, since $\theta_k$ needs two displacements. To cover the case of
+ convergence in the first iteration, we need to define $\eta_0$. It is reasonable
+ to believe that the convergence rate remains relatively constant with the same
+ $W$-matrix, so if $W$ is reused
\todo[inline]{Add the reuse logic section} from the previous nonlinear solve,
then we define
\begin{equation}
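
In code, the displacement test above might look like this (an illustrative
sketch, not OrdinaryDiffEq.jl's implementation; dz_norms holds
$\norm{\Delta\bm{z}^0}, \dots, \norm{\Delta\bm{z}^k}$ in the error-estimate
norm, and eta_old is carried over from a previous solve with the same
$W$-matrix):

def newton_converged(dz_norms, eta_old, kappa=0.1):
    """Displacement-based convergence test for simplified Newton."""
    k = len(dz_norms) - 1
    if k == 0:
        eta = eta_old          # heuristic eta_0 from the previous solve
    else:
        theta = dz_norms[k] / dz_norms[k - 1]  # estimated contraction rate
        if theta >= 1.0:
            return False       # not contracting, cannot have converged
        eta = theta / (1.0 - theta)
    return eta * dz_norms[k] <= kappa
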
@@ -206,18 +205,18 @@ \subsection{Stopping Criteria}
\end{equation}
Also, the algorithm diverges if the max number of iterations is reached without
convergence. A subtler criterion for divergence is: no convergence is predicted
- by extrapolating to $\norm{\Delta\bm{z}^{k_{\max}}}$, e.g.
+ by extrapolating to $\norm{\Delta\bm{z}^{k_{\max}}}$, i.e.
\begin{equation}
\frac{\theta_k^{k_{\max}-k}}{1-\theta_k} \norm{\Delta\bm{z}^k} > \kappa.
\end{equation}
\todo[inline]{OrdinaryDiffEq.jl doesn't actually check this condition anymore.}

- \subsection{$W$-matrix reuse}
+ \subsection{$W$-matrix Reuse}

- \section{Step size control}
- \subsection{Standard (Integral) control}
- \subsection{Predictive (modified PI) control}
+ \section{Step Size Control}
\subsection{Smooth Error Estimation}
+ \subsection{Standard (Integral) Control}
+ \subsection{Predictive (Modified PI) Control}

\nocite{hairer2010solving}
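
The divergence prediction above admits an equally small sketch (illustrative
only; theta, k, and k_max follow the notation of the stopping-criteria
subsection):

def predicts_divergence(theta, dz_norm, k, k_max, kappa=0.1):
    """Extrapolate the linear convergence model: if the estimated error
    after the remaining k_max - k iterations would still exceed kappa,
    give up early and retry with a smaller step."""
    if theta >= 1.0:
        return True            # contraction rate >= 1: diverging
    return theta ** (k_max - k) / (1.0 - theta) * dz_norm > kappa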