
Commit 3992c77

doc page writing improvements

1 parent df5096e

3 files changed: +23 −26 lines changed


docs/source/api/lift.ipynb

+1 −2
@@ -38,7 +38,6 @@
 "\n",
 "- Operator Inference models have polynomial structure. To calibrate a model with data, it can be advantageous to transform and/or augment the state variables to induce a desired structure.\n",
 "- {class}`LifterTemplate` provides an API for variable transformation / augmentation operations.\n",
-"- The constructor of the [`ROM`](opinf.roms.ROM) class receives an (optional) `lifter` argument.\n",
 ":::"
 ]
 },
@@ -81,7 +80,7 @@
 "In some systems with nonpolynomial nonlinearities, a change of variables can induce a polynomial structure, which can greatly improve the effectiveness of Operator Inference.\n",
 "Such variable transformations are often called _lifting maps_, especially if the transformation augments the state by introducing additional variables.\n",
 "\n",
-"This module defines a template class for implementing lifting maps that can interface with the [`ROM`](opinf.roms.ROM) class and provides a few examples of lifting maps."
+"This module defines a template class for implementing lifting maps that can interface with {mod}`opinf.roms` classes and provides a few examples of lifting maps."
 ]
 },
 {
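The lifting idea in this page can be made concrete with a small NumPy sketch. This is illustration only, not the `LifterTemplate` API: the functions and variable names below are made up. Augmenting the state with an auxiliary variable $w = q^2$ lets a cubic term $q^3$ be rewritten as the quadratic product $q \cdot w$, inducing the quadratic structure that Operator Inference models favor.

```python
import numpy as np

def lift(states):
    """Augment each state snapshot q with the auxiliary variable w = q**2,
    so a cubic term q**3 can be written as the quadratic product q * w."""
    return np.vstack([states, states**2])

def unlift(lifted_states):
    """Recover the original variables (the top block of the lifted state)."""
    n = lifted_states.shape[0] // 2
    return lifted_states[:n]

# 3 state variables, 4 snapshots (synthetic data for demonstration).
Q = np.linspace(-1.0, 1.0, 12).reshape(3, 4)
Q_lifted = lift(Q)         # 6 x 4: original variables stacked on their squares
Q_back = unlift(Q_lifted)  # 3 x 4: the round trip recovers Q exactly
```

An invertible round trip (`unlift(lift(Q)) == Q`) is the key property a lifting map needs so that predictions in the lifted variables can be mapped back to the original ones.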

docs/source/api/operators.ipynb

+22 −23
@@ -67,12 +67,8 @@
 ":class: note\n",
 "\n",
 "- `opinf.operators` classes represent the individual terms in a [model](opinf.models). [Nonparametric operators](sec-operators-nonparametric) are functions of the state and input, while [parametric operators](sec-operators-parametric) are also dependent on one or more external parameters.\n",
-" - [`apply()`](OperatorTemplate.apply) applies the operator to given state and input vectors.\n",
-" - [`jacobian()`](OperatorTemplate.apply) constructs the derivative of the operator with respect to the state at given state and input vectors.\n",
-"- Operators that can be written as the product of a matrix (the *operator matrix*) and a known vector-valued function can be calibrated through solving a regression problem.\n",
-" - [`operator_dimension()`](OpInfOperator.operator_dimension) defines the size of the operator matrix for given state and input dimensions.\n",
-" - [`datablock()`](OpInfOperator.datablock) uses state and input snapshots to construct a block of the data matrix for the regression problem.\n",
-"- [Model](opinf.models) constructors receive a list of operator objects. A model's [`fit()`](opinf.models.ContinuousModel.fit) method constructs and solves a regression problem to learn the operator matrices.\n",
+"- Operators that can be written as the product of a matrix and a known vector-valued function can be [calibrated](sec-operators-calibration) through solving a regression problem.\n",
+"- {mod}`opinf.models` objects are constructed with a list of operator objects. A model's [`fit()`](opinf.models.ContinuousModel.fit) method constructs and solves a regression problem to learn the operator matrices.\n",
 ":::\n",
 "\n",
 "<!-- - Monolithic operators are designed for dense systems; multilithic operators are designed for systems with sparse block structure. -->"
@@ -256,7 +252,7 @@
 "There are two ways to determine operator matrices in the context of model reduction:\n",
 "\n",
 "- [**Non-intrusive Operator Inference**](sec-operators-calibration): Learn operator matrices from data.\n",
-"- [**Intrusive (Petrov-)Galerkin Projection**](sec-operators-projection): Compress an existing high-dimensional operator.\n",
+"- [**Intrusive (Petrov--)Galerkin Projection**](sec-operators-projection): Compress an existing high-dimensional operator.\n",
 "\n",
 "Once the `entries` are set, the following methods are used to compute the action\n",
 "of the operator or its derivatives.\n",
@@ -394,7 +390,10 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To facilitate this, nonparametric OpInf operator classes have a static [`datablock()`](OpInfOperator.datablock) method that, given the state-input data pairs $\\{(\\qhat_j,\\u_j)\\}_{j=0}^{k-1}$, forms the matrix\n",
+"Nonparametric OpInf operator classes have two static methods that facilitate constructing the operator regression problem.\n",
+"\n",
+"- [`operator_dimension()`](OpInfOperator.operator_dimension): given the state dimension $r$ and the input dimension $m$, return the data dimension $d_\\ell$.\n",
+"- [`datablock()`](OpInfOperator.datablock): given the state-input data pairs $\\{(\\qhat_j,\\u_j)\\}_{j=0}^{k-1}$, form the matrix\n",
 "\n",
 "$$\n",
 "\\begin{aligned}\n",
@@ -407,7 +406,7 @@
 "\\end{aligned}\n",
 "$$\n",
 "\n",
-"the complete data matrix $\\D$ is then the concatenation of the data matrices from each operator:\n",
+"The complete data matrix $\\D$ is the concatenation of the data matrices from each operator:\n",
 "\n",
 "$$\n",
 "\\begin{aligned}\n",
@@ -417,9 +416,7 @@
 " \\\\ & &\n",
 " \\end{array}\\right].\n",
 "\\end{aligned}\n",
-"$$\n",
-"\n",
-"Nonparametric OpInf operators also have a static [`operator_dimension()`](OpInfOperator.operator_dimension) method that, given the state dimension $r$ and the input dimension $r$, returns the data dimension $d_\\ell$."
+"$$"
 ]
 },
 {
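The block sizes described in this hunk can be sketched with hypothetical stand-in functions. These are not the actual `OpInfOperator` static methods, just plain functions with made-up names mimicking the described interface, for a model with a linear term $\Ahat\qhat$ (data dimension $r$) and an input term $\Bhat\u$ (data dimension $m$):

```python
import numpy as np

# Hypothetical stand-ins for the described static methods (illustration only).

def linear_operator_dimension(r, m):
    return r          # the linear data block has one row per state variable

def input_operator_dimension(r, m):
    return m          # the input data block has one row per input variable

def linear_datablock(states, inputs):
    return states     # r x k block: [qhat_0 ... qhat_{k-1}]

def input_datablock(states, inputs):
    return inputs     # m x k block: [u_0 ... u_{k-1}]

r, m, k = 4, 2, 30
Q = np.ones((r, k))   # synthetic state snapshots
U = np.ones((m, k))   # synthetic input snapshots

# D^T stacks each operator's data block, so D is k x (r + m).
D = np.vstack([linear_datablock(Q, U), input_datablock(Q, U)]).T
```

The column count of `D` is the sum of the individual data dimensions, which is exactly what the model needs in order to split the learned $\Ohat$ back into per-operator matrices.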
@@ -430,7 +427,7 @@
 ":class: important\n",
 "\n",
 "Model classes from {mod}`opinf.models` are instantiated with a list of operators.\n",
-"The model's `fit()` method calls the [`datablock()`](OpInfOperator.datablock) method of each OpInf operator to assemble the full data matrix $\\D$, solves the regression problem for the full data matrix $\\Ohat$ (see {mod}`opinf.lstsq`), and extracts the operator matrix $\\Ohat_{\\ell}$ for each $\\ell = 1, \\ldots, n_{\\textrm{terms}}$ using the [`operator_dimension()`](OpInfOperator.operator_dimension)"
+"The model's `fit()` method calls the [`datablock()`](OpInfOperator.datablock) method of each OpInf operator to assemble the full data matrix $\\D$, solves the regression problem for the full operator matrix $\\Ohat$ (see {mod}`opinf.lstsq`), and extracts from $\\Ohat$ the individual operator matrix $\\Ohat_{\\ell}$ for each $\\ell = 1, \\ldots, n_{\\textrm{terms}}$ using the [`operator_dimension()`](OpInfOperator.operator_dimension) method."
 ]
 },
 {
@@ -446,14 +443,14 @@
 "\\begin{aligned}\n",
 " \\Z\n",
 " &= [~\\dot{\\qhat}_0~~\\cdots~~\\dot{\\qhat}_{k-1}~] \\in \\RR^{r\\times k},\n",
-" \\\\\n",
+" \\\\ \\\\\n",
 " \\Ohat\n",
 " &= [~\\Ahat~~\\Bhat~] \\in \\RR^{r \\times (r + m)},\n",
-" \\\\\n",
+" \\\\ \\\\\n",
 " \\D\\trp\n",
 " &= \\left[\\begin{array}{ccc}\n",
 " \\qhat_0 & \\cdots & \\qhat_{k-1}\n",
-" \\\\ \\vdots & & \\vdots \\\\\n",
+" \\\\\n",
 " \\u_0 & \\cdots & \\u_{k-1}\n",
 " \\end{array}\\right]\n",
 " \\in \\RR^{(r + m) \\times k}.\n",
@@ -475,7 +472,7 @@
 "\\end{aligned}\n",
 "$$\n",
 "\n",
-"That is, the OLS Operator Inference regression minimizes a sum of residuals of the model equation {eq}`eq:operators:ltiexample` with respect to available data.\n",
+"That is, the ordinary least-squares Operator Inference regression minimizes a sum of residuals of the model equation {eq}`eq:operators:ltiexample` with respect to available data.\n",
 ":::"
 ]
 },
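The LTI regression in this hunk ($\Z = [\dot{\qhat}_j]$, $\Ohat = [\Ahat~\Bhat]$, $\D\trp$ stacking states over inputs) can be reproduced with plain NumPy. A sketch on synthetic, noiseless data, with all names (`A_true`, `Q`, `U`, `Z`) chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
r, m, k = 5, 2, 200

# Ground-truth reduced operators (synthetic, for demonstration).
A_true = rng.standard_normal((r, r))
B_true = rng.standard_normal((r, m))

# Snapshot data: states Q (r x k), inputs U (m x k), time derivatives Z (r x k).
Q = rng.standard_normal((r, k))
U = rng.standard_normal((m, k))
Z = A_true @ Q + B_true @ U

# Data matrix: D^T stacks states over inputs, so D is k x (r + m).
D = np.vstack([Q, U]).T

# Ordinary least squares: minimize || D @ Ohat^T - Z^T ||_F.
Ohat_T, *_ = np.linalg.lstsq(D, Z.T, rcond=None)
Ohat = Ohat_T.T                        # r x (r + m)

# Split Ohat into per-operator matrices by their data dimensions (r, then m).
Ahat, Bhat = Ohat[:, :r], Ohat[:, r:]
```

With noiseless data and $k \gg r + m$, the regression recovers the true operators; with real (noisy) data, the `opinf.lstsq` solvers described in the text add regularization.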
@@ -514,7 +511,7 @@
 "\\end{aligned}\n",
 "$$\n",
 "\n",
-"Using ordinary least squares regression, the optimization problem is given by\n",
+"Using ordinary least-squares regression, the optimization problem is given by\n",
 "\n",
 "$$\n",
 "\\begin{aligned}\n",
@@ -560,7 +557,7 @@
 "- $\\qhat\\in\\RR^{r}$ is the reduced-order state, and\n",
 "- $\\u\\in\\RR^{m}$ is the input (the same as before).\n",
 "\n",
-"This approach uses the low-dimensional state approximation $\\q = \\Vr\\qhat$.\n",
+"This approach uses the low-dimensional state approximation $\\q \\approx \\Vr\\qhat$.\n",
 "If $\\Wr = \\Vr$, the result is called a *Galerkin projection*.\n",
 "Note that if $\\Vr$ has orthonormal columns, we have in this case the simplification\n",
 "\n",
@@ -571,7 +568,7 @@
 "\\end{aligned}\n",
 "$$\n",
 "\n",
-"If $\\Wr \\neq \\Vr$, the result is called a *Petrov-Galerkin projection*."
+"If $\\Wr \\neq \\Vr$, the result is called a *Petrov--Galerkin projection*."
 ]
 },
 {
@@ -583,7 +580,8 @@
 "\n",
 "Consider the bilinear operator\n",
 "$\\Op(\\q,\\u) = \\N[\\u\\otimes\\q]$ where $\\N\\in\\RR^{n \\times nm}$.\n",
-"The intrusive Petrov-Galerkin projection of $\\Op$ is the bilinear operator\n",
+"This type of operator can be represented as a {class}`StateInputOperator`.\n",
+"The intrusive Petrov--Galerkin projection of $\\Op$ is the bilinear operator\n",
 "\n",
 "$$\n",
 "\\begin{aligned}\n",
@@ -594,7 +592,8 @@
 "$$\n",
 "\n",
 "where $\\Nhat = (\\Wr\\trp\\Vr)^{-1}\\Wr\\trp\\N(\\I_m\\otimes\\Vr) \\in \\RR^{r\\times rm}$.\n",
-"The intrusive Galerkin projection has $\\Nhat = \\Vr\\trp\\N(\\I_m\\otimes\\Vr)$.\n",
+"Hence, $\\Ophat$ can also be represented as a {class}`StateInputOperator`.\n",
+"Using Galerkin projection ($\\Wr = \\Vr$), $\\Nhat$ simplifies to $\\Nhat = \\Vr\\trp\\N(\\I_m\\otimes\\Vr)$.\n",
 ":::\n",
 "\n",
 "Every operator class has a [`galerkin()`](OperatorTemplate.galerkin) method that performs intrusive projection."
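The bilinear Galerkin formula $\Nhat = \Vr\trp\N(\I_m\otimes\Vr)$ in this hunk can be checked numerically with NumPy. A sketch with made-up dimensions, using the Galerkin case $\Wr = \Vr$ with orthonormal $\Vr$ (this is not the `galerkin()` method itself, just a verification of the formula):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 4, 3

# Orthonormal basis Vr (Galerkin projection: Wr = Vr).
Vr, _ = np.linalg.qr(rng.standard_normal((n, r)))

# High-dimensional bilinear operator Op(q, u) = N @ kron(u, q).
N = rng.standard_normal((n, n * m))

# Intrusive Galerkin projection: Nhat = Vr^T N (I_m kron Vr), shape r x rm.
Nhat = Vr.T @ N @ np.kron(np.eye(m), Vr)

# Consistency check: applying the projected operator to (qhat, u) matches
# compressing the full operator applied to the reconstructed state Vr @ qhat.
qhat = rng.standard_normal(r)
u = rng.standard_normal(m)
lhs = Nhat @ np.kron(u, qhat)
rhs = Vr.T @ (N @ np.kron(u, Vr @ qhat))
```

The check relies on the Kronecker mixed-product identity $(\I_m\otimes\Vr)(\u\otimes\qhat) = \u\otimes(\Vr\qhat)$, which is why the projected operator stays bilinear.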
@@ -774,7 +773,7 @@
 "That is, $\\ddqhat\\Ophat_{\\ell}(\\qhat, \\u) = \\operatorname{diag}(\\hat{\\s}).$\n",
 "\n",
 "Now consider a version of this operator with a large state dimension, $\\Op_{\\ell}(\\q, \\u) = \\q \\ast \\s$ for $\\q,\\s\\in\\mathbb{R}^{n}$.\n",
-"For basis matrices $\\Vr,\\Wr\\in\\mathbb{R}^{n \\times r}$, the Petrov-Galerkin projection of $\\Op_{\\ell}$ is given by\n",
+"For basis matrices $\\Vr,\\Wr\\in\\mathbb{R}^{n \\times r}$, the Petrov--Galerkin projection of $\\Op_{\\ell}$ is given by\n",
 "\n",
 "$$\n",
 "\\begin{aligned}\n",

docs/source/api/pre.ipynb

−1
@@ -46,7 +46,6 @@
 "\n",
 "- Operator Inference performance often improves when the training data are standardized. Multivariable data in particular benefits from preprocessing.\n",
 "- `opinf.pre` classes define invertible transformations for data standardization.\n",
-"- The constructor of the [`ROM`](opinf.roms.ROM) class receives an (optional) `transformer` argument.\n",
 ":::"
 ]
 },
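The invertible standardizations described in this page can be sketched with plain NumPy (this is not the `opinf.pre` API, just a minimal shift-and-scale transformation and its inverse, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic snapshot matrix: 10 state variables x 50 snapshots,
# deliberately off-center and off-scale.
Q = 5.0 + 2.0 * rng.standard_normal((10, 50))

# Shift by the mean snapshot, then scale to unit maximum magnitude.
shift = Q.mean(axis=1, keepdims=True)
scale = np.abs(Q - shift).max()
Q_std = (Q - shift) / scale     # centered, entries in [-1, 1]

# The transformation is invertible: undo the scale, then the shift.
Q_rec = Q_std * scale + shift
```

Invertibility matters because predictions made in the standardized variables must be mapped back to physical units after the reduced-order model is solved.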
