|
67 | 67 | ":class: note\n",
|
68 | 68 | "\n",
|
69 | 69 | "- `opinf.operators` classes represent the individual terms in a [model](opinf.models). [Nonparametric operators](sec-operators-nonparametric) are functions of the state and input, while [parametric operators](sec-operators-parametric) are also dependent on one or more external parameters.\n",
|
70 |
| - " - [`apply()`](OperatorTemplate.apply) applies the operator to given state and input vectors.\n", |
71 |
| - " - [`jacobian()`](OperatorTemplate.apply) constructs the derivative of the operator with respect to the state at given state and input vectors.\n", |
72 |
| - "- Operators that can be written as the product of a matrix (the *operator matrix*) and a known vector-valued function can be calibrated through solving a regression problem.\n", |
73 |
| - " - [`operator_dimension()`](OpInfOperator.operator_dimension) defines the size of the operator matrix for given state and input dimensions.\n", |
74 |
| - " - [`datablock()`](OpInfOperator.datablock) uses state and input snapshots to construct a block of the data matrix for the regression problem.\n", |
75 |
| - "- [Model](opinf.models) constructors receive a list of operator objects. A model's [`fit()`](opinf.models.ContinuousModel.fit) method constructs and solves a regression problem to learn the operator matrices.\n", |
| 70 | + "- Operators that can be written as the product of a matrix and a known vector-valued function can be [calibrated](sec-operators-calibration) through solving a regression problem.\n", |
| 71 | + "- {mod}`opinf.models` objects are constructed with a list of operator objects. A model's [`fit()`](opinf.models.ContinuousModel.fit) method constructs and solves a regression problem to learn the operator matrices.\n", |
76 | 72 | ":::\n",
|
77 | 73 | "\n",
|
78 | 74 | "<!-- - Monolithic operators are designed for dense systems; multilithic operators are designed for systems with sparse block structure. -->"
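The operator interface summarized above can be sketched with a minimal numpy stand-in. This is an illustrative mock, not the opinf API itself: `LinearOperatorSketch` and its signatures are hypothetical, chosen only to mirror the `apply()`/`jacobian()` behavior described for a linear term $\Ahat\qhat$.

```python
import numpy as np

# Minimal stand-in for a nonparametric linear operator, Op(q, u) = A @ q.
# The class name and signatures are illustrative, not the opinf API.
class LinearOperatorSketch:
    def __init__(self, entries):
        self.entries = np.asarray(entries)  # the operator matrix A

    def apply(self, state, input_=None):
        # Action of the operator at the given state (input is unused here).
        return self.entries @ state

    def jacobian(self, state, input_=None):
        # d/dq [A q] = A, independent of the state for a linear term.
        return self.entries

A = np.array([[2.0, 0.0], [0.0, 3.0]])
op = LinearOperatorSketch(A)
q = np.array([1.0, -1.0])
print(op.apply(q))     # action A @ q -> [2. -3.]
print(op.jacobian(q))  # state Jacobian, the matrix A itself
```

A quadratic or bilinear operator would follow the same pattern, with a state-dependent Jacobian.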
|
|
256 | 252 | "There are two ways to determine operator matrices in the context of model reduction:\n",
|
257 | 253 | "\n",
|
258 | 254 | "- [**Non-intrusive Operator Inference**](sec-operators-calibration): Learn operator matrices from data.\n",
|
259 |
| - "- [**Intrusive (Petrov-)Galerkin Projection**](sec-operators-projection): Compress an existing high-dimensional operator.\n", |
| 255 | + "- [**Intrusive (Petrov--)Galerkin Projection**](sec-operators-projection): Compress an existing high-dimensional operator.\n", |
260 | 256 | "\n",
|
261 | 257 | "Once the `entries` are set, the following methods are used to compute the action\n",
|
262 | 258 | "of the operator or its derivatives.\n",
|
|
394 | 390 | "cell_type": "markdown",
|
395 | 391 | "metadata": {},
|
396 | 392 | "source": [
|
397 |
| - "To facilitate this, nonparametric OpInf operator classes have a static [`datablock()`](OpInfOperator.datablock) method that, given the state-input data pairs $\\{(\\qhat_j,\\u_j)\\}_{j=0}^{k-1}$, forms the matrix\n", |
| 393 | + "Nonparametric OpInf operator classes have two static methods that facilitate constructing the operator regression problem.\n", |
| 394 | + "\n", |
| 395 | + "- [`operator_dimension()`](OpInfOperator.operator_dimension): given the state dimension $r$ and the input dimension $m$, returns the data dimension $d_\\ell$.\n",
| 396 | + "- [`datablock()`](OpInfOperator.datablock): given the state-input data pairs $\\{(\\qhat_j,\\u_j)\\}_{j=0}^{k-1}$, forms the matrix\n", |
398 | 397 | "\n",
|
399 | 398 | "$$\n",
|
400 | 399 | "\\begin{aligned}\n",
|
|
407 | 406 | "\\end{aligned}\n",
|
408 | 407 | "$$\n",
|
409 | 408 | "\n",
|
410 |
| - "the complete data matrix $\\D$ is then the concatenation of the data matrices from each operator:\n", |
| 409 | + "The complete data matrix $\\D$ is the concatenation of the data matrices from each operator:\n", |
411 | 410 | "\n",
|
412 | 411 | "$$\n",
|
413 | 412 | "\\begin{aligned}\n",
|
|
417 | 416 | " \\\\ & &\n",
|
418 | 417 | " \\end{array}\\right].\n",
|
419 | 418 | "\\end{aligned}\n",
|
420 |
| - "$$\n", |
421 |
| - "\n", |
422 |
| - "Nonparametric OpInf operators also have a static [`operator_dimension()`](OpInfOperator.operator_dimension) method that, given the state dimension $r$ and the input dimension $r$, returns the data dimension $d_\\ell$." |
| 419 | + "$$" |
423 | 420 | ]
|
424 | 421 | },
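The block-concatenation of $\D$ described above can be sketched in a few lines of numpy. The helper functions here are hypothetical stand-ins for the `datablock()` methods of a linear state term ($\Ahat\qhat$) and an input term ($\Bhat\u$); they are not the opinf implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
r, m, k = 3, 2, 10
Q = rng.standard_normal((r, k))  # state snapshots, one per column
U = rng.standard_normal((m, k))  # input snapshots, one per column

# Hypothetical datablock() stand-ins: each operator maps the snapshot data
# to its own (d_ell x k) block of D^T.
def linear_datablock(states, inputs=None):
    return states                # r x k block for the A q term

def input_datablock(states, inputs):
    return inputs                # m x k block for the B u term

# D^T is the vertical concatenation of the per-operator blocks,
# so D is k x (r + m) for this pair of operators.
D = np.vstack([linear_datablock(Q, U), input_datablock(Q, U)]).T
print(D.shape)
```

Each operator contributes exactly `operator_dimension()` columns to $\D$, which is how the fitted matrix $\Ohat$ can later be sliced back into per-operator pieces.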
|
425 | 422 | {
|
|
430 | 427 | ":class: important\n",
|
431 | 428 | "\n",
|
432 | 429 | "Model classes from {mod}`opinf.models` are instantiated with a list of operators.\n",
|
433 |
| - "The model's `fit()` method calls the [`datablock()`](OpInfOperator.datablock) method of each OpInf operator to assemble the full data matrix $\\D$, solves the regression problem for the full data matrix $\\Ohat$ (see {mod}`opinf.lstsq`), and extracts the operator matrix $\\Ohat_{\\ell}$ for each $\\ell = 1, \\ldots, n_{\\textrm{terms}}$ using the [`operator_dimension()`](OpInfOperator.operator_dimension)" |
| 430 | + "The model's `fit()` method calls the [`datablock()`](OpInfOperator.datablock) method of each OpInf operator to assemble the full data matrix $\\D$, solves the regression problem for the full data matrix $\\Ohat$ (see {mod}`opinf.lstsq`), and extracts from $\\Ohat$ the individual operator matrix $\\Ohat_{\\ell}$ for each $\\ell = 1, \\ldots, n_{\\textrm{terms}}$ using the [`operator_dimension()`](OpInfOperator.operator_dimension)." |
434 | 431 | ]
|
435 | 432 | },
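The assemble-solve-extract workflow of `fit()` described above can be sketched with plain numpy, under the assumption of exact (noise-free) data so that the least-squares solve recovers the true matrices. All variable names here are illustrative; the real model classes delegate the solve to {mod}`opinf.lstsq`.

```python
import numpy as np

# Sketch of the fit() workflow: assemble D, solve the regression
# min_X || D X - Z^T ||_F with X = Ohat^T, then slice Ohat into
# per-operator matrices using each operator's dimension.
rng = np.random.default_rng(1)
r, m, k = 3, 2, 50
Q = rng.standard_normal((r, k))          # state snapshots
U = rng.standard_normal((m, k))          # input snapshots
A_true = rng.standard_normal((r, r))
B_true = rng.standard_normal((r, m))
Z = A_true @ Q + B_true @ U              # left-hand-side data (exact here)

D = np.hstack([Q.T, U.T])                # k x (r + m) data matrix
Ohat = np.linalg.lstsq(D, Z.T, rcond=None)[0].T   # r x (r + m)

dims = [r, m]                            # operator_dimension() of each term
Ahat, Bhat = np.hsplit(Ohat, np.cumsum(dims)[:-1])
print(np.allclose(Ahat, A_true), np.allclose(Bhat, B_true))
```

With noisy data the recovery is only approximate, which is why regularized solvers are provided alongside ordinary least squares.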
|
436 | 433 | {
|
|
446 | 443 | "\\begin{aligned}\n",
|
447 | 444 | " \\Z\n",
|
448 | 445 | " &= [~\\dot{\\qhat}_0~~\\cdots~~\\dot{\\qhat}_{k-1}~] \\in \\RR^{r\\times k},\n",
|
449 |
| - " \\\\\n", |
| 446 | + " \\\\ \\\\\n", |
450 | 447 | " \\Ohat\n",
|
451 | 448 | " &= [~\\Ahat~~\\Bhat~] \\in \\RR^{r \\times (r + m)},\n",
|
452 |
| - " \\\\\n", |
| 449 | + " \\\\ \\\\\n", |
453 | 450 | " \\D\\trp\n",
|
454 | 451 | " &= \\left[\\begin{array}{ccc}\n",
|
455 | 452 | " \\qhat_0 & \\cdots & \\qhat_{k-1}\n",
|
456 |
| - " \\\\ \\vdots & & \\vdots \\\\\n", |
| 453 | + " \\\\\n", |
457 | 454 | " \\u_0 & \\cdots & \\u_{k-1}\n",
|
458 | 455 | " \\end{array}\\right]\n",
|
459 | 456 | " \\in \\RR^{(r + m) \\times k}.\n",
|
|
475 | 472 | "\\end{aligned}\n",
|
476 | 473 | "$$\n",
|
477 | 474 | "\n",
|
478 |
| - "That is, the OLS Operator Inference regression minimizes a sum of residuals of the model equation {eq}`eq:operators:ltiexample` with respect to available data.\n", |
| 475 | + "That is, the ordinary least-squares Operator Inference regression minimizes a sum of residuals of the model equation {eq}`eq:operators:ltiexample` with respect to available data.\n", |
479 | 476 | ":::"
|
480 | 477 | ]
|
481 | 478 | },
|
|
514 | 511 | "\\end{aligned}\n",
|
515 | 512 | "$$\n",
|
516 | 513 | "\n",
|
517 |
| - "Using ordinary least squares regression, the optimization problem is given by\n", |
| 514 | + "Using ordinary least-squares regression, the optimization problem is given by\n", |
518 | 515 | "\n",
|
519 | 516 | "$$\n",
|
520 | 517 | "\\begin{aligned}\n",
|
|
560 | 557 | "- $\\qhat\\in\\RR^{r}$ is the reduced-order state, and\n",
|
561 | 558 | "- $\\u\\in\\RR^{m}$ is the input (the same as before).\n",
|
562 | 559 | "\n",
|
563 |
| - "This approach uses the low-dimensional state approximation $\\q = \\Vr\\qhat$.\n", |
| 560 | + "This approach uses the low-dimensional state approximation $\\q \\approx \\Vr\\qhat$.\n", |
564 | 561 | "If $\\Wr = \\Vr$, the result is called a *Galerkin projection*.\n",
|
565 | 562 | "Note that if $\\Vr$ has orthonormal columns, we have in this case the simplification\n",
|
566 | 563 | "\n",
|
|
571 | 568 | "\\end{aligned}\n",
|
572 | 569 | "$$\n",
|
573 | 570 | "\n",
|
574 |
| - "If $\\Wr \\neq \\Vr$, the result is called a *Petrov-Galerkin projection*." |
| 571 | + "If $\\Wr \\neq \\Vr$, the result is called a *Petrov--Galerkin projection*." |
575 | 572 | ]
|
576 | 573 | },
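The two projection variants above can be checked numerically for a linear term $\A\q$. This is a small numpy sketch, assuming generic random bases; it verifies that with $\Wr = \Vr$ and orthonormal columns, the Petrov--Galerkin formula collapses to the Galerkin one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 4
A = rng.standard_normal((n, n))
Vr = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal trial basis
Wr = np.linalg.qr(rng.standard_normal((n, r)))[0]  # test basis

# Petrov-Galerkin: Ahat = (Wr^T Vr)^{-1} Wr^T A Vr.
Ahat_pg = np.linalg.solve(Wr.T @ Vr, Wr.T @ A @ Vr)

# Galerkin (Wr = Vr): since Vr^T Vr = I, the inverse factor drops out.
Ahat_g = Vr.T @ A @ Vr
print(np.allclose(np.linalg.solve(Vr.T @ Vr, Vr.T @ A @ Vr), Ahat_g))
```

The same reduction applies term by term to each operator in a model, which is what `galerkin()` automates.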
|
577 | 574 | {
|
|
583 | 580 | "\n",
|
584 | 581 | "Consider the bilinear operator\n",
|
585 | 582 | "$\\Op(\\q,\\u) = \\N[\\u\\otimes\\q]$ where $\\N\\in\\RR^{n \\times nm}$.\n",
|
586 |
| - "The intrusive Petrov-Galerkin projection of $\\Op$ is the bilinear operator\n", |
| 583 | + "This type of operator can be represented as a {class}`StateInputOperator`.\n",
| 584 | + "The intrusive Petrov--Galerkin projection of $\\Op$ is the bilinear operator\n", |
587 | 585 | "\n",
|
588 | 586 | "$$\n",
|
589 | 587 | "\\begin{aligned}\n",
|
|
594 | 592 | "$$\n",
|
595 | 593 | "\n",
|
596 | 594 | "where $\\Nhat = (\\Wr\\trp\\Vr)^{-1}\\Wr\\trp\\N(\\I_m\\otimes\\Vr) \\in \\RR^{r\\times rm}$.\n",
|
597 |
| - "The intrusive Galerkin projection has $\\Nhat = \\Vr\\trp\\N(\\I_m\\otimes\\Vr)$.\n", |
| 595 | + "Hence, $\\Ophat$ can also be represented as a {class}`StateInputOperator`.\n", |
| 596 | + "Using Galerkin projection ($\\Wr = \\Vr$), $\\Nhat$ simplifies to $\\Nhat = \\Vr\\trp\\N(\\I_m\\otimes\\Vr)$.\n", |
598 | 597 | ":::\n",
|
599 | 598 | "\n",
|
600 | 599 | "Every operator class has a [`galerkin()`](OperatorTemplate.galerkin) method that performs intrusive projection."
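The bilinear projection formula above can be verified numerically with `np.kron`. This sketch assumes a Galerkin projection ($\Wr = \Vr$ with orthonormal columns) and checks that projecting the full operator's action agrees with applying the projected operator, using the identity $(\I_m\otimes\Vr)(\u\otimes\qhat) = \u\otimes\Vr\qhat$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, m = 30, 3, 2
N = rng.standard_normal((n, n * m))                # full bilinear operator
Vr = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal basis

# Galerkin-projected operator: Nhat = Vr^T N (I_m kron Vr), an r x (r m) matrix.
Nhat = Vr.T @ N @ np.kron(np.eye(m), Vr)

qhat = rng.standard_normal(r)
u = rng.standard_normal(m)
q = Vr @ qhat                                      # lifted full-order state

# Projected action of the full operator vs. action of the projected operator.
full = Vr.T @ (N @ np.kron(u, q))
reduced = Nhat @ np.kron(u, qhat)
print(np.allclose(full, reduced))
```

For the Petrov--Galerkin case, the left factor $\Vr\trp$ is replaced by $(\Wr\trp\Vr)^{-1}\Wr\trp$ and the same identity applies.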
|
|
774 | 773 | "That is, $\\ddqhat\\Ophat_{\\ell}(\\qhat, \\u) = \\operatorname{diag}(\\hat{\\s}).$\n",
|
775 | 774 | "\n",
|
776 | 775 | "Now consider a version of this operator with a large state dimension, $\\Op_{\\ell}(\\q, \\u) = \\q \\ast \\s$ for $\\q,\\s\\in\\mathbb{R}^{n}$.\n",
|
777 |
| - "For basis matrices $\\Vr,\\Wr\\in\\mathbb{R}^{n \\times r}$, the Petrov-Galerkin projection of $\\Op_{\\ell}$ is given by\n", |
| 776 | + "For basis matrices $\\Vr,\\Wr\\in\\mathbb{R}^{n \\times r}$, the Petrov--Galerkin projection of $\\Op_{\\ell}$ is given by\n", |
778 | 777 | "\n",
|
779 | 778 | "$$\n",
|
780 | 779 | "\\begin{aligned}\n",
|
|
0 commit comments