2110.09658
System Norm Regularization Methods for Koopman Operator Approximation
Steven Dahdah, James Richard Forbes
Correct (medium confidence)
- Category
- math.DS
- Journal tier
- Strong Field
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
Each part (a)–(e) of the candidate solution matches the paper's statements and LMI formulations, with only cosmetic differences (e.g., state ordering in the cascade).

- (a) The EDMD trace expansion and LMI slack coincide with the paper's convex reformulation using H = LL^T and the Schur complement.
- (b) The stability constraint is the same modified Lyapunov LMI ensuring the spectral-radius bound.
- (c) The discrete-time bounded real lemma LMI and its use for H∞-regularized regression match exactly the 4×4 "dilated" LMI used in the paper.
- (d) The weighted H∞ construction by post-cascading a filter reproduces the augmented realization (up to a permutation of states) and the equivalence of minimizing the cascade's H∞ norm to a frequency-weighted H∞ penalty.
- (e) The SVD-based reduction and projected LMI coincide with the paper's reduced problem and final block LMI.

See the paper's LMI reformulation of EDMD, including the Schur complement block and the explicit use of L from Ψ's SVD; the Lyapunov/spectral-radius constraint and its 2×2 LMI form; the strict BRL LMI and the H∞-regularized objective; the weighted H∞ cascade formulas (70)–(72); and the SVD-based reduction culminating in the projected LMI (102).
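For reference, the bounded real lemma underlying part (c) can be sketched in a standard strict discrete-time form; the paper's exact dilated 4×4 LMI may differ in block ordering and variable substitutions:

```latex
% Strict discrete-time bounded real lemma (standard form, not
% necessarily the paper's exact block arrangement).
% For x_{k+1} = A x_k + B u_k, \; y_k = C x_k + D u_k,
% \|G\|_\infty < \gamma holds iff there exists P = P^T \succ 0 with
\begin{bmatrix}
P   & 0          & A^T P & C^T \\
0   & \gamma^2 I & B^T P & D^T \\
P A & P B        & P     & 0   \\
C   & D          & 0     & I
\end{bmatrix} \succ 0.
% This follows from the quadratic inequality
%   A^T P A - P + C^T C \prec 0 \text{-block form of the BRL}
% via a Schur complement on \operatorname{diag}(P^{-1}, I)
% and the congruence transformation \operatorname{diag}(I, I, P, I),
% which removes the inverse P^{-1} from the LMI.
```

Because the constraint is linear in P for fixed (A, B, C, D) and, after the dilation, jointly an LMI in the decision variables, it can be imposed directly in a semidefinite program.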
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} strong field
\textbf{Justification:}
The paper integrates standard but powerful LMI tools to address stability and robustness in Koopman operator regression. The methodology is correct and practically relevant, with a clear path to implementation and to extensions such as frequency weighting. Some algorithmic details (e.g., the iterative solution of the bilinear LMIs) could be streamlined or summarized for reproducibility.
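To make the reproducibility remark concrete: the spectral-radius constraint from part (b) is bilinear in the operator and the Lyapunov certificate, and a common workaround (a standard heuristic, not necessarily the paper's exact algorithm) is alternation:

```latex
% Spectral-radius bound \rho(A) < \bar{\rho} certified by
%   A^T P A \prec \bar{\rho}^2 P, \qquad P = P^T \succ 0,
% or equivalently, by a Schur complement, the LMI
\begin{bmatrix} \bar{\rho}^2 P & A^T P \\ P A & P \end{bmatrix} \succ 0.
% This is bilinear in (A, P). A standard alternation heuristic:
%   1. fix P, solve the (now linear) LMI-constrained regression for A;
%   2. fix A, refresh P from the discrete Lyapunov equation
%        (A/\bar{\rho})^T P (A/\bar{\rho}) - P + Q = 0, \quad Q \succ 0,
%      which yields A^T P A = \bar{\rho}^2 (P - Q) \prec \bar{\rho}^2 P;
% and repeat until the regression cost stops improving.
```

Each subproblem is convex, so the iteration is straightforward to implement with any SDP solver, though it carries no global-optimality guarantee.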