2103.04221
On Few Shot Learning of Dynamical Systems: A Koopman Operator Theoretic Approach
Subhrajit Sinha, Umesh Vaidya, Enoch Yeung
incomplete · high confidence
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
Audit review
The paper’s Theorem 6 claims an equivalence between a robust EDMD min–max problem and a Frobenius-regularized least-squares problem, but key parts of the proof are underspecified or internally inconsistent: the uncertainty set U is stated only to be compact; the symbols Δ̄ and Π_KΔ̄ are never defined; λ is not tied to any explicit norm radius; and steps such as “≤ λ ||K^{-1}||_2” and the appearance of √(||K||^2_F + K) conflate the decision variable K with the dimension K and lack a clear norm-geometry justification (eqs. (15)–(22) in the paper). By contrast, the model’s solution gives a precise, standard robust least-squares reduction under explicit norm-bounded uncertainty (||δG||_2 ≤ ρ_G, ||δA||_F ≤ ρ_A), for which the inner maximization evaluates exactly to ||GK−A||_F + ρ_G||K||_F + ρ_A; this rigorously implies the stated ridge-type regularization (up to an additive constant).
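The upper-bound direction of this robust least-squares reduction can be checked numerically. The sketch below (all dimensions, radii, and matrix values are illustrative, not taken from the paper) samples random admissible perturbations ||δG||_2 = ρ_G, ||δA||_F = ρ_A and verifies that the perturbed residual never exceeds ||GK−A||_F + ρ_G||K||_F + ρ_A:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral(M):
    return np.linalg.norm(M, 2)        # largest singular value

def fro(M):
    return np.linalg.norm(M, "fro")    # Frobenius norm

n, m = 6, 4                            # illustrative dimensions
G = rng.standard_normal((n, m))
A = rng.standard_normal((n, m))
K = rng.standard_normal((m, m))
rho_G, rho_A = 0.3, 0.2                # illustrative uncertainty radii

# Claimed value of the inner maximization (also an upper bound by the
# triangle inequality plus ||dG K||_F <= ||dG||_2 ||K||_F).
bound = fro(G @ K - A) + rho_G * fro(K) + rho_A

worst = 0.0
for _ in range(1000):
    dG = rng.standard_normal((n, m))
    dG *= rho_G / spectral(dG)         # scale so ||dG||_2 = rho_G
    dA = rng.standard_normal((n, m))
    dA *= rho_A / fro(dA)              # scale so ||dA||_F = rho_A
    worst = max(worst, fro((G + dG) @ K - (A + dA)))

assert worst <= bound + 1e-12
```

Random sampling of course only confirms the inequality direction; exactness of the inner maximization is the analytical claim being credited to the model’s solution.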
Referee report (LaTeX)
\textbf{Recommendation:} major revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The manuscript addresses a practically important setting (sparse data for Koopman learning), and the robust-to-regularized connection is highly relevant. However, the central theorem’s proof is presently underspecified and notationally inconsistent: the uncertainty sets are only stated to be compact, key symbols are undefined, and several inequalities are unjustified. With explicit assumptions (norm-ball uncertainty) and a standard robust least-squares argument, the main equivalence becomes rigorous. I therefore recommend major revisions to correct and clarify the theoretical development.
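For concreteness, the bound in the standard argument is just the triangle inequality plus submultiplicativity (a sketch under the norm-ball assumptions \(\|\delta G\|_2 \le \rho_G\), \(\|\delta A\|_F \le \rho_A\); tightness of the bound is the part the revised proof must establish):
\[
\begin{aligned}
\|(G+\delta G)K - (A+\delta A)\|_F
&\le \|GK - A\|_F + \|\delta G\, K\|_F + \|\delta A\|_F \\
&\le \|GK - A\|_F + \|\delta G\|_2\,\|K\|_F + \|\delta A\|_F \\
&\le \|GK - A\|_F + \rho_G \|K\|_F + \rho_A .
\end{aligned}
\]
Maximizing over the uncertainty ball and then minimizing over \(K\) yields the regularized objective, up to the additive constant \(\rho_A\).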