2010.06701
Operator Inference and Physics-Informed Learning of Low-Dimensional Models for Incompressible Flows
Peter Benner, Pawan Goyal, Jan Heiland, Igor Pontes Duff
incomplete · high confidence
- Category
- Not specified
- Journal tier
- Strong Field
- Processed
- Sep 28, 2025, 12:55 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper’s Theorem 5.1 correctly states that, under four assumptions, the learned Operator Inference (OI) operators converge to the POD–Galerkin ones, but the proof sketch omits key identifiability/uniqueness conditions for the least-squares map K ↦ K D and relies on a vague perturbation argument. The candidate solution supplies a more constructive projection-residual analysis but makes two substantive linear-algebra errors: (i) it claims uniqueness of the minimizer under full column rank of D, and (ii) it uses D D^+ = I (valid only for full row rank), which it has not assumed. Both arguments need additional assumptions (e.g., persistent excitation so D has full row rank, or an explicit minimum-norm selection rule) to be complete.
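The second error is easy to verify numerically: the Moore–Penrose identity D D^+ = I holds only when D has full row rank; under full column rank one instead gets D^+ D = I, and D D^+ is merely an orthogonal projector onto the range of D. A minimal NumPy sketch (matrix sizes here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fat matrix with full row rank: D D^+ = I holds.
D_fat = rng.standard_normal((3, 7))
assert np.allclose(D_fat @ np.linalg.pinv(D_fat), np.eye(3))

# Tall matrix with full column rank: only D^+ D = I holds.
D_tall = rng.standard_normal((7, 3))
P = D_tall @ np.linalg.pinv(D_tall)
assert np.allclose(np.linalg.pinv(D_tall) @ D_tall, np.eye(3))
assert not np.allclose(P, np.eye(7))  # D D^+ != I here
assert np.allclose(P @ P, P)          # D D^+ is an orthogonal projector
```

So the candidate's step using D D^+ = I silently requires full row rank of D (persistent excitation), which is exactly the assumption the audit says is missing.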
Referee report (LaTeX)
\textbf{Recommendation:} major revisions
\textbf{Journal Tier:} strong field
\textbf{Justification:}
The central idea—learning structured low-order models via operator inference and establishing consistency with POD–Galerkin—is timely and impactful. The empirical results are compelling. However, the theoretical result (Theorem 5.1) relies on an under-specified rank/identifiability condition and a high-level perturbation argument that does not fix a unique solution selection for the least-squares map. These gaps can be addressed by stating a persistent-excitation/full-row-rank condition or by explicitly selecting the minimum-norm solution produced by the SVD algorithm and then invoking standard continuity/perturbation results. Clarifying these points would substantially improve the paper’s correctness and clarity.
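The suggested fix can be made concrete: when the data matrix D is rank-deficient, min_K ||K D - R||_F has infinitely many minimizers, and the SVD-based pseudo-inverse singles out the one of minimum Frobenius norm; any other minimizer differs by a term annihilating D and has strictly larger norm. A hedged NumPy illustration (dimensions chosen for the example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient data matrix D (rank 2, shape 4 x 10), so the least-squares
# problem min_K ||K D - R||_F has infinitely many minimizers.
D = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 10))
K_true = rng.standard_normal((3, 4))
R = K_true @ D

# The SVD pseudo-inverse selects the minimum-norm minimizer.
K_mn = R @ np.linalg.pinv(D)
assert np.allclose(K_mn @ D, R)  # achieves the optimal (here zero) residual

# Any Z with Z D = 0 yields another minimizer; build Z from the
# left null space of D (last columns of U in the full SVD).
U, s, Vt = np.linalg.svd(D)
N = U[:, 2:]                              # basis for the left null space
Z = rng.standard_normal((3, 2)) @ N.T
assert np.allclose(Z @ D, 0)

K_alt = K_mn + Z
assert np.allclose(K_alt @ D, R)                          # equally good fit
assert np.linalg.norm(K_mn) < np.linalg.norm(K_alt)       # but larger norm
```

Stating either the full-row-rank (persistent excitation) condition or this explicit minimum-norm selection rule would close the identifiability gap and let standard perturbation bounds for the pseudo-inverse carry the consistency argument through.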