2108.04433
Deep Learning Enhanced Dynamic Mode Decomposition
Christopher W. Curtis, Daniel Jay Alford-Lago, Opal Issan
incomplete · medium confidence
- Category
- math.DS
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper’s Theorem 2 asserts that, under Ansatz 1 (dictionary invariance) and a diagonalizable Ċ(0), EDMD “only computes spectra and eigenfunctions of K_t,” deriving the finite-dimensional connection-matrix dynamics Ċ(t) = Ċ(0)C(t) and concluding C(t) = Ṽ e^{tΛ} Ṽ^{-1} (Theorem 2 and its proof). It also lays out the standard EDMD pipeline: the least-squares problem K_o = argmin_K ||Ψ+ − KΨ−||²_F, the SVD-based formula for the minimizer, the spectral readout λ = ln(t̃)/δt, and eigenfunction reconstruction. However, the paper stops short of the precise identification K_o = C(δt) and does not state the usual data-richness condition (full row rank of Ψ−) that guarantees EDMD recovers the exact finite-dimensional Koopman matrix; instead it argues informally that, since the residual r(x; K) = 0 under Ansatz 1, the constructions “must be equivalent” to the EDMD minimizer. The candidate solution supplies the missing technical step: with noiseless on-trajectory snapshots, Ψ+ = C(δt)Ψ−, hence K_o = Ψ+(Ψ−)⁺ = C(δt)Π, where Π = Ψ−(Ψ−)⁺ is the orthogonal projection onto the column space of Ψ−; when rank(Ψ−) = N, Π = I and K_o = C(δt) exactly. It then explicitly identifies the eigenpairs, clarifies the left-eigenvector/eigenfunction connection, and addresses branch selection for the logarithm. Thus the model’s solution is correct and more complete; the paper’s result is essentially correct but omits the standard rank/coverage hypothesis needed to make the EDMD ⇔ C(δt) equivalence rigorous.
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The central theoretical claim—exact EDMD spectral recovery on an invariant finite dictionary—is sound but presented with an informal equivalence step. Adding the standard rank/data richness hypothesis and a short note on the logarithm branch would make the proof fully rigorous. The remainder of the paper is clearly written and the proposed DLDMD methodology is well motivated with illustrative experiments.