2106.05102
Learning normal form autoencoders for data-driven discovery of universal, parameter-dependent governing equations
Manu Kalia, Steven L. Brunton, Hil G.E. Meijer, Christoph Brune, J. Nathan Kutz
incomplete · medium confidence
- Category: math.DS
- Journal tier: Strong Field
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
The paper defines the normal-form autoencoder, its losses L1–L6, and a time-rescaling parameter τ, and appeals informally to center-manifold theory for the existence of suitable coordinate transforms, but it does not provide a complete mathematical proof; the key statements appear as methodological assertions and appendix remarks (the definitions of L1–L6, and τ introduced via t* = τ²t with τ ≈ √(T_α/T_β)). The candidate solution supplies a coherent, standard proof sketch using center-manifold reduction and parameter-dependent normal-form theory: it constructs α-independent encoders/decoders, accounts for higher-order remainders, and shows that each loss can be driven arbitrarily small on a sufficiently small neighborhood. It also matches the paper's Hopf time-scaling recipe τ = √(ω/Ω_obs), since T_β = 2π/ω implies τ = √(T_α/T_β). Minor caveats (e.g., projecting onto W^c(0) rather than W^c(α)) are acknowledged and affect only the rate at which L3–L4 shrink, not the existence claim.
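For reference, the two time-scaling formulas quoted above are equivalent by a one-line period-matching argument; the short LaTeX derivation below is a sketch in the review's notation, assuming Ω_obs denotes the observed angular frequency (so that T_α = 2π/Ω_obs, a labeling inferred here rather than quoted from the paper).

% Sketch: period matching for the Hopf time rescaling t* = τ²t.
% Assumption: T_alpha = 2π/Ω_obs (observed period), T_beta = 2π/ω (normal-form period).
\[
\tau \;\approx\; \sqrt{\frac{T_\alpha}{T_\beta}}
  \;=\; \sqrt{\frac{2\pi/\Omega_{\mathrm{obs}}}{2\pi/\omega}}
  \;=\; \sqrt{\frac{\omega}{\Omega_{\mathrm{obs}}}}.
\]

This is the period-matching step the audit invokes when checking the candidate solution's rescaling against the paper's recipe.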
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} strong field
\textbf{Justification:}
The work compellingly unifies normal-form theory with deep learning, offering a practical and conceptually elegant approach to parameterized dynamical systems. The empirical demonstrations are strong and diverse. The main gap is the absence of a formal existence/minimization argument for the losses, which could be remedied with a short theoretical appendix. Minor notational and typographical fixes would further improve clarity.