2110.01450
Extended dynamic mode decomposition with dictionary learning using neural ordinary differential equations
Hiroaki Terao, Sho Shirasaka, Hideyuki Suzuki
correct (medium confidence)
- Category: Not specified
- Journal tier: Specialist/Solid
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
The paper clearly specifies the EDMD-DL objective, the alternating closed-form K update, and the NODE-based dictionary with adjoint training, and it reports concrete Duffing and Kuramoto–Sivashinsky results and parameter counts (Tables 1–5) under stated data-generation and solver settings, e.g., DOPRI5(4) with absolute/relative tolerances 1e−9/1e−7, M=25 for Duffing with 3 non-trainable outputs, and M=151 with 129 non-trainable for KS. These details are internally consistent and sufficient for replication (loss and K update: Eqs. (20)–(21); metrics: Eqs. (28)–(32); architecture/adjoint: Eqs. (25)–(27); prediction: Eq. (19)). By contrast, the candidate solution only proposes a plan and does not execute or report the quantitative metrics. It also deviates by fixing the output projection, whereas the paper states that the input and output layers are the same as in the MLP-based EDMD-DL, which affects parameter counting and comparability. Given the paper's explicit numerical evidence (Duffing: Tables 1–2; classification: Table 3; KS: Tables 4–5), the paper's claims stand, while the model solution is incomplete and not validated.
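For concreteness, below is a minimal sketch of the alternating scheme described above, assuming PyTorch and torchdiffeq. The class and function names (`ODEFunc`, `NODEDictionary`, `closed_form_K`, `train_step`), the layer sizes, the handling of non-trainable outputs, and the ridge term in the least-squares solve are illustrative assumptions, not the paper's exact implementation; only the solver settings (DOPRI5 with atol 1e−9 / rtol 1e−7) follow the values quoted above.

```python
# Sketch of alternating EDMD-DL with a NODE dictionary (assumes PyTorch +
# torchdiffeq). Architecture details are illustrative, not the paper's.
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # adjoint-mode backprop


class ODEFunc(nn.Module):
    """Vector field of the NODE that deforms the dictionary features
    (assumed to be a small MLP)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, t, h):
        return self.net(h)


class NODEDictionary(nn.Module):
    """Dictionary Psi(x): input layer -> NODE flow -> output layer.
    The paper keeps some outputs non-trainable (e.g. a constant and the
    state observables); concatenating them here is an assumption."""
    def __init__(self, state_dim, feat_dim, n_trainable):
        super().__init__()
        self.inp = nn.Linear(state_dim, feat_dim)
        self.func = ODEFunc(feat_dim)
        self.out = nn.Linear(feat_dim, n_trainable)
        self.t = torch.tensor([0.0, 1.0])  # integration interval (assumed)

    def forward(self, x):
        h0 = self.inp(x)
        # Solver settings reported in the paper: DOPRI5(4), atol 1e-9, rtol 1e-7.
        hT = odeint(self.func, h0, self.t,
                    method="dopri5", atol=1e-9, rtol=1e-7)[-1]
        fixed = torch.cat([x.new_ones(x.shape[0], 1), x], dim=1)  # non-trainable
        return torch.cat([fixed, self.out(hT)], dim=1)


def closed_form_K(Psi_X, Psi_Y, reg=1e-8):
    """EDMD least-squares update K = Psi_X^+ Psi_Y, via regularized
    normal equations (the ridge term is an assumption for stability)."""
    G = Psi_X.T @ Psi_X + reg * torch.eye(Psi_X.shape[1])
    A = Psi_X.T @ Psi_Y
    return torch.linalg.solve(G, A)


def train_step(dic, opt, X, Y):
    """One alternating step: closed-form K on detached features, then a
    gradient step on the dictionary through the adjoint-trained NODE."""
    with torch.no_grad():
        K = closed_form_K(dic(X), dic(Y))
    opt.zero_grad()
    loss = ((dic(X) @ K - dic(Y)) ** 2).mean()
    loss.backward()
    opt.step()
    return loss.item()
```

Holding K at its closed-form value while the dictionary takes a gradient step mirrors the alternating structure of EDMD-DL; the non-trainable outputs guard against the trivial collapse of the learned features.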
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The paper gives a clear, methodologically consistent adaptation of EDMD-DL to NODE dictionaries and presents careful experimental comparisons on two canonical systems with well-defined metrics and solver settings. The parameter-efficiency gains are modest but meaningful. Clarity and reproducibility would benefit from additional details (parameter-count breakdowns, seed variability) and a brief discussion of the output-layer implementation.
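On the requested parameter-count breakdowns, a per-module tally is cheap to report; the sketch below assumes PyTorch, and `param_breakdown` is a hypothetical helper (it would apply, e.g., to the `NODEDictionary` module sketched earlier).

```python
def param_breakdown(model):
    """Print trainable-parameter counts per top-level submodule."""
    for name, module in model.named_children():
        n = sum(p.numel() for p in module.parameters() if p.requires_grad)
        print(f"{name}: {n} trainable parameters")
```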