2105.14070
Accelerating Neural ODEs Using Model Order Reduction
Mikko Lehtimäki, Lassi Paunonen, Marja-Leena Linne
correct (medium confidence)
- Category
- math.DS
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ PDF ↗
Audit review
The paper derives the POD–DEIM reduced Neural ODE explicitly. After POD Galerkin projection, x̃'(t) = V_k^T f(A V_k x̃ + b) + V_k^T Z u (their Eq. (3)); replacing the nonlinear term via DEIM gives x̃'(t) = V_k^T U_m (P^T U_m)^{-1} f_m(Ã x̃ + b) + Z̃ u (their Eq. (5)); and selecting only the m rows indexed by the DEIM points yields x̃'(t) = N f_m(Ã_m x̃ + b_m) + Z̃ u (their Eq. (6)), with the explicit statement that "only m nonlinear functions are evaluated" online. These steps appear verbatim in the PDF and match the model's derivation and notation (Ã = A V_k, Z̃ = V_k^T Z, N = V_k^T U_m (P^T U_m)^{-1}). The model's added remark that P^T f(y) = f(P^T y) when f acts componentwise makes explicit the paper's implicit assumption behind the m-row compression step (the paper phrases it as "Due to the structure of f_m: R^m → R^m …"). The paper and the candidate solution therefore present the same argument and are mutually consistent. Citations: the paper's Eq. (3) derivation and lifting-bottleneck discussion; the DEIM interpolation formula and ROM Eq. (5) with the f_m definition and Algorithm 1; and the "only m nonlinear functions" statement with Eq. (6).
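A minimal NumPy sketch of the Eq. (3)→(6) pipeline described above, assuming a componentwise nonlinearity (tanh here) and using random orthonormal matrices as stand-ins for the snapshot-derived POD basis V_k and DEIM basis U_m; all names are illustrative, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 50, 8, 8  # full state dim, POD dim, DEIM dim

# Hypothetical full Neural ODE right-hand side: x'(t) = f(A x + b) + Z u,
# with f applied componentwise (the assumption behind the m-row compression).
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
Z = rng.standard_normal((n, 2))
f = np.tanh

# Stand-in orthonormal bases (in practice: SVDs of state / nonlinearity snapshots).
V_k = np.linalg.qr(rng.standard_normal((n, k)))[0]
U_m = np.linalg.qr(rng.standard_normal((n, m)))[0]

# Greedy DEIM point selection (Algorithm-1-style): each new point is the
# largest-magnitude entry of the residual of the next basis vector.
p = [int(np.argmax(np.abs(U_m[:, 0])))]
for j in range(1, m):
    c = np.linalg.solve(U_m[p, :j], U_m[p, j])
    r = U_m[:, j] - U_m[:, :j] @ c
    p.append(int(np.argmax(np.abs(r))))
p = np.array(p)

# Offline precomputation of the reduced operators.
A_tilde = A @ V_k                             # Ã = A V_k
A_m = A_tilde[p, :]                           # Ã_m: only the m DEIM rows
b_m = b[p]
N = V_k.T @ U_m @ np.linalg.inv(U_m[p, :])    # N = V_k^T U_m (P^T U_m)^{-1}
Z_tilde = V_k.T @ Z                           # Z̃ = V_k^T Z

def rhs_reduced(x_tilde, u):
    # Eq. (6): only m nonlinear functions are evaluated online.
    return N @ f(A_m @ x_tilde + b_m) + Z_tilde @ u
```

Because f is componentwise, evaluating f on the m selected rows equals selecting the m rows of the full nonlinearity, which is exactly the compression step the audit highlights.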
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The manuscript correctly formulates and implements POD–DEIM for Neural ODE compression and supports it with experiments. The derivation from projection (Eq. (3)) through DEIM (Eqs. (4)–(6)) is standard but appropriately adapted. The only revisions needed are minor clarifications of the underlying assumptions and a more explicit accounting of computational costs, including the conditions under which choosing k ≠ m is beneficial.
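One way to make the requested cost accounting explicit, sketched here only from the operator dimensions stated in the audit ($\tilde{A}_m \in \mathbb{R}^{m \times k}$, $N \in \mathbb{R}^{k \times m}$), not taken from the manuscript:

\[
f(Ax + b) + Zu \;\Rightarrow\; \mathcal{O}(n^2) \text{ multiplications and } n \text{ evaluations of } f,
\]
\[
N f_m(\tilde{A}_m \tilde{x} + b_m) + \tilde{Z} u \;\Rightarrow\; \mathcal{O}(km) \text{ multiplications and } m \text{ evaluations of } f,
\]

so the reduction pays off roughly when $km \ll n^2$. Since $m$ alone controls the number of online nonlinear evaluations while $k$ alone sets the integrator's state dimension, a cheap nonlinearity with an expensive state (or vice versa) is the regime in which $k \neq m$ is beneficial.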