arXiv:2101.03583

Accuracy and Architecture Studies of Residual Neural Network solving Ordinary Differential Equations

Changxin Qiu, Aaron Bendickson, Joshua Kalyanapu, Jue Yan

correct (medium confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:55 AM

Audit review

The paper’s Theorem 1 proves a one-step error bound for a ResNet ODE solver by decomposing the error into the training error plus the target method’s local truncation error; the candidate solution uses the same triangle-inequality decomposition and the standard one-step local truncation expansion. Aside from minor notational slips in the paper (a redundant Lipschitz term in Lemma 1 and a slight ambiguity about whether “k-th order” refers to the method’s order or to the order of the local truncation term), the two arguments agree in logical content. The candidate additionally gives a clear uniformity argument over the finite training set. Overall, the proofs are essentially the same and correct.
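For reference, the triangle-inequality decomposition described above can be sketched as follows (notation assumed for illustration: $\mathcal{N}_\theta$ the trained ResNet map, $\Phi_h$ the target one-step method of order $k$, $\epsilon$ the training error):

```latex
\[
\underbrace{\bigl|\,y(t_{n+1}) - \mathcal{N}_\theta\bigl(y(t_n)\bigr)\bigr|}_{\text{one-step error}}
\;\le\;
\underbrace{\bigl|\,y(t_{n+1}) - \Phi_h\bigl(y(t_n)\bigr)\bigr|}_{\text{local truncation error}\,\le\,C h^{k+1}}
\;+\;
\underbrace{\bigl|\,\Phi_h\bigl(y(t_n)\bigr) - \mathcal{N}_\theta\bigl(y(t_n)\bigr)\bigr|}_{\text{training error}\,\le\,\epsilon}
\]
```

The first term is controlled by the classical consistency of the target method; the second by how well the network fits its training targets, which is why training fidelity translates directly into solver accuracy.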

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The central one-step error decomposition is correct, clearly connects training fidelity to solver accuracy, and is supported by consistent examples (Euler/RK2/RK4). The theory is concise and serviceable for practitioners. Minor notational slips and implicit assumptions (regularity, uniform constants, one-step vs. global accuracy) should be clarified, but these do not undermine the main result.
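The Euler/RK2/RK4 consistency orders mentioned above are easy to verify numerically. The sketch below (a hypothetical setup, not from the paper, using the test problem $y' = y$, $y(0) = 1$) halves the step size and checks that the one-step errors shrink by roughly $2^2$, $2^3$, and $2^5$, matching local truncation errors of $O(h^2)$, $O(h^3)$, and $O(h^5)$:

```python
import math

def euler(y, h, f):
    # Forward Euler: one-step (local truncation) error O(h^2)
    return y + h * f(y)

def rk2(y, h, f):
    # Midpoint rule: one-step error O(h^3)
    return y + h * f(y + 0.5 * h * f(y))

def rk4(y, h, f):
    # Classical fourth-order Runge-Kutta: one-step error O(h^5)
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def one_step_error(step, h):
    # Exact solution of y' = y, y(0) = 1 is e^t, so after one step of
    # size h the true value is e^h.
    f = lambda y: y
    return abs(math.exp(h) - step(1.0, h, f))

h = 0.1
for step, order in [(euler, 2), (rk2, 3), (rk4, 5)]:
    ratio = one_step_error(step, h) / one_step_error(step, h / 2)
    # Halving h should shrink the one-step error by roughly 2**order
    print(f"{step.__name__}: error ratio {ratio:.2f} (expect ~{2 ** order})")
```

Note this checks one-step accuracy only; accumulating over O(1/h) steps loses one power of h, which is exactly the one-step vs. global accuracy distinction the report asks the authors to clarify.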