2006.06417

Deep Learning for Stable Monotone Dynamical Systems

Yu Wang, Qitong Gao, Miroslav Pajic

Correct (medium confidence)
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:55 AM

Audit review

The paper’s Theorem 1 claims that, under assumptions (i)–(iv), the variance of each coordinate of the q-window predictor’s T-step error is approximately upper bounded by (1 + (b^2 + ε^2/3)/q)·ε^2/(3q), uniformly in T; the statement and its proof sketch appear in Section 3.1 and Appendix A.2, culminating in Eq. (54) and the substitution ∑ p_i^2 ≈ 1/q from assumption (i). The candidate solution derives the same bound by a different route: an explicit error recursion for e(s+1), the law of total variance, a Lipschitz propagation step, and a final approximation using assumption (iii). Both arrive at the same form and rest on the same modeling choices (fresh one-step error of variance ε^2/3 and p_i near 1/q). The differences are chiefly technical: the paper uses a Taylor-expansion-based variance bound with sums over i and drops higher-order cross terms via (iii), whereas the model emphasizes a b^i-Lipschitz argument and retains only the dominant i = 1 contribution before bounding with ∑ p_i^2. The paper implicitly invokes independence/zero-covariance when converting the variance of a sum into a sum of variances; the model states this independence explicitly. Net: same claim, compatible assumptions, and comparable approximations — both correct via differing proofs.
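The bound can be sanity-checked numerically. The sketch below (our construction, not the paper’s code) iterates the variance recursion for the q-window error under the stated modeling choices — p_i = 1/q, fresh one-step error variance ε^2/3, b^i-Lipschitz propagation over i steps — together with the zero-covariance step that the audit flags as implicit, and compares the stationary variance against the claimed bound. The parameter values q = 5, b = 0.9, ε = 0.3 are illustrative, not taken from the paper.

```python
import numpy as np

# Variance recursion under the zero-covariance assumption:
#   v(t+1) = sum_{i=1}^{q} p_i^2 * ( b^(2i) * v(t+1-i) + eps^2/3 )
# i.e., propagated variances b^(2i) v plus fresh one-step noise eps^2/3,
# combined with weights p_i = 1/q and no covariance terms.
q, b, eps = 5, 0.9, 0.3                    # illustrative values (assumed)
p2 = (1.0 / q) ** 2                        # p_i^2 with p_i = 1/q
decay2 = b ** (2 * np.arange(1, q + 1))    # b^(2i), i = 1..q
noise = eps**2 / 3                         # fresh one-step error variance

v = np.zeros(q)                            # last q variances, most recent last
for _ in range(500):                       # iterate to the stationary point
    new = (p2 * (decay2 * v[::-1] + noise)).sum()
    v = np.append(v[1:], new)

bound = (1 + (b**2 + eps**2 / 3) / q) * eps**2 / (3 * q)
print(f"stationary variance ~ {new:.5f}, claimed bound ~ {bound:.5f}")
```

For these values the recursion settles near 0.00675, below the claimed bound of roughly 0.00701, consistent with the theorem; note that reintroducing positive covariance between window members can push the true variance above this heuristic bound, which is exactly why the referee asks for the independence step to be made explicit.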

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The claimed variance bound for the q-window approach is internally consistent and supported by a workable (though approximate) proof sketch. The result is practically informative and backed by experiments. However, the proof repeatedly relies on unstated independence/zero-covariance assumptions and uses heuristic asymptotics (assumption (iii)) to drop higher-order terms. These points should be made explicit and quantified; doing so would raise the rigor without changing the main conclusion.
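For concreteness, the zero-covariance step at issue is the identity (with $X_i$ denoting the propagated error contributed by the $i$-th window member — our notation, not the paper's):
\[
\operatorname{Var}\!\Big(\sum_{i=1}^{q} p_i X_i\Big)
  = \sum_{i=1}^{q} p_i^2\,\operatorname{Var}(X_i)
  + 2\!\!\sum_{1 \le i < j \le q}\!\! p_i p_j\,\operatorname{Cov}(X_i, X_j).
\]
The proof sketch keeps only the first sum, which with $p_i \approx 1/q$ yields the $\sum_i p_i^2 \approx 1/q$ factor in the bound; stating when the covariance sum is dropped, and bounding it (e.g., via a decay-of-correlations argument), would make the derivation rigorous.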