2009.02874

Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks

Shankar A. Deka, Dušan M. Stipanović, Claire J. Tomlin

correct (medium confidence)
Category
math.DS
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:55 AM

Audit review

The paper formulates the error dynamics ė = A(t,e)e + B(t,e,d)d via a mean-value expansion (their equation (13)) and constructs an adversarial perturbation as the fixed point d̄ = ε B(t,e,d̄)^T e, using a Banach contraction argument under bounded x-Hessians (Proposition 1, leading to (15)–(17)). Theorem 1 then asserts the existence of a d(t) that makes the perturbed trajectory diverge monotonically whenever the matrix measure satisfies μ(−A(t,e)) < 0: the ε B B^T term makes the contribution to d/dt ||e||^2 nonnegative, and A alone is expansive when μ(−A) < 0 (see the matrix-measure/Coppel discussion around (12) and the sufficient condition after (13)). The candidate solution follows the same fixed-point construction and, specializing to the Euclidean norm, shows V̇ = e^T Sym(A) e + ε||B^T e||^2 ≥ −μ2(−A)||e||^2, which is strictly positive when μ2(−A) < 0, and derives a Coppel-type exponential lower bound. Minor gaps in the paper (norm mixing when discussing monotonicity of the Euclidean norm, and how the trajectory leaves e = 0) are addressed either elsewhere in the paper, via a dynamic disturbance and Assumption 1 (Theorem 3), or by the model's explicit remarks. Overall, both reach the same conclusion via substantially the same proof ingredients.
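A minimal numerical sketch of the fixed-point construction described above, assuming a hypothetical smooth map `B` standing in for the paper's B(t,e,d) (the specific form below is illustrative, not from the paper). For small ε the map d ↦ ε B(t,e,d)^T e is a contraction in d, so Picard iteration converges to the fixed point d̄ = ε B(t,e,d̄)^T e, and the resulting ε B B^T term contributes ε||B^T e||^2 ≥ 0 to d/dt ||e||^2:

```python
import numpy as np

def B(t, e, d):
    # Hypothetical stand-in for the paper's B(t,e,d): a constant part plus
    # a mild, bounded dependence on d so the Lipschitz constant in d is small.
    base = 0.5 * np.ones((len(e), len(d)))
    return base + 0.1 * np.tanh(np.outer(e, d))

eps = 0.2                      # small gain so the map below is a contraction
e = np.array([1.0, -0.5])      # frozen error state for this sketch
d = np.zeros(3)                # initial guess for the disturbance

# Picard iteration for the fixed point d = eps * B(t,e,d)^T e
for _ in range(100):
    d_next = eps * B(0.0, e, d).T @ e
    if np.linalg.norm(d_next - d) < 1e-12:
        d = d_next
        break
    d = d_next

# At the fixed point, the eps*B*B^T term contributes
# e^T (eps B B^T) e = eps * ||B^T e||^2 >= 0 to d/dt ||e||^2.
quad = eps * np.linalg.norm(B(0.0, e, d).T @ e) ** 2
```

With ||e||^2 = 1.25 here, the contraction factor of the iteration is roughly ε · 0.1 · ||e||^2 ≈ 0.025, so convergence is fast; this mirrors why the paper's Banach argument only needs ε small relative to the Hessian bound.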

Referee report (LaTeX)

\textbf{Recommendation:} minor revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The paper presents a constructive existence result for adversarial disturbances in RNNs using classical control tools (contraction mapping and matrix measures). The analysis is sound and well-motivated, and the dynamic disturbance construction adds practical relevance. Minor clarifications would further strengthen the presentation, particularly around norm consistency and how trajectories exit the equilibrium e = 0.