2110.11538
Computing the Invariant Distribution of Randomly Perturbed Dynamical Systems Using Deep Learning
Bo Lin, Qianxiao Li, Weiqing Ren
correct · medium confidence
- Category
- Not specified
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract · PDF
Audit review
The paper derives, via the substitution p = e^{-V/ε} in the stationary Fokker–Planck equation with constant diffusion D = σσ^T, the decomposition f = −D∇V + g together with the pointwise constraint ∇V^T g − ε ∇·g = 0 (its Eqs. (3.1)–(3.3)), and uses this identity as the core constraint in the learning losses. This is exactly the same algebra and conclusion as the candidate solution, which adds only routine regularity/integrability remarks and the equivalent weighted-divergence form ∇·(g e^{-V/ε}) = 0. The paper explicitly states the FP equation and the flux J, the ansatz p = e^{-V/ε}, and the constraint used in training, all consistent with the model's steps; hence both are correct and essentially the same argument. See the paper's FP setup and flux J definition (Sec. 2), its decomposition-based constraint and the derivation of Eqs. (3.1)–(3.3) via p = e^{-V/ε} (Sec. 3), and the training objective enforcing ∇V_θ^T g_θ − ε ∇·g_θ = 0, with integrability of e^{-V_θ/ε} ensured by the quadratic tail in V_θ.
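For reference, the short algebra linking the flux to the constraint (a sketch in the paper's notation, assuming constant D and noise strength ε) is:

```latex
% Stationary FP: \nabla\cdot J = 0 with flux J = f\,p - \varepsilon D\,\nabla p.
% Substituting the ansatz p = e^{-V/\varepsilon} and f = -D\nabla V + g:
\begin{align*}
  \varepsilon D\,\nabla p &= -\,e^{-V/\varepsilon}\, D\,\nabla V,\\
  J &= e^{-V/\varepsilon}\bigl(f + D\,\nabla V\bigr) = g\, e^{-V/\varepsilon},\\
  0 = \nabla\cdot J
    &= \nabla\cdot\!\bigl(g\, e^{-V/\varepsilon}\bigr)
     = e^{-V/\varepsilon}\Bigl(\nabla\cdot g
       - \tfrac{1}{\varepsilon}\,\nabla V^{\mathsf T} g\Bigr),
\end{align*}
% and since e^{-V/\varepsilon} > 0, this is the pointwise constraint
% \nabla V^{\mathsf T} g - \varepsilon\,\nabla\cdot g = 0.
```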
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The core equivalence between the stationary Fokker–Planck equation and the potential–residual decomposition $f = -D\nabla V + g$ with the constraint $\nabla V^{\mathsf T} g - \varepsilon\,\nabla\cdot g = 0$ is correct and well motivated. The paper operationalizes this identity in a learning framework and supports it with credible experiments. Minor clarifications of the standing assumptions (positivity and regularity of $p$, integrability of $e^{-V/\varepsilon}$) and one explicit algebraic line connecting the flux $J$ to the residual term $g$ would improve clarity without altering the contribution.
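As a quick sanity check of the weighted-divergence form of the constraint, the identity ∇·(g e^{-V/ε}) = e^{-V/ε}(∇·g − ε^{-1}∇V·g) can be verified numerically with central differences; a minimal sketch, where V, g, ε, and the evaluation point are hypothetical choices, not taken from the paper:

```python
import math

eps = 0.1  # noise strength (hypothetical value)

def V(x, y):
    # example quadratic potential (hypothetical choice)
    return 0.5 * (x**2 + 3.0 * y**2)

def g(x, y):
    # example rotational residual field (hypothetical choice)
    return (-y, x)

def weighted(x, y):
    # the weighted field g * e^{-V/eps}
    gx, gy = g(x, y)
    w = math.exp(-V(x, y) / eps)
    return (gx * w, gy * w)

def div(F, x, y, h=1e-5):
    # central-difference divergence of a 2D vector field F
    return ((F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h)
            + (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h))

x0, y0, h = 0.4, -0.3, 1e-5
# left side: div(g e^{-V/eps}) at (x0, y0)
lhs = div(weighted, x0, y0)
# right side: e^{-V/eps} * (div g - grad V . g / eps)
divg = ((g(x0 + h, y0)[0] - g(x0 - h, y0)[0]) / (2 * h)
        + (g(x0, y0 + h)[1] - g(x0, y0 - h)[1]) / (2 * h))
gradV = ((V(x0 + h, y0) - V(x0 - h, y0)) / (2 * h),
         (V(x0, y0 + h) - V(x0, y0 - h)) / (2 * h))
gx, gy = g(x0, y0)
rhs = math.exp(-V(x0, y0) / eps) * (divg - (gradV[0] * gx + gradV[1] * gy) / eps)
print(f"identity residual: {abs(lhs - rhs):.2e}")
```

The two sides agree up to finite-difference error for any smooth V and g, which is the pointwise equivalence the audit review refers to.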