2111.12024
Adversarial Sampling for Solving Differential Equations with Neural Networks
Kshitij Parwani, Pavlos Protopapas
incomplete · medium confidence
- Category: Not specified
- Journal tier: Note/Short/Other
- Processed: Sep 28, 2025, 12:56 AM
- arXiv Links: Abstract ↗ · PDF ↗
Audit review
The paper defines the adversarial sampling objective L̂_sampler = −L̂(ŷ, x) + λ·D_k(x), observes empirically that λ = 0 causes the sampled points to collapse and that adding the D_k dispersion term prevents it, and evaluates performance on ODE/PDE tasks with 10 trials and a 32×32 PDE validation grid, but it gives no formal analysis of why collapse occurs or when the regularizer precludes it. The model's solution supplies a clean mathematical treatment: (i) for λ = 0, minimizers place all samples in the argmax set of the residual L̂; (ii) for λ > 0, a one-point split analysis yields a necessary local inequality, so collapse can persist only if the outward drop of L̂ dominates the linear dispersion gain; and (iii) Lipschitz conditions on L̂ imply non-collapse for sufficiently large λ. These arguments fill the missing theory while remaining consistent with the paper's methodology and experiments, aside from a minor quantifier slip that we flag and correct. The paper's claims are thus incomplete but directionally correct, whereas the model's solution is substantively correct and more rigorous (its definitions of L̂_solver, L̂_sampler, and D_k match the paper's, as do the experimental setup and baselines).
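For concreteness, here is a minimal PyTorch sketch of one sampler update under this objective. Everything in it is our reconstruction, not the paper's code: the toy ODE u′ = −u, the network shapes, and in particular the form of D_k (taken here as the negative mean k-nearest-neighbour distance within the batch, so that minimising λ·D_k spreads the samples and supplies the linear dispersion gain the analysis relies on) are all assumptions.

```python
import torch
import torch.nn as nn

def residual(solver, x):
    """Solver loss L̂(ŷ, x): mean squared residual of a toy ODE
    u'(x) = -u(x). The ODE is a hypothetical stand-in; the paper's
    benchmarks use their own ODE/PDE residuals."""
    y = solver(x)
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    return ((dy + y) ** 2).mean()

def dispersion(x, k=3):
    """Stand-in for the paper's D_k term (exact form is our assumption):
    negative mean distance to the k nearest neighbours in the batch,
    so minimising lam * D_k pushes the samples apart."""
    d = torch.cdist(x, x)
    knn = d.topk(k + 1, largest=False).values  # column 0 is the self-distance 0
    return -knn[:, 1:].mean()

def sampler_step(sampler, solver, opt, z, lam=1.0, k=3):
    """One adversarial sampler update: minimise
    L̂_sampler = -L̂(ŷ, x) + lam * D_k(x)."""
    x = sampler(z)  # proposed collocation points
    loss = -residual(solver, x) + lam * dispersion(x, k)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage sketch (shapes and architectures are illustrative).
# The final Sigmoid keeps samples inside a unit domain, since the
# dispersion term would otherwise reward unbounded spreading.
solver = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
sampler = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(sampler.parameters(), lr=1e-3)
z = torch.randn(64, 4)  # latent noise fed to the sampler
sampler_step(sampler, solver, opt, z)
```

Setting lam=0 in this sketch reproduces the collapse mode the paper reports: the loss then rewards piling every sample onto the current residual maximiser.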
Referee report (LaTeX)
\textbf{Recommendation:} minor revisions
\textbf{Journal Tier:} note/short/other
\textbf{Justification:}
The paper presents a clear and practical adversarial sampling scheme with a dispersion regularizer and demonstrates performance gains across several DE benchmarks. However, it relies on observational claims about collapse and does not provide theoretical backing for when and why the $D_k$ term averts collapse. Adding a short local analysis, plus ablations on hyperparameters (e.g., $\lambda$ and $k$) and implementation details, would materially strengthen the work without changing its core contribution.
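As a concrete illustration of the "short local analysis" requested above, the following LaTeX fragment sketches the one-point split argument reconstructed in the audit. It is our sketch, not text from the paper: $r(x)$ denotes the pointwise residual with $\hat{L} = \frac{1}{N}\sum_i r(x_i)$, and the constant $c_k$ (the linear rate at which $D_k$ rewards moving one sample out of a collapsed cluster) is an assumption about the form of $D_k$.

```latex
% Sketch of the requested local (one-point split) analysis;
% our reconstruction, not text from the paper.
Let $x_1 = \dots = x_N = x^\ast$ be a collapsed configuration and move a
single sample to $x^\ast + \varepsilon u$ with $\|u\| = 1$. To first order,
\[
\Delta \hat{L}_{\mathrm{sampler}}
  \approx \frac{1}{N}\bigl(r(x^\ast) - r(x^\ast + \varepsilon u)\bigr)
  - \lambda\, c_k\, \varepsilon ,
\]
so collapse is locally optimal only if the outward drop of the residual
dominates the linear dispersion gain, i.e.
$r(x^\ast) - r(x^\ast + \varepsilon u) \ge N \lambda c_k \varepsilon$.
If $r$ is $\Lambda$-Lipschitz, any $\lambda > \Lambda / (N c_k)$ violates
this inequality, ruling out collapse for sufficiently large $\lambda$.
```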