2105.02522
Consistency of mechanistic causal discovery in continuous-time using Neural ODEs
Alexis Bellot, Kim Branson, Mihaela van der Schaar
incomplete · medium confidence
- Category
- math.DS
- Journal tier
- Specialist/Solid
- Processed
- Sep 28, 2025, 12:56 AM
- arXiv Links
- Abstract ↗ · PDF ↗
Audit review
The paper states a causal-consistency result for the adaptive group lasso (AGL) on neural ODEs and sketches the key ingredients: a generalization bound, the GL→AGL weighting scheme, and a contradiction argument for eliminating null groups. However, it relies on an unstated Assumption 3 and a margin-type Lemma 4 that are not fully specified in the provided text, and it does not explicitly prove preservation of truly nonzero groups. By contrast, the model's solution supplies the missing structural and curvature assumptions (derivative–column linkage, a local quadratic margin, persistence of excitation) and completes both directions of selection: exact zeros on null groups via the KKT conditions, and retention of true signals via a fixed risk gap that dominates the vanishing penalty. Hence the model's proof is complete under clearly stated, standard assumptions, whereas the paper's argument is missing crucial hypotheses and steps.
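For concreteness, the two directions of selection consistency can be sketched as follows. The notation is illustrative (assumed by this review, not taken verbatim from the paper): the penalized groups are the first-layer columns of the neural ODE drift, one per state variable.

```latex
% Illustrative AGL objective on the first-layer weights W_1 of a neural ODE
% drift f_\theta; group j is the first-layer column acting on state x_j.
% The weights w_j come from a preliminary group-lasso (GL) fit.
\[
  \hat\theta \in \arg\min_\theta \;
  \frac{1}{n}\sum_{i=1}^{n}\bigl\| \dot x(t_i) - f_\theta(x(t_i)) \bigr\|_2^2
  \;+\; \lambda_n \sum_{j=1}^{d} w_j \,\| W_{1,\cdot j} \|_2,
  \qquad
  w_j = \| \hat W^{\mathrm{GL}}_{1,\cdot j} \|_2^{-\gamma}.
\]
Two directions must be shown:
\begin{itemize}
  \item \emph{Null groups.} If $x_j$ does not drive the dynamics, the KKT
  stationarity condition forces $\| W_{1,\cdot j} \|_2 = 0$ exactly whenever
  the weighted penalty level $\lambda_n w_j$ exceeds the (vanishing) empirical
  gradient norm on group $j$; the GL-based weight $w_j \to \infty$ makes this
  hold with probability tending to one.
  \item \emph{True groups.} If $x_j$ is a genuine cause, deleting group $j$
  incurs a fixed population risk gap $\Delta_j > 0$ (the margin condition),
  while the penalty paid for keeping it, $\lambda_n w_j$, tends to zero; hence
  the minimizer cannot zero out group $j$.
\end{itemize}
```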
Referee report (LaTeX)
\textbf{Recommendation:} major revisions
\textbf{Journal Tier:} specialist/solid
\textbf{Justification:}
The contribution is conceptually strong and addresses an important problem: mechanistic causal discovery using neural ODEs with adaptive group lasso. However, essential technical assumptions (a local curvature/smoothness assumption and the derivative–column equivalence conditions) are not explicitly stated or proved, and the provided proof fragment for causal consistency does not clearly cover preservation of true nonzero groups. These omissions should be remedied by adding explicit assumptions, full proofs for both directions of selection, and a lemma articulating the network identifiability conditions under which first-layer column norms correspond to functional sparsity.
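A hedged sketch of the requested lemma follows; the irreducibility hypotheses are assumptions supplied by this review for illustration, not statements taken from the paper.

```latex
% Sketch of the derivative--column equivalence lemma requested above.
% Hypotheses (irreducibility, no dead units) are assumed, not the paper's.
\begin{lemma}[Functional sparsity vs.\ column-norm sparsity, informal]
Let $f_\theta(x) = W_2\,\sigma(W_1 x + b)$ with $\sigma$ differentiable.
\begin{enumerate}
  \item If $\| W_{1,\cdot j} \|_2 = 0$, then
  $\partial f_\theta / \partial x_j \equiv 0$, i.e.\ $x_j$ does not enter
  the dynamics.
  \item Conversely, if $\partial f_\theta / \partial x_j \equiv 0$ on an open
  set and the network is irreducible (no dead units: every hidden unit has a
  nonzero outgoing path through $W_2$ and $\sigma' \not\equiv 0$ on its input
  range), then there is an observationally equivalent parameterization with
  $\| W_{1,\cdot j} \|_2 = 0$.
\end{enumerate}
\end{lemma}
```

Under such a lemma, zeroing first-layer column norms is equivalent to functional sparsity of the drift, which is the step the selection-consistency argument implicitly uses.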