arXiv:2111.06120

On Neural Network Identification for Low-Speed Ship Maneuvering Model

Kouki Wakita, Atsuo Maki, Naoya Umeda, Yoshiki Miyauchi, Tohga Shimoji, Dimas M. Rachman, Youhei Akimoto

Incomplete · Medium confidence · Counterexample detected
Category
Not specified
Journal tier
Specialist/Solid
Processed
Sep 28, 2025, 12:56 AM

Audit review

The paper empirically compares two training losses (acceleration- vs state-evaluating) and two RNN-style architectures (standard vs finite-memory window), and reports that (i) training on state rollouts performs better than training on noisy acceleration labels, (ii) the finite-memory variant outperforms the standard RNN on their datasets, and (iii) including randomized maneuvers improves berthing prediction. These findings are supported by experiments and qualitative reasoning, but the paper offers no formal proofs or general guarantees beyond its datasets. The model’s solution provides plausible theoretical sketches: a Lipschitz/Grönwall link from acceleration error to LMSE, OLS-style variance inflation under acceleration-label noise, counterexamples for a universal “no-worse-than” claim, and a Gram-matrix Loewner ordering argument for randomized designs. However, it relies on strong, unstated assumptions (fixed/linearized features, well-specified models, full-rank information matrices, stability, and specific noise structures) and does not fully justify that learned hidden features behave as assumed across the two training objectives. Hence, the paper is incomplete (empirical only) and the model’s proof sketches are insightful but not fully rigorous.
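The Grönwall-type link mentioned above can be written out briefly. This is a sketch in our own notation, not the paper's; it assumes Lipschitz dynamics and a uniform acceleration-error bound, both of which are assumptions the paper does not state:

```latex
% Assume true dynamics $\dot{x} = f(x, u)$ and a learned model $\hat{f}$
% with uniform acceleration error $\|\hat{f} - f\|_\infty \le \varepsilon$,
% where $f$ is $L$-Lipschitz in $x$. For trajectories started from the
% same initial state, the deviation $e(t) = \|\hat{x}(t) - x(t)\|$ obeys
\[
  \dot{e}(t) \le L\, e(t) + \varepsilon,
\]
% so by the Gr\"onwall inequality
\[
  e(t) \le \frac{\varepsilon}{L}\left(e^{L t} - 1\right).
\]
```

Thus a small acceleration error does control the state (LMSE) error, but the bound grows exponentially in the prediction horizon, which is consistent with the review's point that the link is suggestive rather than a general guarantee.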

Referee report (LaTeX)

\textbf{Recommendation:} major revisions

\textbf{Journal Tier:} specialist/solid

\textbf{Justification:}

The paper convincingly demonstrates, on its own datasets, that training with a state-rollout loss and using finite-memory architectures can substantially improve predictive accuracy in low-speed ship maneuvering, and that including randomized maneuvers improves berthing prediction. However, the claims are restricted to the presented datasets and lack theoretical support. Stating the underlying assumptions explicitly, and adding even simplified proofs or controlled ablations, would elevate the contribution from a strong empirical case study to a more general, transferable result.
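The contrast between acceleration-label and state-evaluating losses can be illustrated on a toy linear system. This is our own minimal sketch, not the paper's model or code: the one-dimensional dynamics, the value of `k_true`, the noise level, and the Euler discretization are all illustrative assumptions, and states are taken as noise-free here, which deliberately caricatures the setting in which the state loss wins:

```python
import numpy as np

rng = np.random.default_rng(0)
k_true, dt, T = 0.5, 0.1, 200

# Simulate the damping system v' = -k v by explicit Euler steps.
v = np.empty(T)
v[0] = 1.0
for t in range(T - 1):
    v[t + 1] = v[t] + dt * (-k_true * v[t])

# (a) Acceleration-evaluating fit: regress noisy acceleration labels
# a_t = -k v_t + noise on v_t by ordinary least squares. The label
# noise inflates the variance of the estimate.
a_noisy = -k_true * v[:-1] + rng.normal(0.0, 0.05, T - 1)
k_acc = -np.linalg.lstsq(v[:-1, None], a_noisy, rcond=None)[0][0]

# (b) State-evaluating fit: choose k minimizing one-step state
# prediction error against the (here noise-free) state sequence.
def rollout_err(k):
    pred = v[:-1] + dt * (-k * v[:-1])
    return np.mean((pred - v[1:]) ** 2)

ks = np.linspace(0.0, 1.0, 1001)
k_state = ks[np.argmin([rollout_err(k) for k in ks])]

print(k_acc, k_state)  # k_state recovers k_true; k_acc scatters around it
```

With clean states the rollout fit recovers `k_true` exactly, while the OLS estimate from noisy accelerations inherits the label noise; this mirrors, in miniature, the variance-inflation argument the review attributes to the model's solution.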