The Wave Kernel
Full-waveform inversion (FWI) recovers subsurface properties — velocity, density — from seismic measurements. The problem is notoriously sensitive to the initial model: start too far from the truth, and gradient descent converges to the wrong basin. Implicit neural representations (INRs) have recently reduced this sensitivity, but why they work remained unclear.
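A toy example makes the basin problem concrete. The sketch below (my construction, not the paper's) fits a single traveltime shift to an oscillatory trace using JAX; the misfit is periodic in the shift, so plain gradient descent from a distant start settles a full cycle away from the truth. The frequency, learning rate, and starting points are all illustrative.

```python
import jax
import jax.numpy as jnp

# Toy cycle-skipping demo (not the paper's setup): recover a traveltime
# shift tau from an oscillatory trace. The misfit is periodic in tau with
# period 2*pi/omega, so descent from a poor start locks onto the wrong cycle.
t = jnp.linspace(0.0, 1.0, 200)
omega = 40.0
tau_true = 0.30
data = jnp.sin(omega * (t - tau_true))

def misfit(tau):
    return jnp.mean((jnp.sin(omega * (t - tau)) - data) ** 2)

grad = jax.grad(misfit)

def descend(tau, lr=5e-4, steps=4000):
    for _ in range(steps):
        tau -= lr * grad(tau)
    return tau

print(descend(0.32))  # close start: recovers ~0.30
print(descend(0.55))  # far start: settles ~a full cycle off (tau_true + 2*pi/omega)
```

This is the cycle-skipping failure mode in miniature: every local minimum a period apart looks equally good to the gradient.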
Zhao et al. (arXiv:2603.22362) extend neural tangent kernel theory to explain it. In the standard theory, the NTK of a sufficiently wide network stays approximately constant during training, the simplification (lazy training) that makes convergence analysis tractable. The wave-based NTK does not stay constant: because FWI's forward operator solves a wave equation, the kernel evolves as the model updates, breaking the lazy-training assumption.
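The kernel at the heart of this argument is easy to probe empirically: it is the Gram matrix of per-sample parameter Jacobians. Here is a minimal JAX sketch; the helper names (init_mlp, apply_mlp, empirical_ntk) and sizes are mine, and the plain MLP stands in for an INR, not for the paper's wave-equation-coupled model.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

# Toy MLP standing in for an INR; names and sizes are illustrative.
def init_mlp(key, sizes=(1, 64, 64, 1)):
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def apply_mlp(params, x):
    h = x
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def empirical_ntk(params, x):
    # K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>
    flat, unravel = ravel_pytree(params)
    f = lambda theta: apply_mlp(unravel(theta), x).reshape(-1)
    j = jax.jacobian(f)(flat)      # (n, P) Jacobian w.r.t. all parameters
    return j @ j.T                 # (n, n) kernel Gram matrix

key = jax.random.PRNGKey(0)
params = init_mlp(key)
x = jnp.linspace(-1.0, 1.0, 32)[:, None]
K = empirical_ntk(params, x)
eigs = jnp.linalg.eigvalsh(K)[::-1]  # spectrum in descending order
print(eigs[:5] / eigs[0])            # rapid decay = strong spectral bias
```

Recomputing K at checkpoints during training is also the simplest way to see the non-constancy the paper emphasizes: for a wave-coupled loss, the spectrum drifts as the model updates.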
The eigenvalue structure of the wave-based NTK reveals the mechanism. INR-based FWI preferentially fits low-frequency components first — a spectral bias that acts as natural regularization, steering optimization away from high-frequency local minima that trap conventional methods. This is why INRs reduce sensitivity to the starting model: they implicitly smooth the loss landscape by processing frequencies in order.
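The mechanism behind that ordering is the standard NTK mode-decay result. In the linearized (lazy) regime, which the paper argues only approximately holds here but which still carries the intuition, gradient flow on the squared loss decouples across the kernel's eigenmodes:

```latex
% r(t) is the vector of training residuals at optimization time t,
% \eta the learning rate, K the (fixed, linearized) tangent kernel:
\dot{r}(t) \;=\; -\,\eta\, K\, r(t)
\quad\Longrightarrow\quad
r(t) \;=\; e^{-\eta K t}\, r(0).
% With the eigendecomposition K = \sum_i \lambda_i\, v_i v_i^{\top},
% each mode decays independently:
\langle v_i,\, r(t)\rangle \;=\; e^{-\eta \lambda_i t}\, \langle v_i,\, r(0)\rangle .
```

Modes with large eigenvalues (empirically the smooth, low-frequency ones) are fit on a timescale inversely proportional to their eigenvalue; modes with tiny eigenvalues can lag by orders of magnitude. That is the spectral bias in one line.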
The flip side: high-frequency convergence is slower. The same spectral bias that prevents getting stuck also delays fine-scale recovery. The authors propose a hybrid approach combining INRs with multi-resolution grids, tailoring the eigenvalue profile to balance robustness and resolution.
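What a grid changes is easy to see in a sketch. Below is an illustrative 1-D multi-resolution feature encoding, again in JAX and again not the paper's architecture: coordinates are linearly interpolated into learnable grids at several resolutions, and the features are concatenated before the MLP. The names (init_grids, encode), resolutions, and feature widths are assumptions.

```python
import jax
import jax.numpy as jnp

# Illustrative 1-D multi-resolution feature grids: coarse levels contribute
# smooth features, fine levels inject high-frequency ones.
def init_grids(key, resolutions=(8, 32, 128), feat_dim=4):
    grids = []
    for res in resolutions:
        key, sub = jax.random.split(key)
        grids.append(1e-2 * jax.random.normal(sub, (res, feat_dim)))
    return grids

def encode(grids, x):
    # x: (n,) coordinates in [0, 1]; linear interpolation at each level.
    feats = []
    for g in grids:
        res = g.shape[0]
        pos = x * (res - 1)
        i0 = jnp.clip(jnp.floor(pos).astype(jnp.int32), 0, res - 2)
        w = (pos - i0)[:, None]
        feats.append((1.0 - w) * g[i0] + w * g[i0 + 1])
    return jnp.concatenate(feats, axis=-1)   # (n, len(grids) * feat_dim)

key = jax.random.PRNGKey(0)
grids = init_grids(key)
x = jnp.linspace(0.0, 1.0, 16)
features = encode(grids, x)   # feed these into the INR's MLP head
print(features.shape)         # (16, 12)
```

Because the fine-grid parameters have spatially localized Jacobians, training them alongside the MLP adds high-frequency directions to the tangent kernel, which is one way to lift the small eigenvalues and speed up fine-scale recovery.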
The through-line: the success of neural representations in inverse problems isn't mysterious; it's eigenvalue ordering. The network fits frequencies sequentially because its kernel has a specific spectral decay. Understanding the kernel explains both the strength (robustness to the starting model) and the weakness (slow high-frequency convergence).