Corrector operator to enhance accuracy and reliability of neural operator surrogates of nonlinear variational boundary-value problems
I am excited to announce our work (with Dr. Oden) on enhancing the accuracy of neural operators using a so-called corrector operator. This work is a culmination of our efforts on goal-oriented a posteriori error estimates in Bayesian inference in Jha and Oden (2022), JCP 470, 111575, and, more recently, on a similar idea in Cao et al. (2023), JCP 486, 112104.
Specifically, the work proposes a corrector operator, obtained by linearizing the residual of the problem, to compute a correction to the neural operator prediction. This approach is appealing especially in scenarios where neural operators have limited or unreliable accuracy (e.g., sparse data, or inability to generate adequate training data a priori). Numerical results show a significant reduction in error, almost two orders of magnitude, when neural operator predictions are corrected.
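To make the idea concrete, here is a minimal sketch of a residual-based correction: given a surrogate prediction u0 of a nonlinear problem with residual R(u) = 0, solve the linearized system J(u0) du = -R(u0) and return u0 + du. The problem setup below (1D nonlinear diffusion with diffusivity k(u) = 1 + u², a uniform grid, a finite-difference Jacobian, and a noisy reference solution standing in for a neural operator prediction) is purely illustrative and not from the paper.

```python
import numpy as np

n = 50                       # interior grid points (assumed)
h = 1.0 / (n + 1)            # uniform mesh size on (0, 1)
f = np.ones(n)               # source term (assumed)

def k(u):
    # Nonlinear diffusivity; this specific form is an assumption.
    return 1.0 + u**2

def residual(u):
    # R_i = -(k(u) u')'_i - f_i with homogeneous Dirichlet BCs,
    # discretized with centered differences.
    ue = np.concatenate(([0.0], u, [0.0]))       # extend with boundary values
    kf = 0.5 * (k(ue[:-1]) + k(ue[1:]))          # diffusivity at cell faces
    flux = kf * np.diff(ue) / h                  # k(u) u' at faces
    return -np.diff(flux) / h - f

def jacobian(u, eps=1e-7):
    # Finite-difference Jacobian of the residual (simple, not efficient).
    J = np.zeros((n, n))
    r0 = residual(u)
    for j in range(n):
        up = u.copy()
        up[j] += eps
        J[:, j] = (residual(up) - r0) / eps
    return J

def corrected(u0):
    # One linearized correction step: u0 + J(u0)^{-1} (-R(u0)).
    return u0 + np.linalg.solve(jacobian(u0), -residual(u0))

# Reference solution by Newton's method (used only to build a fake surrogate).
u_ref = np.zeros(n)
for _ in range(20):
    step = np.linalg.solve(jacobian(u_ref), -residual(u_ref))
    u_ref += step
    if np.linalg.norm(step) < 1e-12:
        break

# Stand-in for a neural operator prediction: reference solution plus noise.
rng = np.random.default_rng(0)
u0 = u_ref + 0.05 * rng.standard_normal(n)
u1 = corrected(u0)
```

One such linearized solve is far cheaper than solving the full nonlinear problem, yet it drives the residual of the surrogate prediction down substantially, which is the mechanism behind the error reductions reported below.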
The topology optimization of the diffusivity field in a nonlinear diffusion equation highlights the effect of training data on the limited accuracy of the neural operator as a surrogate of the forward problem. The error in the computed minimizers is as high as 80 percent when neural operator surrogates are used; it drops below seven percent when the neural operators are corrected with the proposed scheme.
Preprint: https://arxiv.org/abs/2306.12047
PDF link: https://prashjha.github.io/publication/jha-2023-corrector/jha-2023-corre...
Previous work
Jha and Oden (2022): https://prashjha.github.io/publication/jha-2022-goal/
Cao et al. (2023): https://prashjha.github.io/publication/cao-2022-residual/
#neuralnetworks #neuraloperators #scientificcomputing #computationalmechanics #variationalformulation #mathematicalmodeling
| Attachment | Size |
|---|---|
| 2306.12047.pdf | 11.29 MB |
- Prashant K. Jha's blog