Focal Plane Wavefront Sensing using Machine Learning: Performance of Convolutional Neural Networks compared to Fundamental Limits

Authors: G. Orban de Xivry, M. Quesnel, P.-O. Vanberg, O. Absil, G. Louppe

arXiv: 2106.04456v1 (astro-ph.IM)
Accepted for publication in MNRAS; 13 pages, 14 figures
License: CC BY 4.0

Abstract: Focal plane wavefront sensing (FPWFS) is appealing for several reasons. Notably, it offers high sensitivity and does not suffer from non-common path aberrations (NCPA). The price to pay is a high computational burden and the need for diversity to lift phase ambiguities. If these limitations can be overcome, FPWFS is an attractive solution for measuring NCPA, which is a key limitation of high-contrast imaging, and could also serve as an adaptive optics wavefront sensor. Here, we propose to use deep convolutional neural networks (CNNs) to measure NCPA from focal plane images. Two CNN architectures are considered: ResNet-50, which estimates Zernike coefficients, and U-Net, which estimates the phase directly. The models are trained on labelled datasets and evaluated at various flux levels and for two spatial frequency contents (20 and 100 Zernike modes). In these idealized simulations we demonstrate that the CNN-based models reach the photon noise limit over a large range of conditions. We show, for example, that the root mean squared (rms) wavefront error (WFE) can be reduced to < $\lambda$/1500 for $2 \times 10^6$ photons in one iteration when estimating 20 Zernike modes. We also show that the CNN-based models are robust to varying signal-to-noise ratios, to the presence of higher-order aberrations, and to different aberration amplitudes. In addition, they perform similarly to or better than iterative phase retrieval algorithms. CNNs therefore represent a compelling way to implement FPWFS, allowing its high sensitivity to be exploited over a broad range of conditions.

Submitted to arXiv on 08 Jun. 2021
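
For concreteness, below is a minimal sketch of the ResNet-50 branch of the approach described in the abstract: a standard ResNet-50 adapted to regress Zernike coefficients from focal plane images. It is an illustration under stated assumptions, not the authors' implementation; the PyTorch/torchvision framework, the two-channel input (an in-focus and a defocused PSF, one common way to supply the phase diversity mentioned above), the image size, and the MSE loss are all assumptions made here.

# Sketch: ResNet-50 regressing Zernike coefficients from focal-plane PSF images.
# Assumptions (not taken from the paper): PyTorch/torchvision, a 2-channel input
# (in-focus + defocused PSF) to provide phase diversity, 20 modes, MSE loss.
import torch
import torch.nn as nn
from torchvision.models import resnet50

N_MODES = 20  # number of Zernike coefficients to estimate

model = resnet50(weights=None)
# Accept 2-channel PSF stacks instead of 3-channel RGB images.
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the 1000-class classification head with a regression head.
model.fc = nn.Linear(model.fc.in_features, N_MODES)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(psf_batch, zernike_batch):
    """One optimisation step on a batch of (PSF stack, Zernike label) pairs."""
    optimizer.zero_grad()
    pred = model(psf_batch)               # shape (batch, N_MODES)
    loss = criterion(pred, zernike_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random tensors standing in for simulated, labelled PSFs:
# psfs = torch.randn(8, 2, 128, 128); coeffs = torch.randn(8, N_MODES)
# train_step(psfs, coeffs)

In this setup the labels are the Zernike coefficients used to generate each simulated PSF pair, and the trained network maps a single focal plane measurement to a wavefront estimate in one forward pass, which is what allows the single-iteration correction quoted in the abstract.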
