Researchers from Tsinghua University have proposed a Full Forward Mode (FFM) training method for optical neural networks, which can directly execute the training process in physical optical systems without the need for backpropagation algorithms. This method has the following advantages:
- Reduces dependence on mathematical models, avoiding errors caused by model inaccuracies.
- Saves time and energy, and allows large amounts of data to be processed in parallel.
- Achieves effective self-training of free-space optical neural networks, with accuracy approaching theoretical values.
- Achieves high-quality imaging through complex scattering media, with resolution approaching the physical limit.
- Enables parallel imaging of objects hidden outside the line of sight.
The core principle of FFM is to map the optical system onto a parameterized, in-situ neural network, compute gradients from measurements of the output light field, and update the parameters with gradient descent. By exploiting spatial reciprocity, the data pass and the error pass share the same forward physical propagation and the same measurement scheme, so the gradient is obtained entirely through forward propagation in the physical system itself.
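The idea can be illustrated numerically. The sketch below is a hypothetical toy model, not the authors' implementation: two fixed symmetric complex matrices `A` and `B` stand in for reciprocal optical media before and after a trainable phase mask `theta`, and `forward()` stands in for physical light propagation. Because reciprocity makes `B.T == B`, the adjoint field at the mask, which backpropagation would normally compute, can instead be obtained by a second *forward* pass that launches the conjugated error field from the output side; gradient descent on the mask then needs no digital model of the system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

def random_reciprocal(n):
    # Symmetric complex matrix: a toy stand-in for a reciprocal optical medium
    M = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
    return (M + M.T) / 2

A = random_reciprocal(n)   # propagation before the phase mask
B = random_reciprocal(n)   # propagation after the phase mask
theta = np.zeros(n)        # trainable phase-mask parameters
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # input field
t = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # target field

def forward(M, field):
    """Stands in for physical light propagation through medium M."""
    return M @ field

losses = []
lr = 0.05
for _ in range(300):
    u = forward(A, x)              # data pass: field arriving at the mask
    m = u * np.exp(1j * theta)     # field leaving the mask
    y = forward(B, m)              # measured output field
    e = y - t                      # error field at the output plane
    losses.append(float(np.sum(np.abs(e) ** 2)))
    # Reciprocity (B.T == B): the adjoint field at the mask is obtained by
    # another FORWARD pass, launching conj(e) from the output side.
    back = forward(B, np.conj(e))
    grad = 2 * np.real(1j * m * back)   # exact dL/dtheta, L = sum |y - t|^2
    theta -= lr * grad
```

In a physical setup the matrix multiplications would be replaced by actual light propagation and the gradient would come from camera measurements of the two forward fields; the error-side pass reuses exactly the same hardware path as the data pass, which is the reciprocity-sharing property described above.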
Researchers validated FFM's performance through multiple experiments:
- Classification training on the MNIST and Fashion-MNIST datasets, where FFM-learned networks reached accuracy close to theoretical values.
- High-resolution focusing through scattering media, with focal spot sizes approaching the diffraction limit.
- Parallel recovery and imaging of hidden objects in non-line-of-sight scenes.
This research provides new insights into training optical neural networks and is expected to advance optical computing and imaging technologies.