Speaker
Description
The integration of hardware-accelerated machine learning is transforming astronomical instrumentation by enabling real-time video and image processing and adaptive optics for space-based applications. This project demonstrates the deployment of an adaptive optics segmented-mirror co-phasing algorithm, with applications for CubeSats, on the AMD Versal AI Core, showcasing how FPGA-based architectures can be leveraged for accurate, low-latency wavefront error correction. The parallel processing capabilities of the Versal architecture make it an ideal platform for running machine learning algorithms efficiently, minimising resource utilisation while maximising performance.
The implementation used AMD's Vitis-AI library to develop a custom regression model, ensuring compatibility with the Deep Learning Processing Unit (DPU) running on the board. The model was successfully quantised, deployed, validated, and tested, demonstrating its feasibility for on-board adaptive optics with minimal computational overhead and minimal loss of accuracy.
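For readers unfamiliar with the quantisation step, the core idea can be sketched in plain Python, independent of the actual Vitis-AI toolchain: a float32 weight tensor is mapped to an 8-bit integer representation and back, which is where the representation error described below enters. The values and tensor shape here are hypothetical, purely for illustration.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantisation: float32 -> (int8 values, scale)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to float32 using the stored scale."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights for a small regression model.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 18)).astype(np.float32)

q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = float(np.abs(w - w_hat).max())
print(f"max per-weight quantisation error: {err:.2e}")
```

With symmetric rounding, the per-weight error is bounded by half the quantisation scale; accumulated over a network's layers, this is the mechanism behind the quantisation-induced wavefront error discussed in the findings.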
Key findings from the project showed that both the floating-point model and the hardware-accelerated model achieved an average residual wavefront error of only 6.2 nm. Most of this error originated from the quantisation process, in which the floating-point model was converted to an integer representation. Future work will focus on enhancing Deep Learning Processing Unit support for floating-point models to mitigate quantisation-induced errors, and on expanding the validation dataset for fine-tuning.
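For context, a residual-wavefront-error figure such as the one above is conventionally reported as the root-mean-square (RMS) difference between the true and corrected wavefront maps. A minimal sketch, using synthetic data rather than the project's dataset (the 6.2 nm noise level below is chosen only to mirror the reported figure):

```python
import numpy as np

def rms_wavefront_error(true_opd, corrected_opd):
    """RMS residual wavefront error, in the same units as the OPD maps (e.g. nm)."""
    residual = true_opd - corrected_opd
    return float(np.sqrt(np.mean(residual ** 2)))

# Synthetic 128x128 optical path difference map, in nanometres.
rng = np.random.default_rng(1)
true_opd = rng.normal(0.0, 50.0, size=(128, 128))
# Imperfect correction: residual modelled as ~6.2 nm Gaussian error (illustrative).
corrected_opd = true_opd + rng.normal(0.0, 6.2, size=(128, 128))

print(f"residual WFE: {rms_wavefront_error(true_opd, corrected_opd):.1f} nm")
```

The same metric applied to the floating-point and quantised models' outputs isolates how much of the residual is attributable to the integer conversion.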
The success of this project establishes a strong foundation for hardware-accelerated processing within CubeSats, enabling real-time, resource-efficient computation without the need for a large onboard computer.