Daedalean Advances First AI Applications With EASA Project
Swiss artificial-intelligence startup Daedalean has completed a second study with the European Union Aviation Safety Agency (EASA) to develop concepts for certifying safety-critical applications of machine learning in aviation.
The second 10-month joint project, completed in mid-May, matured the learning assurance concept developed in the first project to augment the traditional development assurance process used to guarantee the safety of aircraft systems.
Both studies focused on developing concepts of design assurance for the neural networks used for machine learning (ML) in artificial intelligence (AI) applications. Daedalean applied the technology to a visual landing guidance system in the first project and a visual traffic detection system in the second.
Both are complex ML-based computer vision systems providing safety benefits, the company said. Daedalean is working with avionics manufacturer Avidyne to field the advanced pilot assistance systems for retrofit to existing general-aviation aircraft and helicopters.
The second study also investigated the remaining building blocks of EASA's AI trustworthiness framework for certifying safety-critical ML applications. These included the definition and role of AI explainability, the ability of users to understand how the system works.
A key outcome of the first Daedalean/EASA project was the identification of a W-shaped development process for machine-learning applications. This is an adaptation of, and augmentation to, the traditional V-shaped development assurance process already used for safety-critical aviation systems.
The W adds a learning assurance procedure to carefully manage the data and learning process used to train the ML model, plus verification steps for the data, learning process and the implementation of the ML model on the inference platform (hardware and software) within the aircraft system.
The second project expanded the work into the implementation and inference parts of the W-shaped process. This encompassed the development of ML models and the deployment of those models on the complex hardware needed to perform neural network inference during system operation.
“In both topics, a fundamental requirement is to ensure that performance guarantees are not lost in the transition from the development environment to the operational environment,” the team’s report says. “This advances the discussion from theoretical considerations on learning assurance to practical ones.”
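One practical form such a check could take is comparing the outputs of the trained model against the model as deployed on the inference platform, over a verification dataset, and bounding the drift. The sketch below is purely illustrative and assumes a toy linear "model" with naive int8 weight quantization standing in for the real development-to-deployment transition; it is not Daedalean's or EASA's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained model": a single linear layer with float32 weights
# (a hypothetical stand-in for a real neural network).
W = rng.normal(size=(4, 8)).astype(np.float32)

def reference_model(x):
    """Model output in the development environment."""
    return x @ W.T

def quantize(w, levels=127.0):
    """Naive symmetric int8 quantization, standing in for the
    transformations applied when deploying to inference hardware."""
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q.astype(np.int8), scale

Wq, s = quantize(W)

def deployed_model(x):
    """Model output as deployed (dequantized int8 weights)."""
    return x @ (Wq.astype(np.float32) * s).T

# Verification step: outputs on a held-out dataset must stay within
# a tolerance, so performance guarantees survive the transition
# from the development environment to the operational one.
X = rng.normal(size=(100, 8)).astype(np.float32)
max_err = np.abs(reference_model(X) - deployed_model(X)).max()
assert max_err < 0.2, f"deployment drift too large: {max_err}"
```

A real learning-assurance process would apply this idea at the level of end-to-end system performance metrics rather than raw output deltas, but the principle, verifying that the inference implementation preserves what was demonstrated in training, is the same.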
Work on the AI explainability building block underlined the importance of strengthening the link between training data and learning assurance, and in particular of making sure the system's operating space has been correctly identified so that the model learns the correct behavior.
The report gives an example of the unexpected and undesirable input-output relationships that can be identified with explainability methods. A neural network tasked with detecting aircraft falsely identified a traffic cone as a helicopter. The network was trained using images of the Robinson R66 helicopter, which has a rotor pylon that resembles the triangular shape of a traffic cone.
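Occlusion sensitivity is one simple explainability method of the kind alluded to: hide regions of the input and measure how much the model's score drops, revealing which image regions the decision actually depends on (such as a cone-shaped rotor pylon). The sketch below uses a hypothetical toy scoring function in place of a trained network; it illustrates the technique only.

```python
import numpy as np

def toy_detector_score(img):
    """Hypothetical detector score: responds to bright pixels in the
    image centre (stands in for a trained neural network)."""
    h, w = img.shape
    return float(img[h // 4:3 * h // 4, w // 4:3 * w // 4].mean())

def occlusion_map(img, patch=4):
    """Slide a blank patch over the image and record the score drop
    when each region is hidden. Large drops mark the regions the
    model relies on for its decision."""
    base = toy_detector_score(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - toy_detector_score(masked)
    return heat

img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0            # bright "object" in one region
heat = occlusion_map(img)
# The occlusion map peaks where the object sits, exposing what the
# (toy) model's score actually depends on.
assert heat.argmax() == np.ravel_multi_index((1, 1), heat.shape)
```

Applied to the traffic-cone case, a map like this would show the network's helicopter score concentrated on the cone's triangular silhouette, flagging the spurious input-output relationship before deployment.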
These initial studies by Daedalean and EASA have been focused on Level 1 AI applications, defined as human assistance and augmentation. As a result, the work on AI explainability also highlighted the need, during operations, to provide additional insights to the pilot/operator about the system's output to help in decision-making.
EASA has already used findings from both joint projects in drafting its first usable guidance for Level 1 machine-learning applications. This was released for public consultation in April and closed for comments at the end of June.