The challenges facing developers and would-be users of artificial intelligence in security systems have clearly been high on the agenda at Thales. The company has a long-held position as a major provider of cybersecurity products and services to the financial sector, and strengthened its grip on that market with the acquisition last year of Gemalto, a specialist in technologies including encryption, biometrics and secure chips. At the air show this week, Thales launches TrUE AI – not so much a product as a new way of thinking and talking about AI-based security.

“TrUE AI is a framework that we’ve put together to give a consistent message to our customers and the community on what we wanted to do and how we wanted to do artificial intelligence in the world of critical systems,” says Marko Erman, the company’s chief technology officer. “Even though everybody’s talking about AI these days, the same words don’t always mean the same things.”

Erman explains that TrUE AI rests on three core principles that Thales has decided are baseline requirements for the adoption and use of AI tools and techniques: AI must be Transparent, Understandable and Ethical – hence the name. If its AI products tick all those boxes, the company can build systems users can trust.

“[To achieve] transparency we have to have dialogue, both internally with our development teams and externally with the customer, on what kind of data we have used to train the AI,” Erman says. AIs have to be fed data to learn from, so customers need to understand that underlying training before they can rely on any suggestion or action a system generates.
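In practice, that dialogue might rest on some auditable record of the model’s diet. A minimal sketch, assuming a hypothetical Python data class for dataset provenance (every field name and value below is invented for illustration):

```python
# Hypothetical sketch of a training-data provenance record, the kind of
# artefact a supplier could put on the table during that dialogue. All
# field names and values here are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingDataRecord:
    dataset: str                  # internal name of the training set
    source: str                   # where the data came from
    period: str                   # collection window
    known_gaps: List[str] = field(default_factory=list)

provenance = [
    TrainingDataRecord(
        dataset="netflow_2018_v3",
        source="anonymised customer SOC logs",
        period="2018-01 to 2018-12",
        known_gaps=["no IPv6 traffic", "no encrypted-payload labels"],
    ),
]

for record in provenance:
    print(record)   # the customer can see exactly what the model learned from
```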

“The second part is about understandable AI,” Erman says. “Sometimes the answer an AI will give is a surprise to the human. It’s hard to know whether the answer is different from the expected one because the machine really has come up with the right answers, or because it has malfunctioned. [Doubts] will occur if the recommendation from the machine comes without any explicability.”

Again, as with transparency, Erman argues, the way to resolve understandability issues raised by AI begins with dialogue. 

“Some algorithms are understandable because they’re based on mathematical rules where you can prove that A plus B implies C,” he says. “But [with machine learning] you will not have the explicability that a given result is derived from given facts. For most of the consumer market this is not a problem, but for critical systems it’s a very severe problem.”

One way in which Thales is looking to solve this involves deploying AIs that rely on explicable mathematical models. Another acquisition announced recently – on a far smaller scale than Gemalto but with the potential for an outsized impact – was that of the Ohio-based SME Psibernetix. The company has developed an AI called Alpha that it has used in flight simulators to control aggressor aircraft. Alpha uses only simple mathematical functions – plus, minus, multiply and divide – so it requires very little computational power and is very fast. Its decisions are nonetheless easy to explain and, once the full context is taken into account, entirely predictable.
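To make the idea concrete, the sketch below shows what an arithmetic-only, traceable decision rule could look like. It is a hypothetical illustration, not Psibernetix’s method: the threat-scoring function, its weights and its threshold are all invented.

```python
# Hypothetical sketch of an "explainable by construction" controller in the
# spirit of Alpha's arithmetic-only approach. Every name, weight and
# threshold here is invented; this is not Psibernetix code.

def threat_score(distance_km: float, closing_speed_ms: float) -> float:
    """Score a contact using only plus, minus, multiply and divide,
    so every intermediate value can be read off and audited."""
    proximity = 1.0 / (1.0 + distance_km)     # closer contact -> higher score
    closure = closing_speed_ms / 300.0        # normalised closing rate
    return 0.6 * proximity + 0.4 * closure    # fixed, inspectable weights

def decide(distance_km: float, closing_speed_ms: float) -> str:
    """A pure function of its inputs: given the same context, the same
    action always follows, which is what makes the behaviour predictable."""
    score = threat_score(distance_km, closing_speed_ms)
    return "evade" if score > 0.5 else "hold_course"

print(decide(2.0, 250.0))   # each step of this answer can be traced by hand
```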

“It’s explainable the same way a pilot’s behavior is explainable, because it’s based on pilot experience, pilot training and the rules and behaviors learned through the courses,” Erman says. “Even though [Alpha] is not certified, it’s very promising.”

The final part of TrUE AI is the ethical dimension. This may prove to be the most challenging element of delivering trustable AIs. Part of the solution lies in the company keeping a constant vigil over its own responsibilities – Erman talks of an internal council monitoring decisions on AI deployment, and Thales will train its developers in ethics as well as technical skills. But customers have to accept responsibility, too.

“The problem with most AI today is it performs really well in a very narrow sector,” Erman says. “Humans are very good at understanding context; machines, today, are unable to. All of our customers, in a way, say: ‘We want the human to have the capacity to override the machine.’ Even when the decision is made so fast the human is not in the loop, he still has to exercise his responsibility.”

This involves setting boundaries beyond which the AI system can have no meaningful impact. 

“The human will be reminded of the space of decision where the AI can operate,” Erman says. “The machine stays in that area, and the human can prevent it leaving that space. The ‘off’ button is very important.”
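In software terms, such a bounded decision space plus an “off” button might be approximated by a guard like the one sketched below. It is an assumption-laden illustration, not a Thales design: the envelope limits, command names and override flag are all invented.

```python
# Hypothetical sketch of a decision-space guard with a human "off" button.
# The envelope limits, command values and override mechanism are invented
# for illustration; Thales has not published such an implementation.
from dataclasses import dataclass

@dataclass
class Envelope:
    """The space of decisions the human has authorised the AI to make."""
    max_turn_deg: float = 30.0
    max_speed_change: float = 0.1

class GuardedController:
    def __init__(self, envelope: Envelope):
        self.envelope = envelope
        self.human_override = False   # the "off" button

    def apply(self, ai_turn_deg: float, ai_speed_change: float):
        if self.human_override:
            return 0.0, 0.0           # once overridden, AI output has no effect
        # Clamp every AI command so the machine cannot leave the
        # authorised decision space.
        turn = max(-self.envelope.max_turn_deg,
                   min(self.envelope.max_turn_deg, ai_turn_deg))
        speed = max(-self.envelope.max_speed_change,
                    min(self.envelope.max_speed_change, ai_speed_change))
        return turn, speed

controller = GuardedController(Envelope())
print(controller.apply(45.0, 0.5))    # clamped to (30.0, 0.1)
controller.human_override = True      # the human presses the "off" button
print(controller.apply(45.0, 0.5))    # (0.0, 0.0): the AI no longer acts
```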