Industry views

Ethical constraints the biggest challenge for AI in defence

Defence industry concerns over AI ‘hallucinations’ are not as pressing as the technology’s ethical and legal constraints, according to a GlobalData survey. Alex Blair reports.

The aftermath of an Israeli airstrike in Rafah, Gaza. Credit: Anas Mohammed/Shutterstock

Ethical and legal constraints will be the biggest challenge for the implementation of artificial intelligence (AI) in defence over the next five years, industry experts believe. 

An ongoing survey by GlobalData has found that, when asked what the biggest challenge for AI in defence will be over the next five years, 47.1% of respondents point to ethical and legal constraints, while 34.6% say bias and ‘hallucination’ errors. The survey has received 208 responses to date across GlobalData outlets. 

Israel's 'Gospel' AI airstrike system sparks backlash

Concerns over the ethics and legality of AI use in conflict scenarios have been laid bare by the ongoing conflict in the Middle East. 

“There are definitely huge ethical concerns, especially if the AI is involved in making potentially lethal decisions in an actual conflict scenario,” says James Marques, defence analyst at GlobalData. 

The Gaza Strip has become one such military theatre. As the Israel Defence Forces (IDF) launch airstrikes on Rafah, a city on the Palestinian territory’s southwestern border with Egypt, the role AI has played in their “around the clock” bombing campaign becomes increasingly apparent. 

The IDF has drastically increased the number of targets selectable for airstrikes through their ‘Gospel’ AI target-identification platform. 

In an interview before the most recent Israel-Palestine conflict, former IDF chief Aviv Kochavi said the Gospel had increased the number of targets the IDF could identify in Gaza from 50 per year to 100 per day. 

Questions over the data the Gospel uses to select targets – and how precise the resulting airstrikes are in terms of minimising civilian harm – have gone unanswered. 

Aside from these mounting ethical and legal concerns, survey respondents also expect bias and ‘hallucinations’ to delay AI’s implementation in defence. 

Hallucinations, which occur when an AI-driven chatbot presents false information as fact, are expected to decrease as AI accuracy improves. 

This trend, however, is far from guaranteed, according to Marques. 

“In general, I believe hallucinations will decrease, but it depends on the dataset AI systems are being trained on, and progress may not be so linearly positive,” Marques told Army Technology. “There is the possibility that introducing AIs to larger datasets and more complex topics may actually increase chances for hallucinations, but this is all early stages in the long run.”