UK Official: AI in Defense Sector Is Not About Killer Robots

It Will Do Low-Level Tasks to Free Up Pros to Make Decisions, Say Researchers
A U.K. official says the country's military doesn't want killer robots. (Image: Shutterstock)

The current use of artificial intelligence in the U.K. defense sector is not about creating killer robots. It is focused on optimizing resources and increasing the efficiency of military operations, experts speaking at the Alan Turing Institute's AI UK conference told attendees.

Speaking at the event on Wednesday, Steven Meers, a researcher at the Ministry of Defense's laboratory specializing in the application of AI and machine learning, said a common public misconception is that the technology is used solely to create killing machines.

"AI is a very ubiquitous technology. The thing that I'm sure will pop into most people's minds will be killer robots and autonomous weapons systems. But for me, the single biggest thing is decision-making," Meers said.

At the Defense Science and Technology Laboratory, the focus has been on assisting the government in deploying AI to improve operations, Meers said.

An example is a 2021 pilot project called Sapient that uses a network of advanced sensors with artificial intelligence at the edge.

By sending operators only vital information extracted from CCTV camera or drone feeds, rather than the raw data itself, the project can free military operators from cumbersome, low-risk monitoring tasks, Meers said.

"Command and control are incredibly important, and what we want is for them to spend their time using their judgment, their human intuition, their experience of military environments to make the best possible decision. If we can use AI to remove some of the burdens on them, that's fantastically helpful," he added.

A Lords Select Committee warned the U.K. government in December to "proceed with caution" as it pursues autonomous AI systems projects. The committee also urged the government to ensure that human oversight is integrated at all stages of the project development cycle.

"The government should adopt an operational definition of autonomous weapon system," the committee's letter says. "The committee was surprised the government does not currently have one and believes it is possible to create a future-proofed definition which would aid the UK's ability to make meaningful policy on AWS and engage fully in international discussions."


About the Author

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
