Researchers warn of malicious use of AI

February 21, 2018

Artificial intelligence could soon be misused by criminals to launch hacking attacks in a number of critical areas, a fresh report by top researchers has said. So far societies seem unprepared to prevent such activities.

https://p.dw.com/p/2t2Kt
Image caption: Christian Arnold, Cardiff University (Image: ZDF)

Rapid advances in artificial intelligence (AI) are increasing the risk that malicious users will soon exploit the underlying technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns.

The study, published Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities, sounded the alarm over the potential misuse of AI by rogue states, criminals or lone-wolf attackers.

The report said the malicious use of AI poses imminent threats to digital, physical and political security by enabling large-scale, finely targeted and highly efficient attacks. The study focused on plausible developments within the next five years.

More questions than answers

"We all agree there are a lot of positive applications of AI," said researcher Miles Brundage from Oxford's Future of Humanity Institute. "But there was a gap in the literature about the issue of malicious use."

The 98-page paper cautioned that new attacks could emerge that would be impractical for humans alone to develop, or that exploit the vulnerabilities of AI systems themselves.

The report makes a series of recommendations, including regulating AI as a dual-use military and commercial technology. It also asks whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and respond to the potential dangers they might pose.

The researchers had to admit, though, that "ultimately, we ended up with a lot more questions than answers."

hg/tr (Reuters, AFP)