Earlier this month, researchers created AI-driven malware that can hack hospital CT scans, generating false cancer images that deceived even the most skilled doctors. If such malware were introduced into today’s hospital networks, healthy people could be treated with radiation or chemotherapy for non-existent tumours, while early-stage cancer patients could be sent home with false diagnoses. Today’s medical intelligence about the treatment of cancers, blood clots, brain lesions and viruses could be manipulated, corrupted or destroyed. This is just one example of how “data-poisoning” – the manipulation of data to deceive – poses a risk to our most critical infrastructure. Without a common understanding of how AI is converging with other technologies to create new and fast-moving threats, far more than our hospital visits may turn into a nightmare.
Policymakers need to start working with technologists to better understand the security risks emerging from AI’s convergence with other dual-use technologies and critical information systems. If they do not, they must prepare for large-scale economic and social harm inflicted by new forms of automated data-poisoning and cyberattacks. In an era of escalating AI-driven cyber conflict, our multilateral governance system is needed more than ever.