When AI Goes Awry

Desmond Higham, University of Edinburgh
21 August 2024


Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning tools. These algorithms raise issues regarding safety, reliability and interpretability in artificial intelligence (AI), especially in high-risk settings. Ideas from optimization, numerical analysis and high-dimensional stochastic analysis play key roles in this landscape. From a practical perspective, there has been a war of escalation between those developing attack and defence strategies. At a more theoretical level, researchers have also studied bigger-picture questions concerning the existence and computability of successful attacks. I will present examples of attack algorithms in image classification, optical character recognition and Large Language Models. I will also outline recent results on the overarching question of whether, under reasonable assumptions, it is inevitable that AI tools will be vulnerable to attack.
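To illustrate the kind of instability the abstract refers to, here is a minimal sketch of a gradient-based adversarial attack in the spirit of the fast gradient sign method (FGSM). This is an illustrative toy example on a hand-written linear classifier, not any of the specific algorithms discussed in the talk: the weights, inputs and step size below are all invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (invented weights): predicts class 1 when w.x + b > 0.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def input_gradient(x, y):
    # Gradient of the cross-entropy loss -log p(y|x) with respect to the
    # *input* x, for label y in {0, 1}; for a linear model this is (p - y) w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    # FGSM step: move each input coordinate by eps in the sign direction
    # of the loss gradient, i.e. the direction that increases the loss.
    return x + eps * np.sign(input_gradient(x, y))

x = np.array([1.0, 0.5])        # correctly classified as class 1
x_adv = fgsm(x, predict(x), eps=0.6)
```

The perturbation is bounded in each coordinate by `eps`, yet it is enough to flip the toy classifier's prediction from class 1 to class 0, which is the essence of the instability: a small, structured change to the input produces a large change in the output.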


About the speaker

Desmond John Higham is an applied mathematician and Professor of Numerical Analysis at the School of Mathematics at the University of Edinburgh, United Kingdom.

He is a graduate of the Victoria University of Manchester, gaining his BSc in 1985, MSc in 1986 and PhD in 1988. He was a postdoctoral Fellow at the University of Toronto before taking up a Lectureship at the University of Dundee in 1990 and moving to a Readership at the University of Strathclyde in 1996. He was made Professor in 1999 and awarded the “1966 Chair of Numerical Analysis” in 2011. He moved to the University of Edinburgh in April 2019.

Higham’s main area of research is stochastic computation, with applications in data science, deep learning, network science and computational biology.