Digital technologies, devices and the internet are producing huge amounts of data and ever-greater capacity to store it, and those developments are likely to accelerate. For law enforcement, a critical capability lagging behind the pace of technological innovation is the capacity to screen, analyse and render insights from the ever-increasing volume of data, and to do so in accordance with the constraints on access to and use of personal information that apply within our democratic system.
Artificial intelligence (AI) and machine learning offer valuable solutions to the public and private sectors for screening large and real-time datasets. AI is also commonly promoted and marketed as a technology that removes human bias; however, AI algorithms and the datasets they're trained on can perpetuate human bias, so they aren't value-free or error-free.
This report analyses the technical and implementation limitations of AI algorithms, and the implications of those limitations for the safe, reliable and ethical use of AI in policing and law enforcement. It closely examines the use of AI by domestic policing agencies to model what success looks like for safe, reliable and ethical use of AI in those settings. It also explores possible strategies to mitigate the potential negative effects of AI-derived data insights and decision-making in the justice system, and the implications for regulating the use of AI by police and law enforcement in Australia.