
Briefing paper
Description

The Royal United Services Institute's (RUSI's) research draws on interviews with UK police officers, who describe wide variation in technological sophistication across forces. The evidence suggests an absence of consistent guidelines for the use of automation and algorithms, which may be leading to discrimination in police work.

This research forms an important part of the Centre for Data Ethics and Innovation's (CDEI's) overall review into algorithmic bias.

Main findings:

  • Multiple types of potential bias can occur. These include discrimination on the grounds of protected characteristics; real or apparent skewing of the decision-making process; and outcomes and processes that are systematically less fair to individuals within a particular group (a simple way to measure such disparity in outcomes is sketched after this list).
  • Algorithmic fairness is not just about data. Achieving fairness requires careful consideration of the wider operational, organisational and legal context, as well as of the overall decision-making process the analytics inform.
  • A lack of guidance. There are still no clear organisational guidelines or processes for the scrutiny, regulation and enforcement of police use of data analytics.
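
To make the idea of systematically less fair outcomes concrete, below is a minimal, hypothetical sketch (not drawn from the briefing itself) of one common check: comparing the rate of adverse decisions, such as being flagged "high risk", across groups defined by a protected characteristic. The group labels, decisions and the `adverse_rate_by_group` helper are all illustrative assumptions.

```python
from collections import defaultdict

def adverse_rate_by_group(records):
    """Share of adverse outcomes (e.g. flagged 'high risk') per group.

    records: iterable of (group, adverse) pairs; adverse is a bool.
    Group labels here stand in for a protected characteristic.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse_count, total]
    for group, adverse in records:
        counts[group][0] += int(adverse)
        counts[group][1] += 1
    return {g: adverse / total for g, (adverse, total) in counts.items()}

# Hypothetical decision log: (group, flagged_high_risk)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True),  ("group_b", False), ("group_b", False),
]

rates = adverse_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.25, 'group_b': 0.5}
print(f"Demographic parity gap: {gap:.2f}")   # 0.25
```

A persistent gap of this kind indicates that one group receives adverse outcomes at a systematically higher rate. As the findings above stress, such a measurement is only a starting point: whether the disparity amounts to unfairness also depends on the wider operational, organisational and legal context in which the scores are used.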

Research implications:

  • Allocation of resources. Police forces will need to consider how algorithmic bias may affect their decisions to police certain areas more heavily.
  • Legal claims. Discrimination claims could be brought by individuals scored “negatively” in comparison to others of different ages or genders.
  • Over-reliance on automation. There is a risk that police officers become over-reliant on analytical tools, undermining their professional discretion and leading them to disregard other relevant factors.

Publication Details
License type: CC BY-NC-ND
Access rights: Open