Technical report

Using artificial intelligence to make decisions: addressing the problem of algorithmic bias

Publisher: Australian Human Rights Commission
Subjects: Algorithms; Machine learning; Artificial Intelligence (AI); Decision making; Discrimination; Regulatory instruments; Australia
Description

This technical paper is the product of a collaborative partnership between the Australian Human Rights Commission, Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO's Data61.

Artificial intelligence (AI) is increasingly used by government and businesses to make decisions that affect people's rights, including in the provision of goods and services and in other important areas such as recruitment, social security and policing. Where algorithmic bias arises in these decision-making processes, it can lead to error. Especially in high-stakes decision making, errors can cause real harm. The harm can be particularly serious if a person is unfairly disadvantaged on the basis of their race, age, sex or other characteristics. In some circumstances, this can amount to unlawful discrimination and other human rights violations.

The report explores how the problem of algorithmic bias can arise in decision making that uses AI, and how it can produce unfair and potentially unlawful decisions. It also demonstrates how the risk of algorithmic bias can be identified, and outlines steps that can be taken to address or mitigate the problem.

Publication Details
ISBN: 978-1-925917-27-7
License type: CC BY
Access Rights Type: open