Artificial intelligence (AI) may be defined as a collection of general-purpose, advanced digital technologies that enable machines to reproduce or surpass abilities that would require intelligence if humans were to perform them.
AI promises many benefits, but also holds significant risks. For example, the same qualities that may improve efficiency, timeliness and fairness in the public sector could also produce wide-scale negative outcomes for large numbers of people.
While Australian Governments have been using AI since the 1990s, recent developments demonstrate the increasing salience of this topic. On 4 September 2020, the NSW Government released its AI Strategy and AI Ethics Policy. Globally, debates concerning surveillance technologies, such as facial recognition, have taken on new significance in light of government responses to the COVID-19 pandemic.
This paper focuses on the parliamentary and legal implications of governments using a form of AI: automated decision-making (ADM), which is deployed in automated decision-making systems (ADMS). It discusses these implications, presents key parliamentary case studies, and sets out recommendations from the literature on how Parliaments could respond.
The paper finds that, as of July 2020, there appear to be no legislative provisions that expressly authorise the NSW Government to use an ADMS to make a decision. Ongoing debate in the literature about whether Parliaments should legislate to control the use of ADMS by government raises several challenges for Parliament: Are there significant gaps within existing legislative frameworks? When and how should Parliaments legislate? Do parliamentarians possess, or have access to, the technical knowledge required to draft legislation and to scrutinise the design, development and deployment of ADMS by government?