This report, by the University of Queensland and KPMG, shows that trust in artificial intelligence (AI) is currently low in Australia. Concerns about privacy violations, unintended bias and inaccurate outcomes, together with recent high-profile scandals, have further reduced public trust in this technology.
Trust underpins the acceptance and use of AI. It requires a willingness to be vulnerable to AI systems, for example by sharing data or relying on automated AI decisions. This trust rests on positive expectations of the ability, humanity and integrity of AI systems, and of the people and organisations that develop and deploy them.
Many organisations are still at the early stages of maturity in establishing the ethical, technical and governance foundations to manage the risks and realise the opportunities associated with AI.
Organisations and their leaders now need to work towards achieving trustworthy AI. There is no silver bullet. Organisations should tailor their approach to the benefits, challenges, risks and opportunities unique to their industry and context, and ensure it is proportional to the potential impacts their AI systems pose to stakeholders.
This is not something that a single executive, or business function, can do alone. It requires an organisation-wide approach that integrates and connects key functional areas of the organisation at each stage of the AI lifecycle.