Safe and Responsible AI in Australia: Striking the Balance Between Risk and Reward

As Australia grapples with the potential of AI, the government recently took a decisive step towards regulation, highlighting the need for safeguards.


Nitesh Patel

May 30, 2024

As Australia embraces the potential of artificial intelligence (AI), particularly generative AI, it grapples with a critical question: how can we leverage AI’s power while ensuring its safe and responsible use?

Following the release of its discussion paper on Safe and Responsible AI in Australia in June 2023, the government issued an interim response to the consultation process in early 2024, indicating the likely future direction of AI regulation in Australia.

Highlighting the need for safeguards beyond voluntary guardrails, the government has flagged several positions that will guide its regulatory response:

  • Existing Laws: Australia’s laws do not sufficiently address the risks presented by AI (particularly “high-risk” and frontier models designed for general-purpose applications).
  • Regulation of AI: The government will seek to regulate AI rather than rely solely on voluntary commitments. It will adopt a risk-based approach to implement additional guardrails for AI in “high-risk” settings.
  • Strengthening Existing Laws: The government is currently undertaking work to enhance existing laws in areas that will help to address known harms with AI, including proposed privacy law reforms.
  • International Engagement: The government has declared its commitment to continue engaging internationally to help shape global AI governance.

Businesses in Australia that develop and use AI are already subject to various Australian laws relating to privacy, corporate regulation and anti-discrimination, which apply across all sectors of the economy. There are also financial services sector-specific laws administered by the Australian Prudential Regulation Authority (APRA) and the Australian Securities and Investments Commission (ASIC), which affect the development and deployment of AI in that sector. However, these existing laws are technology-neutral.

Nitesh Patel, a Principal specializing in Cyber and Technology at insurance-focused law firm Gilchrist Connell, is well attuned to the complexities surrounding the use of AI.

“The demonstrated potential of generative AI is almost certainly just a taste of what will come. It has captured the imaginations of many but also raised concerns about how it can be leveraged prudently with proper governance and oversight. Generative AI is, therefore, an exciting opportunity for our firm and our clients to enhance the services we provide and the way we work, but its implementation brings with it significant challenges,” said Patel.

He said that, in the context of law, “Generative AI has demonstrated its potential application to efficiently assist with labor-intensive tasks such as legal research, document review, document comparison and content generation.”

“However, it is imperative that any generative AI tools that we as a firm deploy to assist with our processes have careful regard to our obligations to our clients and other stakeholders, including data privacy and security, confidentiality, output oversight and accuracy and longer-term impacts to the legal profession more generally,” he said.

The Need for Safeguards

Given AI’s myriad uses, the level of risk varies with the application and the setting in which it is deployed. Submissions made to the Safe and Responsible AI in Australia consultation identified a high level of risk associated with newer and more powerful AI models (“frontier” models).

As outlined in the government’s discussion paper, a risk-based regulatory framework subjects AI development and application to regulatory requirements commensurate with the level of risk posed. Low-risk AI development and application can proceed largely unimpeded, while high-risk development and application attract additional regulatory requirements.
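To make the framework concrete, the following minimal Python sketch treats regulation as a mapping from risk tier to obligations. The tiers, example use cases and guardrails shown are illustrative assumptions only; Australia has not legislated any such categories.

```python
# Purely illustrative: a toy mapping from hypothetical risk tiers to
# hypothetical guardrails, mirroring the risk-based approach described
# above. Neither the tiers nor the obligations are defined in
# Australian law; they are assumptions for the sake of the sketch.

GUARDRAILS = {
    "low": [],  # e.g. automation of internal business processes
    "high": [   # e.g. applications affecting safety or legal rights
        "pre-deployment testing",
        "transparency to affected users",
        "human oversight and accountability",
    ],
}


def obligations(risk_tier: str) -> list[str]:
    """Return the guardrails attached to a risk tier (illustrative only)."""
    if risk_tier not in GUARDRAILS:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
    return GUARDRAILS[risk_tier]


if __name__ == "__main__":
    print(obligations("low"))   # [] -- low-risk AI operates freely
    print(obligations("high"))  # additional mandatory guardrails apply
```

The open questions the government has flagged, set out below, amount to deciding what belongs in each tier and which guardrails attach to it.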


The government has acknowledged that many AI applications operating in “low-risk” settings (for example, automation of internal business processes) do not require a regulatory response. However, it considers that existing laws and regulatory frameworks are likely inadequate to prevent AI-facilitated harm, particularly in legitimate but “high-risk” contexts.

As part of its proposed risk-based approach, the government has signaled that it will consider mandatory safeguards for those who develop or deploy AI systems in legitimate, “high-risk” settings to ensure that AI systems are safe. The government has indicated it will undertake further work to:

  • define the criteria for categorizing risk
  • further refine the definition of “high-risk” AI, having regard to developments in overseas jurisdictions, and
  • identify the most appropriate guardrails and regulatory interventions for each category of risk.

The government has also indicated that it will consider possible legislative vehicles (amendments to existing laws or a new dedicated legislative framework) to introduce mandatory safety guardrails for AI in “high-risk” settings, along with specific obligations for developing, deploying and using frontier or general-purpose models.

Interaction Between Australian Legislation and AI Risk

Submissions to the Safe and Responsible AI in Australia consultation identified at least ten pieces of existing Australian legislation that would require amendment to remain effective in an AI context, including competition and consumer law, health and privacy law, and copyright law.

The government’s final regulatory response to AI is subject to further consultation, including in the financial services sector. Shortly afterwards, we expect to see proposed amendments to existing laws, or new laws, that will affect all sectors of the Australian economy.

During a recent speech at the UTS Human Technology Institute Shaping Our Future Symposium, ASIC Chair Joe Longo commented on the current and future state of AI regulation and governance for corporations and the financial services sector.

Mr. Longo took the opportunity to highlight ASIC’s view that AI regulation already falls within the scope of its existing regulatory toolkit and remit over corporations and the financial services sector.

He stated, “All participants in the financial system have a duty to balance innovation with the responsible, safe and ethical use of emerging technologies – and existing obligations around good governance and the provision of financial services don’t change with new technology”.

In that context, Mr. Longo cited directors’ duties under the Corporations Act 2001 (Cth) as an example of broad obligations that should cause directors to pay particular attention to how companies are deploying AI into their businesses.

Mr. Longo also referred to the 2022 Federal Court decision in ASIC v RI Advice Group Pty Ltd [2022] FCA 496, which found that RI Advice had breached its Australian Financial Services License obligations under the Corporations Act to act efficiently and fairly and to have adequate risk management systems, which the Court held must include adequate cyber risk management. In Mr. Longo’s view, this line of thinking could be applied to the use and operation of AI by financial services licensees as the technology evolves and is deployed in this space.

AI Risk – A Global Issue

In November 2023, Australia, the EU and 27 other countries signed the Bletchley Declaration at the inaugural AI Safety Summit, held at Bletchley Park in the UK. The declaration affirms that AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible.

The government has confirmed that it will uphold the commitments made in the Bletchley Declaration, which include international collaboration on AI safety testing and the development of risk-based frameworks across countries to ensure AI safety and transparency.

The government will continue to engage with international partners on domestic responses to the risks posed by AI, aiming to ensure that Australia’s response is interoperable with those of other jurisdictions.

Nitesh Patel is a recognized, industry-leading cyber and technology lawyer with extensive expertise in contentious and non-contentious cyber, technology, privacy, data security, incident response and associated insurance coverage matters. He leads the cyber practice at Gilchrist Connell.
