Earlier this month, Australia’s long-anticipated National AI Plan was launched to a mixed reception.
The plan shifts away from the federal government’s previously promised mandatory AI safeguards. Instead, it is positioned as a whole-of-government roadmap for building an “AI-enabled economy”.
The plan has raised alarm bells among experts for its lack of specificity, measurable targets, and clarity.
Globally, incidents of AI harm are rising. From major cybercrime breaches using deepfakes to disinformation campaigns fuelled by generative AI, the lack of accountability is staggering. In Australia, AI-generated child sexual abuse material is rapidly spreading, and current laws are failing to protect victims.
Without dedicated AI regulation, Australia will leave the most vulnerable at risk of harm. But there are frameworks elsewhere in the world we can learn from.
No dedicated AI laws in Australia
The new plan does not mandate a standalone AI Act. Nor does it make concrete recommendations for reforming existing laws. Instead, it establishes an AI Safety Institute and other processes, including voluntary codes of conduct.
According to Assistant Minister for Science, Technology and the Digital Economy Andrew Charlton, “the Institute will be [..] working directly with regulators to make sure we’re ready to safely seize the benefits of AI with confidence.” However, the institute has only been given guidance and advisory powers.
Australia also has a history of blaming algorithms for legal failures, such as the Robodebt scandal. Existing legal protections are not enough to address current and potential AI harms. As a result, the new AI plan risks amplifying injustices.
Legal whack-a-mole
Holding tech companies legally liable is no easy feat.
Big tech consistently seeks loopholes in existing legal systems. Tech giants Google and OpenAI are claiming “fair use” provisions in US copyright law legalise data scraping.
Social media companies Meta and TikTok are exploiting existing laws – such as broad immunity under the US Communications Decency Act – to avoid liability for harmful content.
Many are also using special purpose acquisition companies (essentially shell companies) to circumvent antitrust laws that target anti-competitive conduct.
Under the new national plan, Australia’s “technology-neutral” approach holds that existing laws and regulations are sufficient to combat potential AI harms.
According to this line of thinking, concerns such as privacy breaches, consumer fraud, discrimination, copyright and workplace safety can be addressed with a light touch – regulation only where necessary. And the AI Safety Institute would be “monitoring and advising”.
The existing laws cited as sufficient include the Privacy Act, Australian consumer law, existing anti-discrimination, copyright and intellectual property laws, as well as sector-specific laws and standards, such as those in the medical field.
This may look like comprehensive legal oversight. But legal gaps remain, including those related to generative AI, deepfakes, and synthetic data fabricated for AI training.
There are also more foundational concerns around systemic algorithmic bias, autonomous decision-making and environmental risk. A lack of transparency and accountability looms large, too.
Big tech often uses legal uncertainty, lobbying and technical complexity to delay compliance and sidestep accountability. The companies adapt while the legal system attempts to catch up – like a game of whack-a-mole.
A call to action for Australia
Just like the moles in the game, big tech often engages in “regulatory arbitrage” to get around the law. This means moving to jurisdictions with less stringent laws. Under the current plan, that now includes Australia.
The solution? Global consistency and harmonisation of relevant laws, to cut down on the number of areas big tech can exploit.
Two frameworks in particular offer lessons. Harmonising Australia’s national AI plan with the EU AI Act and Aotearoa New Zealand’s Māori AI governance framework would improve protections for all Australians.
The EU AI Act was the world’s first AI-specific legislation. It provides clear rules on what is and is not allowed. AI systems are assigned legal obligations and responsibilities based on the level of potential societal risk they pose.
The act puts in place various enforcement mechanisms. These include specific financial penalties for non-compliance, as well as EU- and national-level governance and oversight bodies.
Meanwhile, the Māori AI governance framework outlines Indigenous data sovereignty principles. It highlights the importance of Māori data sovereignty in the face of inadequate AI regulation.
The framework includes four pillars that provide comprehensive action to support Māori data sovereignty, the health of the land, and community safety.
The EU AI Act and the Māori framework articulate clear values and translate them into specific protections: one through enforceable risk-based rules, the other through culturally grounded principles.
Meanwhile, Australia’s AI plan claims to reflect “Australian values” but provides neither regulatory teeth nor cultural specificity to uphold them. As legal experts have argued, Australia needs AI accountability structures that don’t rely on individuals successfully prosecuting well-resourced corporations under outdated laws.
The choice is clear. We can either chase an “AI-enabled economy” at any cost, or build a society where community safety, not money, comes first.
- Jessica Russ-Smith, Associate Professor of Social Work and Chair, Indigenous Research Ethics Advisory Panel, Australian Catholic University; Immaculate Motsi-Omoijiade, Senior Research Fellow – Responsible AI Lead, AI and Cyber Futures Institute, Charles Sturt University, and Michelle D. Lazarus, Director, Centre for Human Anatomy Education, Monash University
This article is republished from The Conversation under a Creative Commons license. Read the original article.

