Federal Court of Australia Issues Strong Guidance on AI Use in Legal Proceedings

The Federal Court of Australia has formally addressed the use of generative artificial intelligence in legal practice, reinforcing that while AI can improve efficiency, its misuse poses serious risks to the administration of justice.

In its General Practice Note - Use of Generative Artificial Intelligence (GPN-AI), the Court makes its position clear:

"The use of generative artificial intelligence must be consistent with a practitioner's overarching duty to the Court and the administration of justice."

This principle is not merely advisory. It goes to the core of a lawyer's professional obligations.

"Unacceptable" Conduct: False Citations and AI Hallucinations

In a Notice to the Profession dated 16 April 2026, Chief Justice Debra Mortimer addressed growing concerns around the misuse of AI in litigation.

The Court explicitly warned against the submission of inaccurate or fabricated material:

"The presentation of false or inaccurate information to the Court is unacceptable."

This includes:

  • Non-existent case citations
  • Fabricated quotations
  • Incorrect legal authorities generated by AI tools

The Court emphasised that these failures are not attributable to the technology itself, but to the practitioner relying on it without proper verification.

Verification Is Mandatory, Not Optional

The GPN-AI makes it unequivocally clear that responsibility remains with the legal practitioner:

"Practitioners must ensure that any material produced with the assistance of generative AI is accurate."

This reflects a critical legal principle: delegation to AI does not dilute professional accountability.

In practice, this means:

  • Every citation must be independently verified
  • Authorities must be checked against primary sources
  • AI outputs must be treated as drafts, not final work product

Failure to do so may amount to a breach of duties owed to the Court.

AI Recognised as Beneficial, But Risk-Laden

Importantly, the Court does not reject AI outright. In fact, it acknowledges its utility:

"Generative artificial intelligence can be used to assist with legal work and improve efficiency."

However, this endorsement is conditional. The same guidance warns:

"If used inappropriately, generative artificial intelligence poses risks to the proper administration of justice and public confidence in the legal system."

This dual position reflects a broader regulatory trend: AI is permissible, but only under disciplined and responsible use frameworks. You can view additional Generative AI Resources provided by the Court.

A Turning Point for AI Governance in Australia

The Federal Court's position signals a clear shift toward enforceable standards around AI usage in professional settings, particularly in high-stakes environments like legal proceedings.

For organisations beyond the legal sector, the implications are significant:

  • AI governance is no longer optional
  • Verification and human oversight are essential controls
  • Accountability remains with the organisation, not the tool

The Role of Responsible AI Certification

As regulatory expectations evolve, organisations must demonstrate that their AI systems are deployed responsibly, transparently, and with appropriate safeguards.

Responsible AI Australia provides a structured certification framework that enables organisations to:

  • Validate ethical AI practices
  • Build stakeholder trust
  • Align with emerging regulatory expectations

Just as the Australian Made mark signals quality and origin, Responsible AI Australia certification signals trustworthy and ethical AI use.

Final Perspective

The message from the Federal Court of Australia is clear:

AI is a powerful tool, but without proper governance, it introduces systemic risk.

The organisations that will lead in this new environment are those that treat AI not just as a capability, but as a responsibility.

Syed Mosawi

Founder at Responsible AI Australia. Building certification frameworks to help organisations operationalise their AI governance and compliance.