
Why Legislation is Needed for Government AI Use

  • Gov+AI
  • Jul 14
  • 3 min read

As artificial intelligence (AI) becomes increasingly embedded in government operations, the need for robust, ethical, and transparent governance is more urgent than ever.


While Australia has made important strides with its AI Ethics Framework, legislating these principles for all government use of AI is a logical and necessary next step. Doing so would provide a clear operating framework and, crucially, help foster public trust in the growing use of AI technologies across the public sector.


The Case for Legislating AI Ethical Principles

Trust is the cornerstone of effective public service. Australians must be confident that AI systems used by their government are fair, transparent, and accountable. Australia currently has voluntary AI ethics principles which, while valuable, do not carry the enforceability required to ensure consistent, responsible AI use across all departments. Legislating Australia’s AI ethical principles would set a clear, non-negotiable standard for government, providing assurance to citizens that their rights and interests are protected as automation becomes more widespread (Australian Government, 2019).


Australia’s AI Ethics Framework: The Foundation

The Australian Government’s AI Ethics Framework, launched in 2019, outlines eight core principles:

- Human, social and environmental wellbeing: AI systems should benefit individuals, society, and the environment, promoting positive outcomes and minimising harm.

- Human-centred values: AI should respect human rights, freedom, dignity, and autonomy, aligning with Australian values and diversity.

- Fairness: AI systems should avoid bias, ensure equitable treatment, and not discriminate unlawfully against individuals or groups.

- Privacy protection and security: AI should uphold privacy rights and ensure the security of data, protecting individuals from misuse or unauthorised access.

- Reliability and safety: AI systems should operate reliably, safely, and as intended throughout their lifecycle, with risks identified and managed.

- Transparency and explainability: The operations and decisions of AI systems should be transparent, and outcomes should be explainable to those affected.

- Contestability: People should be able to challenge the use or outcomes of AI systems, especially those that significantly impact individuals.

- Accountability: Organisations and individuals responsible for AI systems must be accountable for their functioning, outcomes, and impacts, with clear governance and oversight mechanisms in place.


These principles are intended to guide the design, development, and implementation of AI systems across sectors. However, without legislative backing, their adoption in government remains uneven and optional.


Legislating ethical principles would require departments to be transparent about how AI systems make decisions, protect privacy, and ensure that outcomes are fair and contestable. Australia should embed these requirements in law, ensuring that citizens can trust the purpose and impact of AI on government operations.


Best Practice in AI Governance: Lessons from Home and Abroad


The South Australian Government has already legislated the Five Safes Framework as part of its data sharing legislation, the Public Sector (Data Sharing) Act 2016. This enabled progressive data sharing within a trusted model, so that public good outcomes could be pursued in a way that gave the public confidence the sharing was undertaken safely and appropriately.

Internationally, leading governments have moved beyond voluntary frameworks to implement enforceable governance arrangements. For example:

- In Canada, the Algorithmic Impact Assessment (AIA) tool is mandatory for federal departments deploying AI, ensuring risks are assessed and mitigated (Government of Canada, 2020).

- The European Union’s AI Act legally requires public sector AI systems to meet strict ethical and safety standards.

- Singapore’s Model AI Governance Framework calls for continuous monitoring and evaluation of AI systems to uphold ethical standards.

These approaches demonstrate that legislation is both possible and practical. By legislating its AI ethical principles, Australia can provide public sector departments with the clarity and consistency needed to deploy AI responsibly. This step would not only align Australia with global best practice but also send a strong signal to citizens that their government is committed to the ethical, accountable, and trustworthy use of AI.


Conclusion

Adopting best practice AI governance arrangements in public sector departments is crucial for fostering trust, accountability, and the ethical use of AI technologies. Legislating Australia’s AI ethical principles is a practical and necessary step to ensure that, as AI becomes more pervasive in government, it is always used in the service of the public good. By establishing clear frameworks, promoting transparency, and engaging with stakeholders, Australia can set a global benchmark for responsible AI in the public sector.

 

References: 

European Union, Artificial Intelligence Act, https://artificialintelligenceact.eu

UK Government, AI and the Public Sector Code of Practice, https://www.gov.uk/government/publications/ai-and-the-public-sector-code-of-practice  

Singapore Government, Model AI Governance Framework, https://www.pdpc.gov.sg/Industry-Guides/2020/Model-AI-Governance-Framework

 
 