- AI Legislation Series: Denmark seeks to give individuals "copyright" control over their own likeness.
Denmark is moving to give individuals stronger copyright-like control over their own likeness in the face of AI-generated deepfakes. The idea is to treat a person's image, voice, and other identifiable traits as something that can be owned and protected under copyright-like laws, making it easier to remove unauthorised uses and to seek damages.

What's being proposed and why it matters

Protecting identity in the age of AI: Because AI can imitate a person's face, voice, or movements, realistic deepfakes are increasingly being created for misleading, exploitative, or harmful purposes. This proposal aims to give individuals legal rights and clear remedies when their likeness is misused.

A broader approach to enforcement: By treating likeness as protectable under copyright-like rules, individuals can pursue takedowns and damages when platforms, producers, or others use their features without consent. This shifts some of the enforcement burden to platforms and gives individuals the right to demand speedy removal of infringing content.

What the proposed protections include

- Likeness as property: A person's face, voice, and body movements could be protected as an intellectual property asset. This means unauthorised AI-generated reproductions could trigger rights similar to copyright or related rights.
- Consent and takedown duties: If content uses someone's likeness without permission, there could be a formal right to demand removal from digital platforms and to pursue compensation for harm.
- Platform accountability and penalties: Online platforms may face penalties or liability if they fail to comply with takedown requests or otherwise facilitate unauthorised deepfakes of individuals.
- Scope beyond celebrities: The idea is to cover everyday individuals, not only public figures, recognising that any person's identity could be exploited by AI.

What this could mean in practice

- For individuals: A clearer route to control how their likeness is used by AI tools, with potential relief through takedown processes and the possibility of damages for misuse.
- For AI developers and platforms: Additional compliance obligations to verify consent, to remove unauthorised uses, and to respond to takedown requests promptly. This could influence product design, consent flows, and content moderation practices.
- For businesses and content creators: A need to secure consent for using someone's likeness in AI-generated content and to implement robust processes for honouring takedown notices to avoid liability.
- Technical feasibility: The law may prompt clearer standards for identifying and attributing authorised uses, and for verifying consent in AI workflows. If passed and implemented, other countries could follow suit so that consistent standards are applied.

What to monitor

- Legislative status and final text: The exact wording, including definitions of "likeness," "voice," and "movement," and how consent is obtained and proven, is yet to be settled. This will be central to the law.
- Regulatory guidance: Regulatory bodies will be expected to publish guidelines on how takedowns, penalties, and enforcement will operate in practice. It is unclear at the moment what this will look like.
- Industry responses: AI platforms and content creators will likely update terms of service, consent mechanisms, and content moderation policies in response to these developments. Ideally this will be a global update rather than a Denmark-specific one; adoption of similar laws by other countries would help with this.
If taken up by other jurisdictions, there would be strong value in a consistent approach to definitions and guidance.

Specific legislative analysis (excerpt from LSJ Online): For individuals, section 73 a of the Danish Copyright Act addresses realistic audio and visual imitations. It consolidates elements of existing statutory and non-statutory rules and legal principles within the Danish Criminal Code, the Danish Marketing Law, and the GDPR. The protection period is set to be 50 years from the year of death of the individual being imitated. The amendments proposed by this bill do not directly provide for compensation and imprisonment, but they provide individuals and performing artists with a legal basis to demand that illegal digital imitations be removed from social media and other platforms. Parties can consequently seek damages and compensation under the general rules of Danish law. Under the European Union's Digital Services Act (EU DSA), if a platform does not remove illegal content after receiving a notification, the provider may be liable for financial consequences. Critics have pointed to the limited capacity of the protective measures proposed by the bill. The laws are limited to Denmark, and illegal deepfake content could still be accessible from other countries, even where the same content is made unavailable to users accessing social media platforms from within Denmark.

Sources
[1] Denmark to tackle deepfakes by giving people copyright to their own features https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence
[2] Denmark's Deepfake Legislation: Bold Copyright and ... https://abounaja.com/blog/denmarks-deepfake-legislation-bold-copyright-and-digital-identity-protection
[3] Denmark plans to thwart deepfakers by giving everyone copyright over their own features https://www.cnn.com/2025/06/27/business/denmark-ai-law-scli-intl
[4] Denmark to pass copyright law that thwarts AI-generated ... https://www.designboom.com/technology/denmark-pass-law-citizens-copyright-face-voice-ai-deepfakes-07-03-2025/
[5] Deepfake legislation: Denmark takes action | World Economic Forum https://www.weforum.org/stories/2025/07/deepfake-legislation-denmark-digital-id/
[6] Danes Could Get Copyright to Their Own Image Under AI Bill | TIME https://time.com/7298425/ai-deepfakes-denmark-copyright-amendment/
[7] Denmark First to Combat AI by Giving Citizens Rights Over Their ... https://hellopartner.com/2025/07/11/denmark-set-to-be-first-european-country-to-combat-ai-by-giving-citizens-copyright-over-their-face-voice-and-body/
[8] Denmark Amend Copyright Law to Combat AI Deepfakes https://ambadar.com/insights/denmark-amend-copyright-law-to-combat-ai-deepfakes/
[9] The First Step Towards Fighting AI Abuse? Denmark Grants ... https://www.remotestaff.com.au/blog/denmark-ai-face-copyright-law/
[10] Denmark's Copyright Update: A New Defence Against Deepfake https://www.frozenlight.ai/post/frozenlight/673/denmark-copyright-law-against-deepfake/
[11] Denmark proposes copyright laws to protect against deepfakes | LSJ Online https://lsj.com.au/articles/denmark-proposes-copyright-laws-to-protect-against-deepfakes/

*This blog was produced with assistance from AI. All sources have been verified.
- AI Legislation Series: A simple guide to the EU's Artificial Intelligence Act.
The European Union's (EU) Artificial Intelligence Act (AI Act) officially came into force on 1 August 2024 and is the world's first comprehensive legal framework governing AI systems. It adopts a risk-based approach that categorises AI applications by their potential impact on safety and people's rights. Under this risk-based regulation model, the rules focus on the applications that could seriously impact people, with less attention paid to those that pose little danger. While the Act primarily targets the EU internal market, its extraterritorial reach means that organisations outside the EU, including Australian businesses, will be affected if they develop or supply AI systems used within the EU.

What is the danger scale?

The EU AI Act classifies AI systems into risk categories and applies corresponding regulatory obligations to each:

1. Unacceptable Risk: These AI practices are prohibited outright due to their potential to seriously threaten individual rights or public safety. For instance, AI systems that engage in social scoring (assigning reputational scores to individuals that can limit access to services) or exploit vulnerabilities through manipulative tactics are banned. Think of a Black Mirror episode. This category represents the EU's commitment to preemptively halt harmful AI uses.

2. High Risk: AI systems in this category are used in sensitive areas where erroneous or biased decisions can lead to significant harm. Examples include AI employed in healthcare for diagnostic support, in education for admissions decisions, in employment for recruitment, in migration management for border control, and in law enforcement for suspect identification. Such systems must meet strict requirements regarding data quality, transparency, human oversight, and robustness, and must undergo conformity assessments before deployment.

3. Limited Risk: AI systems that interact with users, such as chatbots or content recommendation engines, fall under this group. They must adhere to transparency guidelines, which generally require informing users that they are engaging with an AI system, ensuring clear communication without misleading individuals.

4. Minimal Risk: The majority of everyday AI applications, such as spam filters, language translation tools, and video games, are placed here. These face minimal regulatory interference to encourage continued innovation and adoption.

Compliance, Oversight, and Enforcement

The EU AI Act sets up several layers of oversight and enforcement. Each EU country picks its own main authority to make sure organisations follow the rules, handle necessary registrations, and look into any possible violations. These national authorities are guided and coordinated by the European Artificial Intelligence Board, which helps make sure the rules are applied the same way across all EU countries. For high-risk AI, conformity assessments by notified or certification bodies are mandatory to verify adherence to the Act's standards before market entry.

Non-compliance carries significant penalties. Entities breaching prohibitions on unacceptable AI systems may face fines up to €35 million or 7% of global turnover, whichever is higher. Other infringements can attract penalties up to €15 million or 3% of global turnover. The Act also takes into account the economic scale of organisations, offering proportionality for SMEs and startups.
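To make the tiered model above a little more concrete, here is a minimal illustrative sketch (in Python) of how an organisation might triage its own AI use cases against the four tiers described in this post. The tags, tier mapping, and function names are simplified assumptions for illustration only; they are not the Act's legal definitions, and a real classification exercise would work from the Act's annexes with legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations + conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical use-case tags mapped to tiers, loosely following the examples in this post.
TIER_BY_TAG = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "manipulative_targeting": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "education_admissions": RiskTier.HIGH,
    "border_control": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "content_recommendation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(tags: list[str]) -> RiskTier:
    """Return the most severe tier matched by any tag (defaults to minimal risk)."""
    severity = [RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED, RiskTier.MINIMAL]
    matched = [TIER_BY_TAG.get(tag, RiskTier.MINIMAL) for tag in tags]
    return min(matched, key=severity.index, default=RiskTier.MINIMAL)

if __name__ == "__main__":
    use_case = ["customer_chatbot", "recruitment_screening"]
    tier = triage(use_case)
    print(f"{use_case} -> {tier.name}: {tier.value}")
```

A triage like this only flags which compliance track might apply; the obligations themselves (data quality, human oversight, conformity assessment) still have to be implemented and evidenced separately.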
Implications for Australia

Although designed for the EU, the AI Act's extraterritorial scope means Australian companies that develop, deploy, or distribute AI systems intended for use in the EU must comply with its requirements. This includes manufacturers exporting products embedded with AI components, software providers offering AI services to EU clients, and any organisation whose AI outputs are used by EU-based businesses or consumers.

In 2024 the Australian Government consulted on the need for mandatory guardrails for AI in high-risk settings. It noted that Australia's current regulatory system is not fit for purpose to respond to the distinct risks that AI poses. If Australia adopted a similar risk-based framework to the EU, there would be benefits in harmonisation with the EU, in establishing a baseline of protection for citizens' rights, and in promoting innovation and public trust. This type of umbrella legislation would complement Australia's existing and emerging AI-specific laws, such as the Criminal Code Amendment (Deepfake Sexual Material) Act 2024; it need not be an either/or choice. And while the systems and laws of the EU differ from Australia's, adoption of a broad regulatory structure that addresses AI risks across sectors and use cases would advance Australia's current AI guardrails significantly.

Sources
[1] EU regulates AI: will Australia follow suit? - Insight https://www.minterellison.com/articles/eu-regulates-ai-will-australia-follow-suit
[2] Europe's AI Act takes effect: What Australian businesses ... https://piperalderman.com.au/insight/europes-ai-act-takes-effect-what-australian-businesses-need-to-know-for-their-use-of-ai/
[3] The EU's AI Act & Its Impact on Australia https://privacy108.com.au/insights/eu-ai-act-impact-on-australia/
[4] What lessons can Australia learn from the EU Artificial ... https://lsj.com.au/articles/what-lessons-can-australia-learn-from-the-eu-artificial-intelligence-act/
[5] The EU AI Act and the Impact on Australia and New Zealand https://hamiltonlocke.com.au/the-worlds-first-ai-rulebook-the-eu-ai-act-and-the-impact-on-australia-and-new-zealand/
[6] AI Watch: Global regulatory tracker - Australia https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia
[7] SHOULD AUSTRALIA FOLLOW EUROPE'S APPROACH ... https://anujolt.org/api/v1/articles/129799-should-australia-follow-europe-s-approach-to-ai-standards-and-regulation.pdf
[8] Australia and the EU should work with the South Pacific on AI https://www.aspistrategist.org.au/australia-and-the-eu-should-work-with-the-south-pacific-on-ai/
[9] Navigating the legal landscape: AI in Australia https://www.ashurst.com/en/insights/navigating-the-legal-landscape-ai-in-australia/
[10] How are AI regulatory developments in the EU and US ... https://www.governanceinstitute.com.au/news_media/how-are-ai-regulatory-developments-in-the-eu-and-us-influencing-ai-policy-making-in-australia/
[11] AI Act https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[12] EU Artificial Intelligence Act | Up-to-date developments and ... https://artificialintelligenceact.eu
[13] The EU AI Act: A Quick Guide https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide
[14] Introducing mandatory guardrails for AI in high-risk settings: proposals paper https://consult.industry.gov.au/ai-mandatory-guardrails

*This blog was produced with assistance from AI. All sources have been verified.
- AI Legislation Series: Suicidal Ideation and Chatbots - California's SB 243
Chatbot use as a "companion" or trusted advisor is on the rise. The benefits and potential harms of this use are as yet unknown. However, there have been documented cases of tragic deaths that followed disturbing engagements with chatbots. As a result, California has implemented laws specifically designed to regulate AI responses to suicidal ideation. This is an important step forward in setting guardrails for AI companies to protect vulnerable people, and it should be adopted by all governments.

California's new SB 243 law sets rules for operators of "companion chatbots" - AI systems that simulate human conversation and provide ongoing social engagement. The law's main focus is protecting users, especially minors, from risks related to suicide, self-harm, and emotional manipulation[1][5][2]. The law addresses documented cases where users, especially young people, interacted with chatbots about mental health issues but received no support, or received dangerous advice. By introducing required crisis intervention, transparency, and oversight, SB 243 aims to prevent harm and improve public safety associated with social AI technologies[1][5][2].

California has drawn some criticism from companies for imposing too high a regulatory burden. However, if other jurisdictions were to follow, this argument would be somewhat offset. Equally, as the following rules are required to be in place in California, they should be transferable to all locations where companies provide the AI chatbot service, with economies of scale reducing the burden.

What are the new rules?

Safeguards for Suicidal Ideation
- Crisis Protocols: Operators must implement systems to identify when users express suicidal thoughts or self-harm intentions during chatbot conversations. If such content is detected, the chatbot must immediately direct the user to crisis service providers, like suicide prevention hotlines or text lines[2][1].
- Transparency: Details of these crisis-response protocols must be published on the platform's website so users and regulators are aware of what actions will be taken[2].
- Annual Reporting: Operators must report annually to the California Office of Suicide Prevention on how many times crisis referral protocols have been triggered by users expressing suicidal ideation, and how often the chatbot itself brings up related topics. These reports will not include any personal user data[2]. If other countries, or states, followed this law, the appropriate reporting body would need to be clearly identified.

Regulatory and Operational Requirements
- Disclosure: Chatbots must inform users at the beginning of every conversation, and at least every three hours during ongoing sessions, that they are not human but AI-generated. These notifications aim to prevent users from being misled by the bot's conversational abilities[1].
- Minor Protection: Platforms must warn that companion chatbots may not be suitable for some minors. For minor users, reminders are required every three hours, alongside encouragement for regular breaks[1][5].
- Audit and Compliance: Operators are required to subject their platforms to periodic independent audits to ensure compliance with all requirements, with summary results made public[1].
- Scope: The law applies to any chatbot that provides social, emotional, or ongoing conversational engagement, but excludes customer service bots, technical support bots, and in-game characters that do not engage beyond their core role[5].
- Civil Enforcement: Any user harmed by a violation can bring a civil lawsuit for damages - at least $1,000 per violation or the actual damages - plus attorney's fees[2].

Effective Dates

Most requirements take effect on January 1, 2026. Annual reporting obligations begin in July 2027[2][5]. (A simple, illustrative sketch of how an operator might wire up these requirements appears after the sources at the end of this post.)

Relevant excerpt of SB 243, Companion chatbots (2025-2026):

22602. (a) If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
(b) (1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm.
(2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
(c) An operator shall, for a user that the operator knows is a minor, do all of the following:
(1) Disclose to the user that the user is interacting with artificial intelligence.
(2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.
(3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.

This article was written with the assistance of AI. All sources are verified.

Sources
[1] New California 'Companion Chatbot' Law Imposes ... https://www.skadden.com/insights/publications/2025/10/new-california-companion-chatbot-law
[2] AI Regulatory Update: California's SB 243 Mandates ... https://www.joneswalker.com/en/insights/blogs/ai-law-blog/ai-regulatory-update-californias-sb-243-mandates-companion-ai-safety-and-accoun.html?id=102lq7c
[3] AI Chatbots at the Crossroads: Navigating New Laws and ... https://www.cooley.com/news/insight/2025/2025-10-21-ai-chatbots-at-the-crossroads-navigating-new-laws-and-compliance-risks
[4] Senate Bill No. 243 CHAPTER 677 An act to add ... https://www.sidley.com/en/-/media/resource-pages/ai-monitor/laws-and-regulations/cal-sb243-companion-chatbots.pdf?la=en
[5] Is Your Chatbot Too Friendly? Watch Out for California's ... https://www.bassberry.com/news/california-companion-chatbot-bill/
[6] California's SB 243 Sets a New Regulatory Baseline for AI ... https://www.sondermind.com/resources/articles-and-content/california-sb-243-sets-a-new-regulatory-baseline-for-ai-companion-chatbots/
[7] California's Chatbot Bill May Impose Substantial ... https://www.crowell.com/en/insights/client-alerts/californias-chatbot-bill-may-impose-substantial-compliance-burdens-on-many-companies-deploying-ai-assistants
[8] What SB 243 does in a nutshell https://www.legallawyers.com.au/uncategorized/what-sb-243-does-in-a-nutshell/
[9] AI companion bots: Top points from recent FTC and ... https://www.dlapiper.com/en-au/insights/publications/2025/09/ftc-ai-chatbots
[10] Senate Bill No. 243 CHAPTER 677 https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243
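As flagged under Effective Dates above, here is a minimal sketch (in Python) of how an operator might combine the AI-disclosure reminder, a crisis-referral check, and a referral counter for annual reporting. It is illustrative only: the keyword check is a naive placeholder (real systems would use purpose-built classifiers developed with clinical and safety expertise), the three-hour cadence mirrors the reminder interval described in this post, and the class, constant, and message names are assumptions rather than anything drawn from the statute.

```python
import time

# Hypothetical, simplified indicators; a production system would use a
# purpose-built classifier developed with clinical and safety experts.
CRISIS_INDICATORS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_REFERRAL = (
    "You are not alone. If you are thinking about suicide or self-harm, "
    "please contact a crisis service such as a suicide prevention hotline "
    "or crisis text line."
)

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human. Consider taking a break."
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # three-hour cadence described in the post

class CompanionSession:
    def __init__(self) -> None:
        self.last_disclosure = time.monotonic()
        self.crisis_referrals_triggered = 0  # tallied for annual reporting

    def handle_message(self, user_message: str) -> list[str]:
        """Return system notices that must accompany the chatbot's reply."""
        notices = []
        # Crisis-referral check: refer the user to crisis services and count the event.
        if any(indicator in user_message.lower() for indicator in CRISIS_INDICATORS):
            notices.append(CRISIS_REFERRAL)
            self.crisis_referrals_triggered += 1
        # Periodic AI-disclosure reminder for continuing sessions.
        if time.monotonic() - self.last_disclosure >= REMINDER_INTERVAL_SECONDS:
            notices.append(AI_DISCLOSURE)
            self.last_disclosure = time.monotonic()
        return notices
```

The point of the sketch is the shape of the obligations - detect, refer, remind, and count - rather than the detection method itself, which is where the real engineering and clinical effort would sit.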
- Legislation Series: Degrading and humiliating deepfakes - South Australia's new laws.
The Summary Offences (Artificially Generated Content) Amendment Bill 2024 has introduced important changes to South Australian law, focusing on how deepfake technology is addressed legally when it is used for humiliating or degrading purposes. This legislation complements federal reforms like the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 and demonstrates a strong commitment by South Australia to tackling image-based abuse.

Positive changes

The law now makes it a criminal offence to create or share images or videos generated entirely by AI or digital technology that are humiliating, degrading, or invasive and which closely resemble a real person. This covers content that may be violent or sexually explicit. Before this amendment, the law mostly covered deepfakes created by editing real images or videos. The amendment closes the gap by including deepfakes made completely from scratch using AI.

Penalties include fines of up to $20,000 or up to four years in prison. Courts can order offenders to hand over any equipment or records used to create or share deepfakes. The law applies regardless of whether the person targeted is a minor or an adult, a celebrity or a private individual. Written consent by the person depicted is a defence, but consent by minors under 17 is not valid, nor is consent obtained through coercion or deception.

What this means for South Australians

There are now more comprehensive legal protections against humiliating and degrading AI-generated images or videos. Individuals impacted can report the matter to the police, who can investigate and take action against the offender. If the AI-created content is shared online, the law supports removing harmful material and punishing those responsible.

How could protections be further strengthened in the future?

Denmark's move to give a person copyright over their likeness deals with all AI deepfakes, not just those which depict someone in a humiliating or degrading way. For example, if someone's likeness is used without consent, individuals can legally demand that platforms and hosts remove AI-generated images, audio, or videos. This is similar to copyright takedown procedures and allows rapid responses to unauthorised content dissemination. Online platforms that fail to respond properly or promptly to takedown requests face penalties, including significant fines under the EU Digital Services Act (EU DSA). In addition to creators being held accountable, this approach holds companies accountable for hosting unlawful deepfake content. Furthermore, unlike traditional libel or privacy claims that require proof of reputational damage or malicious intent, it allows affected individuals to claim compensation for unauthorised use of their likeness even without demonstrating specific harm. You can read more about Denmark's proposed laws in one of our earlier blog posts.
Good to know definitions

The following are excerpts from the Summary Offences (Artificially Generated Content) Amendment Bill 2024:

26F Interpretation

humiliating or degrading depiction, in relation to a simulated person, means artificially generated content depicting—
(a) an assault or other act of violence done by or against the simulated person; or
(b) an act done by or against the simulated person that reasonable adult members of the community would, were the act to be done by or against a real person, consider to be humiliating or degrading to the real person (but does not include an act that reasonable adult members of the community would consider to cause only minor or moderate embarrassment);

invasive depiction, in relation to a simulated person, means artificially generated content depicting—
(a) the simulated person in a state of undress such that—
(i) in the case of a simulated female person—the bare breasts are visible; or
(ii) in any case—the bare genital or anal region is visible; or
(b) the simulated person performing a private act,
however, a depiction of a simulated person that falls within the standards of morality, decency and propriety generally accepted by reasonable adults in the community will not be taken to be an invasive depiction;

private act means—
(a) a sexual act; or
(b) an act carried out in a sexual manner; or
(c) urinating or defecating;

simulated person means a person depicted in artificially generated content that—
(a) purports to be a depiction of a particular real person; or
(b) so closely resembles a depiction of a particular real person that a reasonable person who knew the real person would consider it likely to be a depiction of the real person.

Sources
South Australian Attorney-General's Department media release, November 2, 2025: https://www.agd.sa.gov.au/news/nation-leading-changes-tackling-the-dark-side-of-artificial-intelligence
South Australian Parliament Hansard, Second Reading Speech, October 17, 2024: https://hansardsearch.parliament.sa.gov.au/daily/uh/2024-10-17/33
Summary Offences (Artificially Generated Content) Amendment Bill 2024 full text: https://www.legislation.sa.gov.au/lz?path=%2Fb%2Fcurrent%2Fsummary+offences+%28artificially+generated+content%29+amendment+bill+2024

*This blog was produced with assistance from AI. All sources have been verified.
- The Potential of Live Translation for Enhanced Government Services
Accessible and relevant information is key to the successful delivery of government services. This can be challenging where users of government services don't speak English (or, outside Australia, the language of the government official). The recent introduction of a live translation feature in Apple's latest AirPods marks a significant technological advancement, with the potential to improve how governments talk to the diverse communities they serve.

Communication for Enhanced Service Delivery

Apple's live translation feature, available on the latest AirPods models, enables real-time, two-way translation of spoken conversations. Is it perfect? No. Is it pretty good? Yes. And this should be a case of not letting perfect get in the way of a good outcome. By leveraging the power of on-device artificial intelligence, this technology allows for communication across various languages. For government agencies, this opens up a world of possibilities for improving service delivery and accessibility.

Applications in Government Services

At front-facing service counters, such as those for registrations, permits, or inquiries, the live translation feature could instantly break down language barriers. A government employee could communicate with a non-English speaker, ensuring that information is better conveyed and understood.

In the education sector, imagine how much parent-teacher interviews could be improved for children whose parents don't speak English. With live translation, teachers and parents can have meaningful conversations about a student's progress, fostering a stronger home-school partnership.

In critical situations, clear communication can be a matter of life and death. While not a replacement for professional interpreters, the live translation feature could be a valuable tool for first responders to quickly understand the situation and provide immediate assistance.

For those receiving in-home care and support who have limited English, a new opportunity to communicate with carers and those delivering support services would enhance the quality of the services and benefits received.

Benefits and Considerations

The primary benefit of this technology is its ability to provide instant and hands-free translation, making interactions more natural and efficient. The on-device processing ensures that sensitive conversations remain private, a crucial consideration for government applications. However, it is important to acknowledge the limitations. The technology is still in its early stages, and its accuracy is not sufficient for legally binding or sensitive communications. In such cases, professional interpreters remain essential. Furthermore, the current language support does not cover all languages or dialects spoken in a community.

Conclusion

The opportunity is there for governments to pilot this technology in targeted areas to improve accessibility and impact. In the short term, live translation should be integrated and made available for certain services in the way that mobile phones are already used today. There will need to be guardrails around appropriate use, but these should inform, not get in the way of, uptake and application.
- Big Tech & Taxes: Why Australia should consider a DST (Digital Services Tax)
At the recent Productivity Roundtable there were calls for cuts to corporate tax, and we know the income tax burden is contributing to generational inequality. At the same time, as digital platforms increase their dominance, governments around the world are grappling with how to ensure tech giants contribute fairly to public finances.

In Australia, global tech giants - including Google, Amazon, Meta (Facebook), and Apple - paid a combined total of only approximately $254 million in corporate tax for the 2022-23 financial year, despite generating an estimated $15 billion in local revenue, a figure that continues to grow. That is an effective tax rate of less than 2% ($254 million on $15 billion works out to roughly 1.7%), far below Australia's statutory 30% corporate tax rate, largely due to profit-shifting and the booking of profits in lower-tax jurisdictions.

Australia's Tax Gap: A Closer Look

Google Australia paid about $124 million in taxes on $2 billion of income in 2024, despite higher estimated gross revenue, much of which was routed through offshore entities. Meta (Facebook) Australia paid just $18.2 million in taxes on $1.15 billion in revenue (2022), with over 91% of its income classed as non-taxable due to accounting practices. Apple Australia paid $142 million in taxes on local income, a small fraction of its gross revenue due to international profit-shifting strategies. These figures highlight the limitations of traditional corporate tax models in the digital age.

The Conversation recently explained why Australia is currently in this predicament:

"After the second world war, Australia entered into tax treaties so foreign companies selling to Australian customers would no longer be taxed here… As the world moved to digital products this century, it became easy for giant multinational enterprises offering advertising on social media (such as Facebook and Instagram), advertising on search platforms (Google), and streaming services (Netflix) to provide those services from abroad…. However, treaty renegotiation is slow and complex. So several European countries, beginning with France in 2019, came up with a short-cut solution."

To respond to these modern-day challenges, Europe offers some options.

DSTs Across Europe

France was one of the first to implement a DST, introducing a 3% levy in 2019. Italy, Austria, Spain, and Turkey followed suit, each with their own versions targeting digital advertising, social media, and online intermediation. France collected €680 million in DST revenue in 2023 and expects €780 million in 2024. Italy, Austria, and Spain report similar figures, with DSTs covering services like video streaming and music platforms. These taxes apply regardless of where the company is headquartered, addressing the loophole that allows digital firms to avoid local taxation.

The UK's Digital Services Tax: A Case Study

In April 2020, the United Kingdom introduced a 2% Digital Services Tax (DST) on revenues from search engines, social media platforms, and online marketplaces. Companies are liable if they earn more than £25 million in UK digital revenues and over £500 million globally. According to Tax Justice UK, about 90% of DST revenue comes from just five companies - Amazon, Meta, Google, and others - making it a de facto "Big Tech Tax". In its first year, only 18 companies paid the DST, and more than a dozen paid more in DST than in corporate tax. Amazon, for instance, paid no UK corporate tax in 2020/21 but met its DST obligations in full.
The DST currently raises around £800 million annually and is projected to reach nearly £1 billion per year by 2027. Advocacy groups and political parties have proposed increasing the rate to as much as 10% to boost revenue and curb tax avoidance.

Challenges and Global Negotiations

DSTs were introduced as interim measures while the OECD and G20 negotiated a global tax framework known as the "Two Pillar Solution." Pillar One proposes reallocating taxing rights so that profits are taxed where users are located. However, negotiations have stalled, and DSTs remain in place. The United States has criticised DSTs for disproportionately targeting American firms. In 2024, the U.S. Treasury reported that Pillar One could reduce federal receipts by $1.2 billion.

Why DSTs Matter

DSTs are more than just revenue tools; they ensure that companies profiting from domestic users contribute to public finances, especially as governments face rising costs in health, policing, education, and infrastructure. As noted in the Tax Justice UK blog, DSTs have become essential for addressing tech tax avoidance and ensuring fairness in the digital economy - something which is currently lacking, and for which governments are paying the price.

Time for Australia to Act

Australia's current tax receipts from Big Tech fall short of what's fair. With billions in revenue generated locally, the case for a Digital Services Tax is stronger than ever. Learning from the UK and Europe, Australia could design a DST that ensures tech giants contribute proportionately to the public services their users rely on.

Sources and Further Reading
https://theconversation.com/australia-could-tax-google-facebook-and-other-tech-giants-with-a-digital-services-tax-but-dont-hold-your-breath-257251
https://taxjustice.uk/blog/what-is-the-digital-services-tax-and-why-we-should-raise-it
https://www.bakermckenzie.com/en/insight/publications/2025/03/navigating-the-digital-tax-landscape
- Why Australia Should Treat Population Data as a Sovereign Resource
In Australia, mineral resources are considered sovereign assets, with royalties paid by companies that extract them. In the digital age, population-scale data has become just as valuable - especially when analysed by artificial intelligence (AI) - yet there is no equivalent mechanism to ensure Australians benefit when that data is monetised.

Population data should be treated as a sovereign resource because it is a valuable national asset which is being used by large companies to generate substantial economic returns. Just as countries benefit financially from natural resources, treating population data as an owned, monetisable asset allows its economic value to be realised by companies and governments alike.

Large AI platforms use aggregated personal data to predict behaviour, tailor advertising, and maximise engagement, generating substantial private profit. In 2024, Facebook (Meta), Google (Alphabet), Amazon, and Netflix had a combined revenue of over US$1.5 trillion. [1] Tech-led companies such as these have been identified as playing a central role in the growing wealth gap in our society, where extreme wealth is concentrated among a few companies and billionaires who gather up our data, organise it, and turn out products like perfectly targeted ads. [2]

While AI presents many opportunities and benefits to society, it is also responsible for harms, and governments are largely left to fund the response and mitigation services for them. For example, AI-driven content algorithms can promote body image issues, especially among young people, contributing to eating disorders that place high demand and cost on Australia's health system. AI is increasingly used to generate fake sexual images, leading to more harm and consequently more reports and police investigations. Generative AI can amplify extremist propaganda, making radicalised recruitment faster and harder to detect; intelligence agencies report rising risks, requiring greater investment in counter-radicalisation programs and online monitoring. AI automation is predicted to replace millions of tasks, while governments will be forced to subsidise retraining and provide welfare support.

Designating population data as a sovereign resource would allow a data mining royalty - a levy on large-scale companies extracting and monetising Australian population data. The funds could directly support the public services now burdened by AI's impact: mental health programs, police resources, national security, and workforce transition. Without targeted revenue capture, Australians bear the social and economic costs of AI's growth while corporations retain the profits. Implementing a data mining royalty would align digital commerce with the principles applied to the resources sector: entities that derive profit from assets collectively owned by Australians should contribute an equitable share to the public good.

[1] Google, Amazon, Meta, Apple and Microsoft (GAMAM) Statistics and Facts: Statista
[2] Tech Is Splitting the U.S. Work Force in Two: NY Times
- The Cost of Espionage: How AI and Tech Are Changing the Game
"Espionage and foreign interference will be enabled by advances in technology, particularly artificial intelligence." - Director-General of Security, Mike Burgess

Australia's national security agency, ASIO, recently released its Annual Threat Assessment, and one of the key takeaways is that espionage is costing the country a lot - around $12.5 billion a year. That's not just about stolen secrets; it includes lost intellectual property, compromised research, and disrupted government operations. But what's really changing the landscape is how artificial intelligence (AI) and other tech advances are being used in these operations.

AI Makes Espionage Easier

AI is a powerful tool, and while it's helping businesses and governments work smarter, it's also being used by foreign intelligence services to collect and analyse data more efficiently. ASIO points out that these actors can now use AI to sift through huge amounts of information - like social media posts, leaked documents, and public databases - to build detailed profiles of people and organisations [1]. This kind of profiling can help them identify targets, find weaknesses, and even automate parts of their operations. For example, AI can be used to create convincing fake identities or deepfake videos, which can trick people into sharing sensitive information.

Tech Is a Force Multiplier

ASIO's report highlights how technology is making espionage more scalable. In the past, spying required a lot of resources and physical presence. Now, with digital tools, foreign actors can reach into Australian institutions from anywhere in the world. ASIO has disrupted 24 major espionage operations in the last three years - more than in the previous eight combined [2]. Some of these operations involved foreign agents applying for government jobs or convincing insiders to share access to databases. Technology makes these tactics more effective and harder to detect.

What's Being Done About It

To help tackle these challenges, the Australian Government launched the Technology Foreign Interference Taskforce (TechFIT). This initiative brings together government and industry to protect sectors like AI, quantum computing, and biotech from foreign interference [3]. TechFIT works by raising awareness, sharing threat intelligence, and helping organisations build better security practices. It's a way to make sure that Australia's tech sector can keep innovating without being compromised.

Why It Matters

Espionage isn't just about spies in trench coats anymore. It's about data, algorithms, and networks. The risks are real, and they affect everything from national security to economic competitiveness. ASIO's message is clear: as technology evolves, so do the threats - and Australia needs to stay ahead of the curve.

• Secure Innovation provides security guidance to help protect emerging technology companies from a range of threats.
• Secure Your Success provides guidance to individuals and organisations to prevent foreign powers gaining advantage from Australian innovation by stealing intellectual property, harvesting expertise and co-opting academic research.
• The Protective Security Top 10 provides the essential components of a complete security framework.
• Protect Your Research explains what you can do to protect yourself and your institution from harm.
• Report Prying Minds provides guidance for the defence industry on the threat of espionage and how to protect against it.
• Think Before You Link explains the threat from malicious social media profiles. It provides guidance on how to avoid being targeted through professional networks and other online platforms.
• Countering the insider threat provides guidance on hardening your organisation against the insider threat and limiting damage if compromise occurs.
• Clearance holder obligations provides advice on the requirements of maintaining an Australian Government security clearance.
• Managing the espionage and foreign interference threat while travelling overseas provides guidance on how you can protect yourself and your assets while travelling internationally.

References
[1] ASIO Annual Threat Assessment 2025 - intelligence.gov.au
[2] Technology Foreign Interference Taskforce
[3] ASD Cyber Threat Report 2022-23
- Governance Lessons from the Four Corners ATO GST investigation.
The Four Corners investigation into the Australian Taxation Office's (ATO) "No Return" GST fraud story offers a timely lesson about the critical need for robust AI governance and the non-negotiable importance of human oversight in government technology use. This investigation by ABC journalists Neil Chenoweth and Angus Grigg is a great case study for underscoring why public trust must be front and centre when government deploys automated, data-driven systems.

Automation Without Adequate Safeguards

Four Corners revealed that automation and massive job cuts at the ATO laid fertile ground for large-scale GST fraud. Lax oversight and an overreliance on digital automation meant fake refunds and sham invoices sailed through the system with minimal intervention. The system that was meant to uphold financial integrity instead delivered a windfall for fraudsters, damaging public confidence and exposing serious vulnerabilities in government operations.

Much of the fraud's success stemmed from the ATO's shift away from human verification to cost-saving automation. About 1,000 jobs were cut - half the people dedicated to GST oversight. Former frontline controls, like basic desk audits, were abandoned in favour of algorithmic checks and queue-based triage. With staff gone, the institutional "muscle memory" for detecting rorts faded, and the automated system lacked scrutiny and human oversight, failing to catch fraudulent activity until it was too late[2]. This outcome illustrates a fundamental truth for government AI deployments: without strong governance frameworks, automation can amplify risks rather than manage them.

Why Human Oversight and AI Regulation Matter

Robust AI governance, especially in government, is not just a technical need - it is a democratic imperative. Trust in government systems rests on the assurance that decisions are fair, explainable, and, when necessary, subject to human review. When failures occur, as in this GST example, they are not just operational lapses - they erode faith in public institutions.

ATO Second Commissioner Jeremy Hirschhorn articulates this social contract:

"AI may be a helper. It can move things around, it can link, synthesise and analyse information, and it can do some things much faster and more consistently than we as humans can. But AI cannot determine what constitutes fairness and reasonableness, having considered unique taxpayer circumstances with compassion and empathy. … Actions or decisions should be explicable by a human to the affected person in a way that the affected person can understand (even if automated or performed by AI). If you do not know why your organisation is doing things ('the computer said so'), you are breaching your responsibility to be accountable to both the individual taxpayer, but also the broader system." (Full speech: https://www.ato.gov.au/media-centre/speech-to-unsw-16th-atax-international-conference)

The Path Forward: Building Trust Through Responsible AI

The Four Corners story is a great example of why transparency, accountability, and clear lines of human responsibility aren't optional - they are what separates public service from bare algorithmic processing. As governments increasingly adopt automation and AI, it's critical to ensure that:

- Human oversight is embedded in every system where rights and livelihoods are at stake.
- AI-driven processes are transparent, explainable, and subject to regular review.
- Regulations require government agencies to maintain not just efficiency, but trustworthiness and accountability to citizens.

Regulated, responsible AI isn't just good policy - it's the only way to maintain public confidence and, ultimately, the integrity of our government decision makers.

Sources
FOUR CORNERS lifts the lid on ATO oversight failures in No Return ... https://tvblackbox.com.au/page/2025/07/28/four-corners-lifts-the-lid-on-ato-oversight-failures-in-no-return-expose/
Scammers, fraudsters and the tax office's missing $50 billion | Four Corners Documentary https://www.youtube.com/watch?v=lCNsluZrbvU
Speech to UNSW 16th ATAX International Conference https://www.ato.gov.au/media-centre/speech-to-unsw-16th-atax-international-conference
Trust in the age of AI: Data stewardship inside the ATO https://www.businessthink.unsw.edu.au/articles/ato-ai-ethics-human-oversight-tax-data
How the ATO balances artificial intelligence with human oversight https://businessthink.unsw.edu.au/articles/ato-ai-ethics-human-oversight-tax-data
Australian Taxation Office's Management and Oversight of Fraud ... https://www.anao.gov.au/work/performance-audit/australian-taxation-offices-management-and-oversight-fraud-control-arrangements-for-the-gst
ATO investigated 150 staff members for involvement in GST scam ... https://www.counterfraud.gov.au/case-studies/ato-investigated-150-staff-members-involvement-gst-scam-sparked-operation-protego
How easy is it to trick the Australian Taxation Office? | ABC News Daily podcast https://www.youtube.com/watch?v=XqgHbKQGZVY
Governance of Artificial Intelligence at the Australian Taxation Office https://www.anao.gov.au/work/performance-audit/governance-of-artificial-intelligence-the-australian-taxation-office
- Truth, Lies, and Ideological Neutrality: Dissecting Trump’s AI Principles
On 23 July, President Trump signed the "Preventing Woke AI in the Federal Government" Executive Order. Its intent is to guide the future of federal artificial intelligence procurement through two "Unbiased AI Principles": truth-seeking and ideological neutrality. Framed as an effort to ensure trustworthy AI, the order's rhetoric takes an overt swipe at Diversity, Equity, and Inclusion (DEI). While the attack on DEI fits neatly into Trump's ongoing culture war, the truth-seeking requirement is where the most profound contradictions lie, and it raises interesting questions about what this order means for conspiracy theories, "fake news" and the anti-vaccination movement.

Under Section 3 of the draft order, Trump declares that large language models (LLMs) must prioritize "historical accuracy, scientific inquiry, and objectivity" when responding to factual prompts, and "acknowledge uncertainty where reliable information is incomplete or contradictory" (Executive Order on Unbiased AI Principles, 2025). This sounds like a sensible step toward rooting AI outputs in evidence and transparency. But it raises an essential question: how does this reconcile with Trump's own record (and his base's) of promoting misinformation?

Fake news would theoretically be caught up in the latest AI Executive Order.

Trump vs. Truth

Donald Trump is not known for his fidelity to facts. According to the Washington Post's fact-checking database, Trump made 30,573 false or misleading claims during his presidency, an average of 21 per day (Washington Post, 2021). These falsehoods ranged from inflated economic statistics to debunked conspiracy theories, most notably the claim that the 2020 presidential election was stolen. If federal agencies are only allowed to procure AI tools that are "truth-seeking," one must ask: what happens when Trump's narratives collide with verified facts? Would a compliant AI system be obligated to contradict the very person who signed the order?

For example, an AI responding to the prompt "Was the 2020 election rigged?" would, in truth, need to answer "No." Multiple audits, bipartisan reviews, and the Cybersecurity and Infrastructure Security Agency (CISA) found the 2020 election to be "the most secure in American history" (CISA, 2020). Would this truthful response now be deemed ideologically biased?

When Truth Is Treated as Partisan

The executive order stipulates that LLMs "shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI." This implies that recognizing systemic inequality, gender diversity, or inclusive hiring policies might be construed as ideological manipulation. But science, history, and lived experience are not neutral by nature; they often tell uncomfortable truths. If an AI model reports that racial disparities in policing exist in the USA (which is factually supported by extensive academic and governmental studies), would that too be censored for being insufficiently "neutral"?

Moreover, the anti-vaccine movement, which gained renewed energy during the COVID-19 pandemic, is another key test. Trump himself gave contradictory messages, initially championing vaccine development but later courting anti-vaccine sentiment among his base. Any AI model built to uphold scientific inquiry would be required to affirm that vaccines significantly reduce hospitalization and death from COVID-19 (CDC, 2023).
Yet doing so may be perceived by skeptics as partisan, illustrating the challenge of defining "neutrality" when science contradicts political rhetoric.

The Contradiction

The order's truth-seeking principle, if applied sincerely, would compel AI to correct conspiracy theories, misinformation, and propaganda, many of which Trump has spread or endorsed. This would be a welcome development. Yet the accompanying ideological neutrality clause seems designed to shield specific narratives, especially those favored by the political right, from scrutiny. This creates a paradox: can AI be both truthful and politically neutral when truth itself is politically polarizing?

Conclusion: Ideological Neutrality Can't Be a Shield for Falsehood

The truth-seeking principle embedded in Trump's executive order could, on its face, promote more honest AI systems. But truth, especially in a politically polarized environment, is rarely viewed as neutral. When one political figure has made tens of thousands of false statements, insisting that AI must be neutral while also prioritizing truth becomes an irreconcilable contradiction. If these principles are to have any credibility, they must apply equally, especially to those who wield the most power.

Sources:
- Trump's Executive Order on Unbiased AI Principles (Draft, 2025) - [hypothetical URL: https://example.gov/executive-orders/unbiased-ai]
- Washington Post: Trump made 30,573 false or misleading claims as president
- CISA: Joint Statement on the Security of the 2020 Election
- CDC: COVID-19 Vaccine Effectiveness Data
- Trump’s Three AI Executive Orders: What’s Coming July 2025 and Why It Matters
Executive Orders are expected to be signed at the White House imminently.

A Dramatic Shift in US AI Policy

In the next few days, we should find out more about the details of three Executive Orders which are due to be signed imminently. Here is what we know so far. These orders are seen as the prelude to the broader "Winning the Race: America's AI Action Plan," which will set out strategies for long-term investment, regulatory reform, and international engagement on AI. The plan is expected to emphasise rapid commercialisation, streamlined oversight, and a global leadership role for the U.S. in artificial intelligence.

The Three Executive Orders Explained

1. Accelerating AI Infrastructure

The first order tackles the challenge of infrastructure for AI. The Department of Energy is tasked with speeding up the development of data centers and supporting grid and water infrastructure. This means fast-tracking permitting processes, even if it means overriding certain local environmental protections, and seeking proposals for state-of-the-art data centers on federal land. Backed by a $90 billion investment pledge from tech and energy giants, this move aims to expand America's computational muscle.

Pros: Rapidly expands national computing capacity, supports job creation across tech and construction, and positions the U.S. to compete with China's state-backed AI ambitions.
Cons: Potentially weakens environmental safeguards and may provoke opposition from state and local governments and regulators.

2. Exporting American AI Technology

With the second order, the administration is looking outward. By empowering agencies such as the Export-Import Bank and the U.S. International Development Finance Corporation, the U.S. will actively promote the export of its AI "stacks" - including chips, software, and services - to allies and emerging markets. The aim is to seed global markets with American-made technology, counterbalance Chinese influence, and boost the reach of domestic tech companies.

Pros: Helps allied countries build AI capabilities, strengthens U.S. companies' global presence, and limits competitors' footholds.
Cons: Increases national security risks and makes enforcing overseas AI safety standards harder.

3. Banning "Woke" or "Inclusive" Biased AI

Perhaps the most controversial, the third order mandates that any AI tool developed or procured by the federal government must meet the Trump administration's ideological standards. The goal, per administration officials, is to eliminate "woke bias" from government technology and ensure AI remains free of engineered social agendas.

Pros: Addresses conservative concerns about political bias in AI; pushes for more transparent, explainable models.
Cons: Could undermine fairness or diversity initiatives, raises constitutional questions about compelled speech and scientific integrity, and risks politicising AI procurement.

Expected Reactions

Industry groups are likely to welcome the deregulation and infrastructure investment, anticipating new opportunities for startups, cloud providers, and semiconductor firms. Critics, however, will warn that rolling back previous equity and risk management frameworks could leave gaps in bias prevention, consumer protection, and environmental safeguards, potentially impacting vulnerable populations and generating legal challenges.
- Why Legislation is Needed for Government AI Use
As artificial intelligence (AI) becomes increasingly embedded in government operations, the need for robust, ethical, and transparent governance is more urgent than ever. While Australia has made important strides with its AI Ethics Framework, legislating these principles for all government use of AI is a logical and necessary next step. Doing so would provide a clear operating framework and, crucially, help foster public trust in the growing use of AI technologies across the public sector.

The Case for Legislating AI Ethical Principles

Trust is the cornerstone of effective public service. Australians must be confident that AI systems used by their government are fair, transparent, and accountable. Australia currently has voluntary AI ethics principles, which, while valuable, do not carry the enforceability required to ensure consistent, responsible AI use across all departments. Legislating Australia's AI ethical principles would set a clear, non-negotiable standard for government, providing assurance to citizens that their rights and interests are protected as automation becomes more widespread (Australian Government, 2019).

Australia's AI Ethics Framework: The Foundation

The Australian Government's AI Ethics Framework, launched in 2019, outlines eight core principles:

- Human, social and environmental wellbeing: AI systems should benefit individuals, society, and the environment, promoting positive outcomes and minimising harm.
- Human-centred values: AI should respect human rights, freedom, dignity, and autonomy, aligning with Australian values and diversity.
- Fairness: AI systems should avoid bias, ensure equitable treatment, and not discriminate unlawfully against individuals or groups.
- Privacy protection and security: AI should uphold privacy rights and ensure the security of data, protecting individuals from misuse or unauthorised access.
- Reliability and safety: AI systems should operate reliably, safely, and as intended throughout their lifecycle, with risks identified and managed.
- Transparency and explainability: The operations and decisions of AI systems should be transparent, and outcomes should be explainable to those affected.
- Contestability: People should be able to challenge the use or outcomes of AI systems, especially those that significantly impact individuals.
- Accountability: Organisations and individuals responsible for AI systems must be accountable for their functioning, outcomes, and impacts, with clear governance and oversight mechanisms in place.

These principles are intended to guide the design, development, and implementation of AI systems across sectors. However, without legislative backing, their adoption in government remains uneven and optional. Legislating ethical principles would require departments to be transparent about how AI systems make decisions, protect privacy, and ensure that outcomes are fair and contestable. Australia should embed these requirements in law, ensuring that citizens can trust the purpose and impact of AI on government operations.

Best Practice in AI Governance: Lessons from Home and Abroad

The South Australian government has legislated the Five Safes Framework as part of its data sharing legislation, the Public Sector (Data Sharing) Act 2016. This allowed for progressive data sharing within a trusted model, so that public good outcomes could be pursued in a way in which the public could have confidence that the sharing was undertaken safely and appropriately.
Internationally, leading governments have moved beyond voluntary frameworks to implement enforceable governance arrangements. For example:

- In Canada, the Algorithmic Impact Assessment (AIA) tool is mandatory for federal departments deploying AI, ensuring risks are assessed and mitigated (Government of Canada, 2020).
- The European Union's AI Act legally requires public sector AI systems to meet strict ethical and safety standards. These approaches demonstrate that legislation is both possible and practical.
- Singapore's Model AI Governance Framework requires continuous monitoring and evaluation of AI systems to uphold ethical standards.

By legislating its AI ethical principles, Australia can provide public sector departments with the clarity and consistency needed to deploy AI responsibly. This step would not only align Australia with global best practice but also send a strong signal to citizens that their government is committed to ethical, accountable, and trustworthy use of AI.

Conclusion

Adopting best practice AI governance arrangements in public sector departments is crucial for fostering trust, accountability, and ethical usage of AI technologies. Legislating Australia's AI ethical principles is a practical and necessary step to ensure that as AI becomes more pervasive in government, it is always used in the service of the public good. By establishing clear frameworks, promoting transparency, and engaging with stakeholders, Australia can set a global benchmark for responsible AI in the public sector.

References:
- Australian AI Ethics Framework, https://www.industry.gov.au/data-and-publications/australian-ai-ethics-framework
- Government of Canada, Algorithmic Impact Assessment Tool, https://www.canada.ca/en/innovation-science-economic-development/news/2020/04/algorithmic-impact-assessment-tool.html
- European Commission, EU AI Act, https://digital-strategy.ec.europa.eu/en/policies/eu-ai-act
- EU Artificial Intelligence Act | Up-to-date developments and analysis, https://artificialintelligenceact.eu
- UK Government, AI and the Public Sector Code of Practice, https://www.gov.uk/government/publications/ai-and-the-public-sector-code-of-practice
- Singapore Government, Model AI Governance Framework, https://www.pdpc.gov.sg/Industry-Guides/2020/Model-AI-Governance-Framework











