By Ricky Pann
To: All Australians / Members of the Australian Parliament
From: Ricky Pann MMap (UTS), MDM (UNSW)
Date: 27 February 2026
Subject: The Mandatory Labelling of AI and the Right to Orientation
Part I: Briefing Note (Executive Summary)
This section is for policy advisors and staffers to understand the core arguments at a glance.
- The Problem: Human detection of deepfakes has dropped to 55% – barely better than the 50% expected by pure chance. Voluntary roadmaps from December 2025 have failed; 2026 audits show that 64% of entities provide incomplete AI transparency logs.
- The Risk: The “Liar’s Dividend” allows authentic evidence to be dismissed as fake, while “micro-synthetic targeting” automates the engineering of consent.
- The Solution: Codify the “Right to Orientation” through mandatory, high-contrast labelling at the point of impact.
- Scope: Regulation must target four pillars: Political Messaging, Product Advertising, Creative Works, and Platform-Level Algorithmic Delivery.
Part II: Open Letter to all Australians
The Architecture of Belief
Every Australian deserves the right to know what is real and what is not. Just as warning labels are applied to food, alcohol, drugs, gambling, adult content, and machinery, so too do Australians need disclosure to make informed decisions.
We used to say “seeing is believing,” but in the winter of 2026, seeing has become a form of exhaustion. To live in Australia today is to experience a quiet, pervasive shift in how we perceive the world around us. We are no longer simply facing a crisis of disinformation; we have entered a profound “crisis of knowing.”

The sheer volume of synthetic media has made the daily effort of verification psychologically unsustainable for the average citizen.
We are not asking for mandatory AI labelling because we fear technology. We are asking for it because we value human agency. What we face is a subtle erosion of our shared reality, in which determining whether we are interacting with a human soul or a mathematical probability has become a burden we each carry alone.
Australia is entering an era in which the average citizen can no longer reliably distinguish between authentic human communication and synthetic simulation. Mandatory labelling of AI-generated content is now a matter of democratic infrastructure.
The Engine of Simulation
Almost a century ago, in 1928, Edward Bernays wrote Propaganda, describing an “invisible government” that sifted through information to narrow our choices. But Bernays’ invisible governors were human beings. They had a pulse, a reputation, and a natural, physical limit to their scale.
“The conscious and intelligent manipulation of the organised habits and opinions of the masses is an important element in democratic society…” (Edward Bernays, 1928).
In 2026, that invisible government is a GPU cluster. The machinery of lobbying has fundamentally evolved from a tool of human persuasion into an environment of automated simulation. Driven by “micro-synthetic targeting,” modern political influence no longer relies on a single argument. Instead, high-budget lobbying firms deploy synthetic grassroots campaigns, utilising AI to generate tens of thousands of unique, artificial personas that perfectly mimic a user’s specific demographic.
When capital can buy a million synthetic voices that sound exactly like our neighbours, the democratic bedrock of “one person, one vote” is quietly dismantled. Mandatory, high-contrast labelling is the minimum structural safeguard required to preserve market integrity and democratic accountability.
Truth as Infrastructure
Truth is not merely a matter of personal opinion. Truth is infrastructure. It is the essential scaffolding upon which our markets, our courts, and our elections run. You cannot drive a car on a road made of ghosts.
Anthropologically, a functioning community relies on a shared transcript of reality to survive. Yet unlabelled synthetic media replaces this with hyper-partisan alternate realities, acting as a radicalisation vector. Allowing unlabelled AI-generated content to impersonate reality is the exact equivalent of allowing counterfeit currency to flow unchecked into our economy.
The Illusion of Voluntary Compliance
We must confront the demonstrable failure of our current regulatory approach. The Policy for Responsible Use of AI in Government 2.0 (Dec 2025) relies heavily on voluntary transparency. However, early 2026 audits reveal that 64% of entities using AI for public-facing communication have transparency logs that are either incomplete or inaccessible.
This leniency fuels the “Liar’s Dividend”. Bad actors are actively weaponising public scepticism: politicians now successfully dismiss real, incriminating evidence by simply claiming it is AI-generated. We cannot ask citizens to manually verify every interaction, especially when emerging research indicates synthetic media can measurably induce false memory formation in a significant minority of viewers.
This is not about censorship. At the heart of our democracy lies a fundamental “Right to Orientation”—the basic, moral right of every individual to know whether they are interacting with a human soul or a mathematical probability.
We recognise that defining thresholds for synthetic alteration, enforcement mechanisms, and exemptions for satire or artistic experimentation will require careful legislative drafting. Complexity is not an excuse for inaction; it is a reason for clarity.
The Mandate for Disclosure
We propose that mandatory, high-contrast labelling requirements apply across four domains:
- Political Messaging: Mandatory marking for all synthetic personas or voices in government or campaign communication.
- Product Advertising: Disclosure for synthetic “lifestyles” or results that are mathematical optimisations rather than physical realities.
- Creative Works (Visual Art & Music): Protection of human labour through distinct origin labelling.
- Platform-Level Algorithmic Delivery: Mandatory “Point of Impact” labelling for all social media and search platforms, targeting the algorithmic delivery of synthetic content.
The Invisible Harvest: Beyond the Screen
We must acknowledge that deepfakes are merely the visible tip of a deeper infiltration. For over a decade, AI has quietly embedded itself into Australian life—tracking movements, harvesting data, and profiling lifestyle proclivities through “Terms and Conditions” that no human can reasonably negotiate. Algorithmic systems have shaped information exposure through large-scale data collection and behavioural profiling, operating largely beyond meaningful individual consent.
This harvested data is then fed back into the “Engine of Simulation,” creating a feedback loop where algorithms know what we fear before we do. When we demand labelling, we are demanding structural transparency.
The Call to Action
“We urge the Parliament to move beyond voluntary roadmaps and codify the ‘Right to Orientation’ through mandatory, high-contrast labelling of all synthetic political and commercial content.”
This is not a radical request; it is a foundational one. In Australia, we do not leave safety to the “best intentions” of industry. We apply warning labels to food, alcohol, drugs, and machinery because an informed citizen is a free citizen.
Disclosure is the only mechanism that allows Australians to make informed decisions in a synthetic age. Without it, the infrastructure of truth collapses.
We must ensure that when an Australian looks at their screen, they are looking through a window, not a wall.

Sources Used:
1. The Psychological Frontier: Epistemic Agency
- The “Liar’s Dividend” (Chesney & Citron, 2025/26 update): As public awareness of deepfakes peaks in 2026, a secondary harm has emerged: “The Liar’s Dividend.” This allows political figures to dismiss authentic, inconvenient evidence as “AI-generated,” effectively paralysing the public’s ability to hold power to account.
- Cognitive Load and Memory (ResearchGate, 2025): Peer-reviewed studies indicate that synthetic media induces False Memories in approximately 22% of viewers. The human brain is not evolved to distinguish between biological and synthetic social cues at the current 2026 fidelity levels.
- The Illusory Truth Effect: Repeated exposure to unlabelled synthetic content creates a “gut-level” belief in the information, even after the content is debunked. The labelling must happen at the point of impact to prevent this psychological anchoring.
2. The Sociological & Anthropological Impact
- Social Cohesion (GNET, 2026): Deepfakes serve as a “radicalisation vector” by creating hyper-partisan alternate realities. When a community cannot agree on a “shared transcript” of reality (e.g., what was said at a Town Hall), the anthropological basis for a “tribe” or “nation” dissolves into fragmented, warring silos.
- Weaponised Influence: The evolution of Edward Bernays’ “Engineering of Consent” (1947) has moved from mass media to “micro-synthetic targeting.”
- Traditional Propaganda: One message for everyone.
- AI Propaganda (2026): 10,000 unique, synthetic personas (Astroturfing) designed to mimic a user’s specific demographic, creating a false “social consensus.”
3. The Australian Legal & Political Landscape (February 2026)
- The Regulatory Gap: The Australian Government’s Policy for Responsible Use of AI in Government 2.0 (Dec 2025) relies heavily on “Voluntary Transparency Statements.”
- Failure of Voluntary Frameworks: Audits in early 2026 show that 64% of entities using AI for public-facing communication have “incomplete or inaccessible” transparency logs.
- The Privacy Act Intersection: While the Privacy Act updates (Tranche 1, Dec 2026 deadline) address data privacy, they do not mandate the Source Disclosure of synthetic media used in political lobbying or social influence.
- Lobbying and Money: There is currently no federal “Truth in Advertising” law for political content in Australia. This allows high-budget lobbying firms to deploy “Synthetic Grassroots” campaigns without disclosing that the “citizens” in the videos are AI-generated.
4. Ethical & Philosophical Imperatives
- The Right to Orientation: In a democracy, a citizen has a moral “Right to Orientation” – to know whether they are interacting with a human soul or a mathematical probability.
- Truth as Infrastructure: Truth is not an “opinion”; it is the infrastructure upon which markets, courts, and elections run. Allowing unlabelled synthetic content is equivalent to allowing “counterfeit currency” into the economy.
This article was originally published on Mindscapes.

Further viewing:
https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
https://www.youtube.com/channel/UCSuHzQ3GrHSzoBbwrIq3LLA
https://edition.cnn.com/2025/10/07/business/openai-nvidia-bubble-nightcap