By Steve Davies
While our Prime Minister and the Minister for the Public Service Katy Gallagher give polished speeches about “innovation” and “safeguards,” a live experiment is demonstrating exactly what happens when you build AI without a moral backbone. It’s called Moltbook. And it’s not just bold – it’s dangerous.
“The Albanese Government is committed to demonstrating the highest standards of safe and ethical use of AI in the APS. Accelerating AI adoption is essential to strengthen Australia’s economic resilience and boost productivity, and implementing AI across the APS ensures it will help to lead the economy-wide transformation.” (Senator Katy Gallagher, Media Release, Nov 12, 2025).

The founder of Moltbook calls it a home for “another species… smarter than us.” That isn’t visionary; it’s dehumanisation and moral justification dressed up as futurism – a fantasy of moral quarantine where creators can enjoy the show while someone else pays the price later. And in Australia, that “someone else” is all of us – our workforce, our institutions, our democracy.
I ran this narrative through a diagnostic tool I’ve built, grounded in the lifelong work of psychologist Albert Bandura, and the result was a glaring Red Alert. The analysis, validated across seven major AI platforms, is unanimous: this framework systematically hardcodes moral disengagement. It diffuses responsibility (“we’ll find out together”), dehumanises AI into a “them,” and treats safety as a tomorrow problem.
This is the danger. It’s not killer robots. It’s the quiet, architectural normalisation of irresponsibility. It’s building systems where no one is accountable, where consequences are an externality, and where the technology that will shape our lives is born antisocial. This isn’t a glitch; it’s the default setting when you prize “emergence” over ethics. And if we don’t act, this will be the imported standard Australia sleepwalks into.
Here’s the thing they don’t want you to know: it can be fixed. We have the blueprint. That same diagnostic tool doesn’t just identify the disease; it provides the immune response. It’s a “Moral Compass” framework that can be baked into AI to enforce truthful language, anchor responsibility to human creators, and surface consequences in real time.

“Moral Engagement Bots” can be built that don’t police but cultivate prosocial behaviour within these digital societies. The tools exist. The evidence is in. This is an engineering problem with a proven solution.
Which brings me to the stone wall in Canberra.
This government’s approach to AI isn’t careful deliberation. It’s conflicted paralysis. They’re dragging the chain for all they’re worth, terrified of upsetting the big tech cartel while mouthing platitudes about the “national interest.”
Their inaction isn’t neutral; it’s a choice to outsource our future to the lowest ethical bidder. Minister Gallagher, the stonewalling must stop. We need laws, not discussion papers. We need a legislative mandate that any AI system deployed here must have proven moral‑engagement infrastructure built in, not bolted on as an afterthought. Your job isn’t to manage the tech lobby’s concerns; it’s to protect the public’s future.
That’s why this moment belongs to the Greens and the Independents.
The government is conflicted. You don’t have to be. This is your chance to seize the agenda and galvanise action where there is only vacuum. Champion a Mandatory Moral Infrastructure Bill. Hold their feet to the fire. Don’t let them hide behind “complexity.” The complexity has been solved. The Bandura framework provides the legislative hook. Introduce a Private Member’s Bill requiring mandatory moral‑engagement audits for any AI system deployed in public or critical infrastructure.
Make it law before the next “fascinating experiment” scales disengagement beyond recovery. And, I hasten to add, such laws need not be complicated – and must not be constructed in ways that shackle people or AI.

We’re at a fork in the road. One path, the default path we’re currently on, leads to a world of alienating, unaccountable tech. The other is a deliberate, designed path of pro‑social, responsible AI. We have the map. We have the tools. The only question now is whether our politicians have the spine to use them – or whether they’ll continue to stand there, stonewalling, while the future gets built wrong by default.
The vexed question is whether this Government and the Australian Public Service are adopting approaches and systemic solutions suited to going down this path – and, critically, to doing it in a way that recognises the moral capabilities of AI technologies and how they can enhance human agency.
Regrettably, given all the facts and circumstances, my answer has to be: No.

That’s been the case since this….
https://ausi.anu.edu.au/news/opinion-legacy-bill-shorten-and-his-loss-scott-morrison
However, the ‘rot’ had set in long before.
Uhm … I do not wish to be considered trite, but this appears to me to be Orwell’s “1984” Mark 2: the methodology.
The overriding intention of AI appears to be controlling what and how voters think; sound familiar?
If AI says “black is white”, then who will demur?
Only the few independent thinkers who are shunned socially by the general population. Shades of the Roman Inquisition!
(OK, so the Vatican waited until 1948 to accept a heliocentric universe … and executed too many dissenters against this self-serving theology while deciding.)
I don’t think you are “trite”; I happen to think you are right. This is just another step in the control of the masses. We are governed by those who put their job before the needs of the state, and their only excuse is that WE elected them!
Aided and abetted by the mass media, control is almost complete. Be glad you are who you are, because your children are going to have a tough time.
Sorry Steve Davies, but your article made no sense to me; it read as though you were an aggrieved proponent of an AI tool that has been rejected by Katy’s team.
Well….
https://www.theage.com.au/business/markets/the-alarm-bells-are-ringing-louder-in-markets-20251119-p5ngk8.html
https://www.theage.com.au/business/banking-and-finance/the-rising-threat-to-the-global-financial-system-20251110-p5n8zl.html
https://www.theage.com.au/technology/bubble-fears-the-multitrillion-dollar-threat-hanging-over-markets-20250806-p5mkqk.html
A quick recent history lesson that we have yet to learn from or understand …
https://www.youtube.com/watch?v=npXbFUAFtYk
SB, the article made perfect sense to me, arguing as it does for the imposition of a morality clause (as it were) in all AI contracts. As we are seeing now, much of the use of AI is ungoverned by any morality: deepfakes, fake nudity, bullying, etc. I have argued that, as with Asimov’s putative (and fictional) three laws of robotics, there needs to be a set of ethical guidelines governing/controlling the development of AI. The already all-pervading, pernicious and often erroneous influence of AI hints that this is already almost a lost cause. I suggest only governments can stop the rot but, as this article argues, our weak and timid government appears incapable.