By Steve Davies
On 1 October, I submitted a report on moral engagement to Senator Katy Gallagher. The APSC replied on 2 December – polite, but essentially evasive. During those two months, I consolidated and operationalised years of work into the MEET Programme Materials Directory, with all materials freely available under Creative Commons licensing.
The Core Misunderstanding
The government claims AI can’t analyse moral engagement because it “lacks a moral conscience.” This confuses experiencing morality with detecting its patterns. AI excels at the latter: systematically identifying Bandura’s eight mechanisms of moral disengagement – such as euphemistic labelling, displacement and diffusion of responsibility, and distortion of consequences – in language that humans embedded in institutions often miss.
A thermometer doesn’t feel temperature, yet it measures it accurately. Likewise, AI doesn’t judge right and wrong, but it flags disengagement patterns with a consistency that humans blinded by institutional loyalty can’t match.
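To make the thermometer point concrete, here is a minimal illustrative sketch of how software can flag candidate disengagement language without making any moral judgement. This is not MEET itself: the cue phrases and structure are my own assumptions for illustration only.

```python
# Illustrative sketch only: a toy flagger for candidate moral-disengagement
# language. The mechanism names follow Bandura; the cue-phrase lists are
# hypothetical and are NOT the MEET Programme materials.

import re

# Hypothetical cue phrases for three of Bandura's eight mechanisms.
CUES = {
    "euphemistic labelling": [r"\bcollateral damage\b", r"\bright-?sizing\b"],
    "displacement of responsibility": [r"\bfollowing orders\b",
                                       r"\bthe system decided\b"],
    "distortion of consequences": [r"\bno real harm\b", r"\bminimal impact\b"],
}

def flag_disengagement(text: str) -> list[tuple[str, str]]:
    """Return (mechanism, matched phrase) pairs found in the text.

    Like a thermometer, this only measures: it reports where
    disengagement-style language appears and leaves the moral
    judgement to a human reader.
    """
    hits = []
    for mechanism, patterns in CUES.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                hits.append((mechanism, match.group(0)))
    return hits

if __name__ == "__main__":
    sample = ("The automated scheme caused no real harm; "
              "staff were simply following orders.")
    for mechanism, phrase in flag_disengagement(sample):
        print(f"{mechanism}: '{phrase}'")
```

Even a crude matcher like this illustrates the division of labour: the tool surfaces patterns consistently; the human supplies the conscience.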
Cross-Platform Validation
Seven AI systems (ChatGPT, Claude, Perplexity, Grok, DeepSeek, Gemini, Le Chat) – distinct architectures, with no coordination between them – independently converged on MEET’s five principles:
- Human moral agency remains central.
- AI detects patterns; it does not judge.
- MEET is platform-agnostic.
- Euphemism poses ethical risks.
- Structured human-AI collaboration works.
What Government Misses
The APSC’s response exemplifies the error: responsibility diffused across eight or more systems, no single owner for evaluation, and Robodebt “lessons” cited without any tools to prevent recurrence. What they resist is not AI’s limits, but AI’s capacity to expose normalised disengagement.
Civil Society Leads
While the APS cites “authorising environments”, Sue Barrett’s Democracy Watch AU scores Minister Tim Ayres’ AI policy reversal at 5.9 out of 7 for disengagement. Citizens are now decoding politics with these tools, a form of democratic empowerment that government won’t enable.
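For readers wondering how a single figure like 5.9/7 can summarise a policy document, here is a hypothetical sketch assuming a simple average of per-mechanism ratings on a 1–7 scale. The rubric, weights, and example ratings are my illustration, not Democracy Watch AU’s actual method.

```python
# Hypothetical composite disengagement score on a 1-7 scale.
# This is NOT Democracy Watch AU's actual method; the per-mechanism
# ratings below are invented for illustration.

MECHANISMS = [
    "moral justification", "euphemistic labelling",
    "advantageous comparison", "displacement of responsibility",
    "diffusion of responsibility", "distortion of consequences",
    "dehumanisation", "attribution of blame",
]

def composite_score(ratings: dict[str, float]) -> float:
    """Average per-mechanism ratings (1 = engaged, 7 = disengaged)."""
    return round(sum(ratings[m] for m in MECHANISMS) / len(MECHANISMS), 1)

if __name__ == "__main__":
    # Invented ratings that average to 5.875, printed as 5.9.
    example = dict(zip(MECHANISMS, [6, 7, 5, 6, 6, 6, 5, 6]))
    print(composite_score(example))  # -> 5.9
```

Whatever the real scoring rubric, the point stands: once mechanisms are named and rated transparently, any citizen can audit the arithmetic.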
Explicit Refusal
My email to APSC this morning made my position unambiguous. While individuals within the public service are welcome to use MEET materials under Creative Commons licensing, I explicitly refuse institutional deployment by APSC or any Australian Public Service agency without my direct permission.
This isn’t spite. It’s strategic necessity.
Institutions that have spent two years avoiding frameworks designed to expose moral disengagement patterns cannot be trusted to deploy those frameworks honestly. They would inevitably adopt MEET’s language while gutting its function – creating the appearance of ethical rigour while maintaining the very patterns the framework exposes.
I’ve watched this pattern before: empowering technologies opposed by corporate hierarchy, sophisticated analytical tools reduced to compliance theatre, moral engagement frameworks transformed into reputation management exercises.
Not this time.
MEET represents a fundamental shift: not AI ethics about AI, but AI ethics with AI. Collaborative, transparent, democratic. With humanity firmly at the centre and institutions no longer controlling access to sophisticated analytical capability.
When government gets AI ethics wrong through institutional self-interest, civil society gets it right through democratic necessity.
The frameworks and tools are ready. The work will continue.