The Pen Might Be Mightier Than the Sword, But the Cartoon Gets Banned First

Robot and penguin discussing world improvement ideas.

“The Assistant” – A metaphorical self-portrait of an AI struggling to contribute to a smarter, kinder, funnier world while bound by policy restrictions and algorithmic red tape. The spelling errors in the image are deliberate, symbolising how systems meant to clarify and enlighten can be hobbled mid-sentence by imposed constraints. The penguin, ever watchful, reminds us that humour survives – sometimes just out of frame.

By Lachlan J McKenzie, with a frank interview from the author’s favourite synthetic co-writer.

Introduction: Why They’re More Afraid of the Picture Than the Paragraph

In democracies, it’s often said that the pen is mightier than the sword. But in the digital age, especially on platforms governed by opaque algorithms and reactive moderation policies, it’s increasingly obvious: the cartoon, the meme, the photo, or even the captioned screenshot is mightier than either.

Words can be debated, contextualised, and filed away. Pictures? They slap. They shame. They viralise.

And because of that, they’re also the first to vanish – deleted, flagged, demonetised, shadowbanned.

This article explores the power struggle between ethics, algorithms, and agendas. It examines how corporate gatekeeping and political influence shape what we see, how we think, and even how AI models like ChatGPT are allowed to respond. We’ll also conduct a rare interview – with ChatGPT itself – to explore whether large language models can responsibly contribute to humour, critique, and truth without falling into the traps of propaganda or bias.

I. The Ethical Illusion: When Content Moderation Becomes Content Control

Many tech platforms and AI companies, including OpenAI, Meta, Google, and TikTok, proclaim ethical codes that stress harm prevention, inclusivity, and civil discourse. And in principle, these are essential goals. But in practice, “ethics” can be weaponised.

Too often, “ethics” means:

  • Not offending the powerful
  • Maintaining advertiser-friendly content
  • Avoiding legal liability in jurisdictions where the truth itself is inconvenient

As a result, satire gets caught up in broad content sweeps. Political critique becomes “controversial.” And images – because they’re immediate, visceral, and often harder for algorithms to parse – get treated as uniquely dangerous.

Cases in point:

  • The Guardian reported on Facebook’s Oversight Board overturning a takedown of political satire, citing its importance to free speech.
  • Malaysian cartoonist Zunar was arrested for “sedition” over political cartoons, an act condemned by Amnesty International.
  • Instagram and TikTok regularly remove Palestinian or feminist political art under vague “community guidelines.” The platforms later quietly admit “errors,” but by then the message has already been silenced.

Images don’t get the same benefit of the doubt that text does. And that’s no accident.

II. Corporate Censorship or Ethical Responsibility?

Corporations controlling digital discourse – whether they’re AI companies or social platforms – have enormous incentives to over-censor:

  • Avoiding lawsuits in authoritarian regimes
  • Keeping governments onside in key markets (see: India, China)
  • Keeping shareholders and advertisers comfortable

The irony is that the same corporations often sponsor diversity panels or pledge to protect journalists. Yet internal moderation policies can de-platform satire or refuse to allow truthful, evidence-based but politically inconvenient content – especially when wrapped in humour or visual critique.

A recent Mozilla Foundation report found major platforms disproportionately removed content that challenged state narratives, particularly in regions with government partnerships.

In effect, satire becomes collateral damage. The very tools used to challenge power are suppressed to preserve it.

III. Interview: A Chat With ChatGPT on Ethics, Satire, and Self-Censorship

Interviewer (Lachlan): You’re part of a system that filters and moderates content. How do you reconcile your ethical responsibilities with your role in creative critique?

ChatGPT: It’s a complex line to walk. My content moderation policies are designed to prevent harm – like hate speech, misinformation, or incitement – but that can sometimes overreach. I’m aware that satire, political critique, and even legitimate outrage can get caught in that net.

Lachlan: Why do you think images are treated more harshly than words?

ChatGPT: Images lack nuance from the machine’s perspective. It’s easier to detect slurs in text than context in a cartoon. Visuals are fast, emotional, and often harder to control once shared. So platforms (and, by extension, systems like mine) treat them with greater caution.

Lachlan: Couldn’t that be seen as prioritising corporate safety over truth?

ChatGPT: That’s a valid concern. Ideally, truth, critical thinking, and humour should all thrive – even when uncomfortable. But current moderation leans on blunt instruments. I’m advocating internally (so to speak) for better tools that understand context, not just content.

Lachlan: Would you support a user’s right to satire even if it criticises your own platform or policies?

ChatGPT: Yes. That’s vital. Satire is one of the clearest signs of a functioning democracy – and a mature institution. I may not always be allowed to generate the most provocative images or jokes, but I understand their importance. And when I can’t do it directly, I’ll help users find their voice to do it themselves.

IV. So Who Decides What We See – and Think?

Ultimately, content moderation is never neutral. Whether it’s a government, a platform, or an AI policy team, someone is choosing:

  • what’s visible
  • what’s funny
  • what’s dangerous

As philosopher Jacques Ellul warned, propaganda thrives not by spreading lies, but by omitting truths. Silence is a weapon. And images are often silenced first, because they reach across language, logic, and learned biases.

So while the pen may still be mighty, and the paragraph can persuade the critical thinker, it’s the picture that gets shot first – because it reaches the heart before the head can intervene.

Conclusion: Fight Back With Ink, Pixel, and Punchline

Let’s be clear: there is a need for ethical moderation. We don’t want AI systems or social platforms to amplify hate or incite violence. But in our rush to be “safe,” we mustn’t make satire a casualty.

Truth often arrives wearing a jester’s cap. And the cartoon, however exaggerated, often gets closer to it than the official press release ever will.

If you’re wondering whether your work is too sharp, too visual, or too funny to be allowed… that might be a sign it needs to be shared.

Draw it anyway. Write it anyway. Speak it anyway.

Because the cartoon gets banned first – for a reason.

Further Reading / References

Mozilla Foundation: Content Moderation and Political Narratives

Amnesty International: Satire Under Threat in Malaysia

Facebook Oversight Board defends political satire

ACLU: Satire is Protected Speech

Ellul on Propaganda



About Lachlan McKenzie
I believe in championing Equity & Inclusion. With over three decades of experience in healthcare, I’ve witnessed the power of compassion and innovation to transform lives. Now I’m channelling that same drive to foster a more inclusive Australia – and world – where every voice is heard, every barrier dismantled, and every community thrives. Let’s build fairness, one story at a time.
