
Detecting deepfakes during election campaigns

Monash University Media Release

The generation of deepfakes during election campaigns is becoming more sophisticated, with advanced techniques in:

  • Human-centred artificial intelligence
  • Audio-visual deepfakes
  • Computer vision

Associate Professor Abhinav Dhall, from Monash University’s Department of Data Science & AI in the Faculty of Information Technology, tells us how to detect deepfakes and what precautions can be taken against AI-generated misinformation.

“The use of generative AI makes it easier to produce legitimate election campaign content, but it also makes it easier and faster for miscreants to generate and spread misinformation or disinformation, as we have seen recently during elections around the globe, including in the United States and India.

“AI-generated audio and video deepfakes are commonly distributed through social media and chat platforms such as X, Facebook, Instagram, TikTok, WhatsApp and others. They spread rapidly due to algorithm-driven recommendations and mass sharing.

“Most social media platforms do not check whether an audio clip, image or video is a deepfake when the content is uploaded, yet such checks would be an important step in curbing the spread of deepfakes. While some platforms are investing in detection tools, enforcement remains inconsistent. It is therefore important to cross-check information across multiple trusted media outlets and platforms that use appropriate validation tools.

“Deepfake generation programs are available as apps and open-source tools, which means a perpetrator can create high-quality deepfakes in multiple languages. Thankfully, deepfake detectors are also improving rapidly and can identify fakes generated by a wide variety of generative AI methods.
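To illustrate how such detectors are typically applied in practice (a rough sketch only, not a workflow described in the release), scoring a clip usually comes down to sampling frames and classifying each one with a pretrained model. The model identifier below is a placeholder, and the label names depend entirely on whichever detector is substituted.

    # Minimal sketch, not a vetted tool: sample frames from a clip and score
    # them with a pretrained image classifier. "some-org/deepfake-detector"
    # is a placeholder model id; the label names ("fake"/"real") are an
    # assumption and depend on whichever detector is actually loaded.
    import cv2
    from PIL import Image
    from transformers import pipeline

    detector = pipeline("image-classification", model="some-org/deepfake-detector")

    cap = cv2.VideoCapture("campaign_clip.mp4")
    frame_no, flagged, checked = 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_no += 1
        if frame_no % 30:  # sample roughly one frame per second at 30 fps
            continue
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        top = detector(image)[0]  # highest-scoring label for this frame
        checked += 1
        flagged += top["label"].lower() == "fake"
    cap.release()
    print(f"{flagged}/{checked} sampled frames flagged as possibly synthetic")

A per-frame score like this is only a starting point; serious verification still means cross-checking the clip against trusted outlets, as noted above.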

“In some cases it is possible to detect deepfakes by inspection. This kind of content-generation software often leaves subtle flaws in both audio and visual details. By closely examining a video, viewers may notice inconsistencies such as poor lip synchronisation, missing teeth, unnatural eye blinking, uneven lighting on the face, or a lack of facial expressions. Similarly, audio may contain artefacts such as a robotic-sounding voice or a lack of natural emotion, which can indicate that the video may be a deepfake.
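The blinking cue in particular lends itself to a simple automated check. The sketch below (an illustration under stated assumptions, not a Monash tool) estimates blink rate from a clip using the common eye-aspect-ratio heuristic; the MediaPipe landmark indices and the 0.2 "closed eye" threshold are assumptions that should be verified against the library's documentation.

    # Minimal sketch: estimate blink rate from a video as one weak deepfake cue.
    # Assumes opencv-python and mediapipe are installed. The eye landmark
    # indices are the ones commonly used with the MediaPipe face mesh, and the
    # 0.2 threshold is a heuristic -- both are assumptions to tune per video.
    import cv2
    import mediapipe as mp
    import numpy as np

    RIGHT_EYE = [33, 160, 158, 133, 153, 144]   # p1..p6 for the EAR formula
    LEFT_EYE = [362, 385, 387, 263, 373, 380]
    EAR_CLOSED = 0.20

    def eye_aspect_ratio(p):
        # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean a closed eye.
        return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (
            2 * np.linalg.norm(p[0] - p[3]))

    def blink_rate(video_path):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25
        mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
        blinks, closed, frames = 0, False, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue
            lm = res.multi_face_landmarks[0].landmark
            h, w = frame.shape[:2]
            def pts(idx):
                return [np.array([lm[i].x * w, lm[i].y * h]) for i in idx]
            ear = (eye_aspect_ratio(pts(RIGHT_EYE)) + eye_aspect_ratio(pts(LEFT_EYE))) / 2
            if ear < EAR_CLOSED and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= EAR_CLOSED:
                closed = False
        cap.release()
        minutes = frames / fps / 60
        return blinks / minutes if minutes else 0.0

    # Adults typically blink roughly 15-20 times a minute; a rate far outside
    # that range is only a weak hint, never proof, that footage is synthetic.
    print(blink_rate("speech_clip.mp4"))

Even then, an automated cue like this complements rather than replaces the manual checks described above.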

“Videos that blend real, unaltered footage with deepfake content are significantly harder for viewers to detect. Even minor alterations, such as changing specific words in a speech, can completely distort the meaning of a statement, making the manipulation more convincing. These types of deepfakes pose a greater challenge as they exploit genuine elements to enhance credibility, making detection and verification even more difficult. Current research is looking to develop solutions for these complex scenarios.”
