Image from YouTube (Video uploaded by Deepfakery)
The generation of deepfakes during election campaigns is becoming increasingly sophisticated.
Associate Professor Abhinav Dhall, from the Department of Data Science & AI in Monash University's Faculty of Information Technology, explains how to detect deepfakes and what precautions can be taken against AI-generated misinformation.
“The use of generative AI makes it easier for legitimate election campaigning content to be produced, but it also makes it easier and faster for miscreants to generate and spread misinformation or disinformation, as we have seen recently during elections across the globe, including in the United States and India.
“AI-generated audio and video deepfakes are commonly distributed through social media and chat platforms such as X, Facebook, Instagram, TikTok and WhatsApp. They spread rapidly due to algorithm-driven recommendations and mass sharing.
“Most social media platforms do not check whether an audio clip, image or video is a deepfake at the time the content is uploaded, even though screening at upload would be an important step in curbing the spread of deepfakes. While some platforms are investing in detection tools, enforcement remains inconsistent. It is therefore important to cross-check information across multiple trusted media outlets and platforms that use appropriate validation tools.
“Deepfake generation programs are available as apps and open-source tools, with which a perpetrator can create high-quality deepfakes in multiple languages. Thankfully, current deepfake detectors are also improving rapidly and can identify fakes produced by a wide variety of generative AI methods.
“In some cases it is possible to detect deepfakes, because this kind of content generation software often leaves subtle flaws in both audio and visual details. By closely examining a video, viewers may notice inconsistencies such as poor lip synchronisation, missing teeth, unnatural eye blinking, uneven lighting on the face, or a lack of facial expressions. Similarly, audio may contain artefacts such as a robotic-sounding voice or a lack of natural emotion, which can indicate that the video may be a deepfake.
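To make one of these visual cues concrete, below is a minimal, illustrative sketch of measuring blink behaviour in a video using the well-known eye aspect ratio (EAR) heuristic. This is not Professor Dhall's method or a production detector: the dlib landmark model file, the EAR threshold and the blink-rate interpretation are assumptions made for the sake of the example.

```python
# Illustrative sketch: flag videos with an unusually low blink rate,
# one of the deepfake cues described above. Assumes OpenCV, dlib and
# SciPy are installed and the 68-point landmark model file is present.
import cv2
import dlib
from scipy.spatial import distance as dist

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local file
EAR_THRESHOLD = 0.21  # below this the eye is treated as closed (heuristic)
LEFT_EYE = list(range(42, 48))
RIGHT_EYE = list(range(36, 42))

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(points):
    # Ratio of vertical to horizontal eye opening; drops sharply on a blink.
    a = dist.euclidean(points[1], points[5])
    b = dist.euclidean(points[2], points[4])
    c = dist.euclidean(points[0], points[3])
    return (a + b) / (2.0 * c)

def blink_rate(video_path):
    # Returns blinks per minute of face-visible footage.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    blinks, closed, face_frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        face_frames += 1
        shape = predictor(gray, faces[0])
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < EAR_THRESHOLD:
            closed = True
        elif closed:  # eye reopened: count one completed blink
            blinks += 1
            closed = False
    cap.release()
    minutes = face_frames / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

Adults typically blink roughly 15 to 20 times a minute, but the rate varies with person and context, so a low figure is only a weak signal, best combined with the other visual and audio cues described above.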
“Videos that blend real, unaltered footage with deepfake content are significantly harder for viewers to detect. Even minor alterations, such as changing specific words in a speech, can completely distort the meaning of a statement, making the manipulation more convincing. These types of deepfakes pose a greater challenge as they exploit genuine elements to enhance credibility, making detection and verification even more difficult. Current research is looking to develop solutions for these complex scenarios.”
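For the word-swapping case specifically, one simple verification aid is to compare a suspect clip's transcript against an official or independently published transcript. Below is a minimal sketch assuming both transcripts are already available as text; the transcripts and the example sentences are hypothetical.

```python
# Illustrative sketch: surface word-level differences between an official
# transcript and the transcript of a suspect clip. Producing the
# transcripts (e.g. via speech-to-text) is outside this sketch.
import difflib

def flag_word_changes(official: str, suspect: str):
    # Yield (operation, official_words, suspect_words) for each differing span.
    a, b = official.split(), suspect.split()
    matcher = difflib.SequenceMatcher(None, a, b)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            yield op, a[i1:i2], b[j1:j2]

# Hypothetical example sentences:
official = "I will never support cuts to the health budget"
suspect = "I will always support cuts to the health budget"
for op, was, now in flag_word_changes(official, suspect):
    print(f"{op}: {' '.join(was)!r} -> {' '.join(now)!r}")
# prints: replace: 'never' -> 'always'
```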