Digital duty of care needed to mandate safeguards against online sexual abuse of children on Australian screens


IJM Australia Media Release

eSafety’s transparency report published yesterday reveals the gaps in Telegram and Reddit’s detection and deterrence of child sexual exploitation and abuse on their services.

Failures to detect new child sexual exploitation and abuse images and videos should urgently be addressed to stop the circulation of this illegal material and protect the children depicted who may be at immediate risk of harm.

A legislated digital duty of care could require tech companies to ensure measures are in place to mitigate new child sexual abuse material being created or shared on their platforms, and to report on and improve these measures if they do not adequately address this risk.

Telegram and Reddit provided responses to eSafety Commissioner Julie Inman Grant’s questions about how they respond to the circulation of new images and videos of child sexual abuse material on their services over the April 2023 – February 2024 reporting period.

Whilst Reddit reported that it uses text classifiers to detect new images and videos, which may depict children who are presently in abusive situations, the eSafety Commissioner noted that its failure to use tools such as nudity detection and age estimation on new images and videos “may mean key indicators of CSEA are missed”.

IJM affirms that Reddit should do more to detect CSEA in new images and videos by employing commercially available AI tools, such as nudity detection and age estimation, to deter this content from being circulated and to protect children from abuse.

Reddit stated that once CSEA is detected, the content is blocked/removed and the account is permanently banned; an enforcement ticket is created and prioritised for human review; and depending on the outcome of human review, the company may make a report to the National Center for Missing and Exploited Children and take further enforcement action, including account sanctions.

After eSafety fined Telegram more than $950,000 for missing the reporting deadline by over five months, the company provided the requested information in response to eSafety’s transparency notice.

Telegram stated it uses an internal hash matching system to detect known CSEA images, except on Chats and Secret Chats, and is in the process of joining the Internet Watch Foundation’s safety programs to gain access to its hash lists.
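For readers unfamiliar with the technique, hash matching compares a digital fingerprint of an uploaded file against a list of fingerprints of already-identified abuse images, such as the Internet Watch Foundation hash lists referred to above. The sketch below is purely illustrative and not a description of Telegram’s internal system, which is not public: it assumes a simple cryptographic (SHA-256) hash and a placeholder hash list, whereas industry deployments typically use perceptual hashes (for example PhotoDNA) that also match re-encoded or slightly altered copies.

import hashlib
from pathlib import Path

# Hypothetical list of known-image hashes. In practice this would be an
# industry hash list such as the IWF's; the value below is a placeholder.
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_image(path: Path) -> bool:
    """Flag a file whose exact bytes match an entry on the hash list."""
    return file_hash(path) in KNOWN_HASHES

# Example: scan every file in an uploads directory and flag matches for review.
for upload in Path("uploads").glob("*"):
    if upload.is_file() and is_known_image(upload):
        print(f"Match found, queue for review: {upload.name}")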

When asked why hash-matching tools were not used on Chats or on Secret Chats user reports, Telegram stated that it was “founded on the principle of defending user privacy and their right to private communication” and that “this commitment prioritizes user privacy above all”. It added that, because of this commitment, the encrypted contents of private chats are always protected, ensuring that the confidentiality of private correspondence is never compromised.

Telegram stated that whilst it uses internal AI and machine learning models to detect new CSEA images and videos in public communications, it does not use them in Chats, Secret Chats, Private Group Chats or Private Channels.

eSafety noted that not using proactive detection tools to identify and review potential CSEA material increases the likelihood that such material will remain undetected and continue to circulate on these parts of the service.

This could place children who are in situations of abuse at further risk by creating a safe haven for perpetrators to share images and videos of child sexual exploitation and abuse without fear of detection.

Telegram stated it relies on alternative signals to assess and prioritise reports made about material in end-to-end encrypted parts of the service. eSafety noted that this may limit Telegram’s ability to review, assess, prioritise, and respond to user reports about harmful and illegal material or activity, such as child sexual abuse material, occurring in Telegram’s Secret Chats.

eSafety further noted that in relation to Secret Chats user reports, methods exist that could enable hash-matching tools to review content reported within these end-to-end-encrypted messages. IJM calls on Telegram to integrate such technologies to detect child sexual exploitation and abuse in Secret Chats.

Because Telegram’s Chats are encrypted with the company’s internal MTProto protocol rather than end-to-end, the company states it already scans them for other illegal material, including terrorist and violent extremist content. IJM asserts this automatic scanning should be expanded to also detect child sexual exploitation and abuse in Chats.

The company further stated it does not proactively detect, or receive user reports of, CSEA in private or public voice and video calls. This is particularly concerning, as it could enable child sex offenders to livestream child sexual abuse using video calls without any risk of detection or reporting. IJM calls on Telegram to urgently address this child protection deficiency in its service.

Whilst Telegram noted that it maintains a dedicated email hotline for user reports of child sexual abuse material, it admitted that its in-service user reporting functionality for child abuse in voice and video calls was not available during the reporting period. The company instead relies on users to report child abuse via the community info section and to provide samples of the objectionable content.

In relation to action taken once CSEA is detected, Telegram stated that detections of known CSEA images and videos through hash-matching resulted in the automated removal “of all users, Communities and publications involved”.

As evidenced by this latest transparency notice from eSafety, tech companies operating in Australia still have a long way to go when it comes to protecting children from online sexual exploitation and abuse on their services.

Tech company compliance with eSafety’s Basic Online Safety Expectations is currently unenforceable; however, eSafety can require service providers to report on the steps they are taking to meet the Expectations, as it did through this transparency notice.

In its initial response to the Online Safety Act review report, the Minister for Communications, Michelle Rowland, announced the Albanese Government would legislate a digital duty of care, underpinned by risk assessment and risk mitigation and informed by safety-by-design principles, which would potentially absorb the Basic Online Safety Expectations currently embedded in the Act.

Beyond transparency, tech companies operating in Australia should be required to prevent child sexual abuse material on their services, including in new images and video. This could be mandated under a legislated digital duty of care, which IJM is presently calling on all Australian parliamentarians to adopt.

 

