Synopsis
Social media platforms are now removing non-consensual intimate imagery (NCII) within two hours of a complaint, a significant improvement on the previous 24-hour deadline. The faster takedown, mandated by updated IT rules, applies to a range of sensitive content, including deepfakes and material targeting women and children. While compliance is high, challenges such as false positives and cross-jurisdictional content necessitate human review, stretching timelines.

Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2026, which kicked into effect on February 20, mandated that NCII be taken down within two hours, as opposed to 24 hours earlier. This was part of a wider move to shorten takedown timelines for a broad category of sensitive content, including artificial intelligence (AI)-generated deepfakes or morphed media, nudity or sexual acts, impersonation, and content specifically targeting women and children.
"Compliance audits have shown that significant social media intermediaries (SSMIs) have made major headway on all of these categories by integrating their complaint mechanisms with the NCCRP. This is a major step, since NCII represented a large chunk of user-filed complaints on social media," an official at the Ministry of Electronics and Information Technology (MeitY) said. SSMIs are social media platforms with 5 million or more registered users in India.
According to the latest national statistics released by the National Crime Records Bureau (NCRB) last week, there were 3,288 cases of cybercrime committed specifically against women in 2024 nationwide. This was down from 3,678 in 2023.
But in meetings with the Indian Cyber Crime Coordination Centre (I4C), under the Ministry of Home Affairs, social media companies have flagged that moving beyond this threshold remains a challenge, another official said. False positives and content originating in other jurisdictions necessitate human oversight, which stretches the takedown timeline, they have said.
To combat clearly illegal content such as child sexual abuse material, social media companies deploy automated hash-matching technology, which generates unique digital fingerprints, or hashes, for known abusive media and uses them to scan for duplicates, reducing the need for human reviewers.
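The hash-matching approach described above can be sketched as follows. This is a minimal illustration, not any platform's actual system: production tools rely on perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding, whereas the cryptographic hash used here catches only exact byte-for-byte duplicates. All names and the sample hash database are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known abusive media,
# built once from previously identified files.
known_hashes = {fingerprint(b"known-abusive-media-bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches a known hash,
    so it can be removed without a human reviewer seeing it."""
    return fingerprint(upload) in known_hashes
```

Because lookup is a set-membership check, scanning scales to high upload volumes; the expensive part in practice is curating and sharing the hash database across platforms.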