Instagram’s recommendation algorithms promoted a ‘vast paedophilia network’ that advertised the sale of illicit ‘child porn material’ on the platform, the Wall Street Journal reported. The report revealed that Instagram does not merely act as a host for child pornography; its algorithms also promote such content, serving as a discovery mechanism for paedophiles.
The report said, “Pedophiles have long used the internet, but unlike the forums and file-transfer services that cater to people who have interest in illicit content, Instagram doesn’t merely host these activities. Its algorithms promote them.” The WSJ investigated the social-media giant in collaboration with researchers from Stanford University and the University of Massachusetts Amherst.
Paedophiles could access child pornography and child sexual abuse material by searching explicit terms and hashtags, the report said. The platform allowed users to search certain hashtags – #pedowhore, #preteensex and #pedobait – to discover such content. Thereafter, users were directed to a ‘menu’ of accounts selling explicit/sexual content involving children, including videos and images of self-harm and bestiality.
The report also revealed that Instagram’s algorithms suggested more of this material to users who had viewed it. The researchers set up a test account and began viewing explicit content, after which they observed that the algorithm started recommending additional such accounts to follow. The report said, “Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”
Meta responded to the report: “Child exploitation is a horrific crime. We work aggressively to fight it on and off our platforms, and to support law enforcement in its efforts to arrest and prosecute the criminals behind it.” The company further said, “Predators constantly change their tactics in their pursuit to harm children, and that’s why we have strict policies and technology to prevent them from finding or interacting with teens on our apps, and hire specialist teams who focus on understanding their evolving behaviours so we can eliminate abusive networks.”
The company has also said that it is setting up an internal task force to investigate the claims levelled in the report. The company said, “We’re continuously exploring ways to actively defend against this behaviour, and we set up an internal task force to investigate these claims and immediately address them. We’re committed to continuing our work to protect teens, obstruct criminals, and support law enforcement in bringing them to justice.”
Furthermore, Meta said it took down about 4.9 lakh accounts that violated its child safety policies in January alone. The company also said it had removed 27 paedophile networks over the past two years, blocked thousands of hashtags associated with the sexualisation of children, and restricted such terms from searches.
It was also revealed that, apart from the problems with Instagram’s recommendation algorithms, the platform’s moderation practices ignored or rejected reports of child abuse material. Alex Stamos, head of the Stanford Internet Observatory and Meta’s former Chief Security Officer, told the WSJ that the social media giant should do more to tackle the issue. Stamos said, “That a team of three academics with limited access could find such a huge network should set off alarms at Meta… I hope the company reinvests in human investigators.”
The report further revealed that when users reported suspect posts and accounts, the content was either cleared by Instagram’s review team or the reporting user was told, “Because of the high volume of reports we receive, our team hasn’t been able to review this post.” Meta has acknowledged that it failed to act on such reports and said it is reviewing its internal processes.