Instagram Takes Action to Combat Growing Wave of Sextortion Scams
By Pavethran Batmanathen
Meta, the parent company of Facebook and Instagram, unveiled new initiatives on Thursday to tackle sextortion, a form of online blackmail where perpetrators manipulate victims, frequently teenagers, into sending sexually explicit images of themselves.
These new steps include enhanced restrictions on who can follow or send messages to teen accounts, along with safety alerts in Instagram’s direct messages and Facebook Messenger, particularly concerning suspicious cross-border interactions.
These efforts build upon Instagram’s “Teen Accounts” feature, launched last month, which is specifically aimed at safeguarding young users from the risks of using the photo-sharing platform.
In addition to these measures, Meta is restricting scammers’ ability to view follower lists and interactions, and blocking screenshots within private messages. The company is also rolling out a global nudity protection tool that blurs potentially explicit images in direct messages and warns teens before they send or receive such content.
In select countries, including the United States and the United Kingdom, Instagram will also present teens with a video in their feeds educating them on how to detect sextortion scams. The goal is to help teens recognize red flags, such as someone being overly aggressive, asking for photos, or attempting to move the conversation to another app.
John Shehan, representing the US National Center for Missing & Exploited Children, highlighted the urgent need for these actions, stating, “The alarming rise in sextortion scams is having a profound impact on children and teens. Reports of online enticement have skyrocketed, with a staggering increase of over 300% from 2021 to 2023.”
Shehan praised campaigns like Meta’s, emphasizing that they “bring essential education to help families recognize these threats early on,” as stated in a blog post from Meta announcing the new safety protocols.
The Federal Bureau of Investigation (FBI) sounded the alarm earlier this year, citing the growing prevalence of sextortion, with teenage boys the primary victims and offenders often located outside the United States. Between October 2021 and March 2023, U.S. federal authorities identified at least 12,600 victims, with 20 cases tragically ending in suicide.
Meta’s decision to ramp up protection for young users comes amid mounting global pressure on Mark Zuckerberg’s social media giant and its competitors. In October of last year, a group of forty U.S. states filed a complaint against Meta, accusing its platforms of contributing to the “mental and physical harm” of young people by exposing them to risks such as addiction, cyberbullying, and eating disorders.
Despite these concerns, Meta has for now opted against direct age verification of users, citing privacy considerations. Instead, the company is advocating for legislation that would mandate ID verification at the operating-system level, through platforms like Google’s Android or Apple’s iOS. With these new measures, Meta hopes to give teens an additional layer of protection against the growing problem of sextortion on its platforms, while continuing to balance privacy with user safety.
Source: https://www.thestar.com.my/tech/tech-news/2024/10/18/instagram-moves-to-face-rising-tide-of-sextortion-scams, Picture Credits: TurkNet