Instagram says it will begin notifying parents if their teenagers repeatedly search for suicide or self-harm related terms within a short period, as governments intensify efforts to strengthen online safety protections for minors.
The move comes amid growing international scrutiny of social media use by under-16s, following Australia’s decision to ban access for children below that age, while Britain and several European countries are considering similar restrictions to enhance child protection online.
The platform, owned by Meta Platforms Inc., said parents enrolled in its optional supervision feature will receive alerts if their children attempt to access such sensitive content.
It said in a statement: “These alerts build on our existing work to help protect teens from potentially harmful content on Instagram.
“We have strict policies against content that promotes or glorifies suicide or self-harm.”
Instagram’s existing policy is to block such searches and redirect users to support resources.
It said the alerts would begin next week for users enrolled in the feature in the United States, Britain, Australia and Canada.
Governments are increasingly seeking to protect children from harm online, particularly after concerns over the AI chatbot Grok, which has generated non-consensual sexualised images.
In Britain, measures designed to stop access to pornography sites for children have had implications for adults’ privacy, and have led to tension with the U.S. over limits on free speech and regulatory reach.
Instagram’s teen accounts for under-16s require a parent’s permission to change settings, and parents can add an extra layer of monitoring with their teenager’s agreement.
Reuters

