By Niharika (Y11)

Everyone knows about Instagram, the acclaimed content-sharing app filled with a myriad of features that allow its users to show off and make connections. Over the years, Instagram has become a stronghold in its industry, practically synonymous with social media itself and the prime example of all it has to offer. From posts and interaction to private messaging, it truly has it all. As such, it has a large user base consisting predominantly of teens, who use it as a regular source of news and communication.

Among the many debates surrounding the protection of young people, the negative impacts of social media on teenagers, who are often more vulnerable to scams and misinformation, have always been at the forefront of discussion. From government-mandated child safety laws to individual corporations’ procedures for creating safer environments for children, there have been many attempts to tackle this issue. Instagram recently contributed to this effort by announcing a new child safety policy aimed at protecting its youngest users from harmful content and the darker side of social media. In September 2024, the company introduced stricter content moderation for users under 16: algorithms will actively filter hate speech, bullying, and explicit material, ensuring young users encounter safer digital spaces. This move comes amid growing concerns over the rise of online toxicity and the effects of social media on teenagers. The policy also brings features like content alerts, parent-controlled activity logs, and tighter restrictions on direct messaging, giving caregivers more oversight and striving to “better support parents and give them peace of mind that their teens are safe with the right protections in place”.

These measures reflect Instagram’s acknowledgment of its role in shaping online environments for children. As hate speech becomes an increasingly significant issue, Instagram’s new policy is a proactive step. Social media has long been rooted in toxicity, with younger generations effectively raised on these apps. To prevent them from being severely affected, steps are being taken to protect adolescents from harmful or hateful content. However, whether this move addresses the deeper issues surrounding social media and children’s use of it is a matter of debate.

This is only one side of the argument. Instagram’s algorithm-driven moderation is a promising solution, but it is not without flaws. Many critics argue that it will not truly prevent vulnerable children from facing the worst effects of social media, especially on a platform as notoriously toxic as Instagram. Have child-safety policies not been implemented before? Despite such measures, children’s exposure to pernicious content has only increased in recent years, and past policies have shown little sign of efficacy. This would be no different. Furthermore, it would not stop curious children from creating over-18 accounts without their parents’ knowledge, which could make the app even more dangerous for them.

Hate speech has grown rampant on social media platforms, fuelled by anonymity and lax content moderation. The implications are numerous: studies link exposure to hate speech to increased anxiety, depression, and behavioural issues in children and adolescents. This trend has left vulnerable groups, including children, exposed to harmful rhetoric, and it will continue to do so as long as users are allowed to popularise harmful ideologies through likes and reposts. After all, negative rhetoric always spreads more widely and reaches further than the positive messages these apps strive to publicise. Critics also argue that algorithms could miss context or inadvertently censor legitimate expression. While this policy may be a step in the right direction, a broader, industry-wide effort to combat online hate is necessary.

The debate over Instagram’s policy inevitably touches on the cultural phenomenon of “iPad kids”: children who spend a significant amount of time on screens, often as a result of modern parenting dynamics. For many parents, digital devices serve as convenient tools for distraction or education, and many children encounter social media before they turn eight, exposed to the overwhelming amount of content available on these platforms. However, excessive screen time has been linked to developmental delays, reduced social skills, and emotional challenges.

Critics of the “iPad kid” culture often blame parents for failing to set boundaries. However, this criticism overlooks the fact that devices, and by extension social media apps, have become deeply embedded in daily routines. In an era where technology is integrated into every aspect of life, limiting children’s screen exposure requires systemic support, not just individual effort. It is easy to point fingers at parents for allowing excessive screen time, but the issue is more nuanced. The debate surrounding “iPad kids” and bad parenting should shift toward addressing the structural issues that lead to over-reliance on technology. Society as a whole, parents included, should take an active stance on reducing children’s screen time, focusing instead on fostering relationships and physical activity. By promoting a dialogue that emphasises collaboration rather than blame, society can ensure children grow up in a healthier environment.

Instagram’s new child safety policy is a timely intervention, but it cannot single-handedly address the broader challenges of hate speech and digital dependency. Tackling these issues requires a collaborative approach involving tech companies, educators, policymakers, and parents. Shining a light on these problems and working toward reasonable solutions is a step in the right direction, but it is just one piece of the puzzle in creating a safer and more nurturing online world for the next generation.