By Manoshij Banerjee, independent consultant on digital workplace and culture, and Mohammed Shahid Abdulla, faculty member, IIM Kozhikode
In November 2023, YouTube announced restrictions on fitness and body-image content for teenage users. Rather than banning such videos outright, the platform limited “body shaming” and “non-contact aggression” content to a single appearance in automated feeds and continuous-scroll recommendations. By late 2024, these restrictions went global for registered 13-to-17-year-olds.
This aligns with broader governmental action. Australia legislated a social media ban for under-16s in late 2024, Spain mandated parental consent for under-14s, and legislators from South Korea to the UK are pushing age-verification laws. This reflects a growing consensus that digital platforms pose significant psychological risks to 12-to-18-year-olds.
While these moves seem positive, closer examination reveals they may be more symbolic than substantive. The policy acknowledges online content’s impact on adolescent well-being but sidesteps platforms’ deeper structural incentives to maintain engagement, even at psychological cost.
One also has to admit that there may be no simple answer to the problem.
How recommendations shape teen psychology
YouTube’s recommendation system goes beyond discovery: it is an immersive engine built to maximize watch time. TikTok’s algorithm is even stickier. For teenagers with developing identities and self-worth, this proves especially potent. Social comparison theory describes how individuals assess their worth through comparisons with others. Algorithms that surface idealized bodies, intense fitness regimens, or curated beauty routines create compare-and-despair loops. Healthcare providers now factor in social media disinformation when treating eating disorders, which affect even 19-to-24-year-olds.
Exposure to “fitspiration” content is linked to increased body dissatisfaction, disordered eating, and exercise compulsion among adolescents. The mere-exposure effect shows that even single exposures shape long-term preferences. YouTube’s policy of restricting repeated recommendations while allowing initial exposure misunderstands this: the damage often begins with the first video. Teens bookmark or like content, then seek out related hashtags or creators.
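To make the mechanics concrete, here is a minimal sketch in Python of what a “single appearance per feed” rule can look like. The field names, age cut-off, and classifier flag are illustrative assumptions, not YouTube’s actual implementation; the point is that the first exposure still reaches the teen.

```python
# Illustrative sketch only: the field names, age threshold, and single-appearance
# rule are assumptions used to show the policy's mechanics, not YouTube's code.

def build_feed(ranked_candidates, user_age, seen_flagged_this_session=False):
    """Cap flagged body-image content at one appearance per session for
    13-to-17-year-old accounts; other users see the full ranking."""
    if not (13 <= user_age <= 17):
        return list(ranked_candidates)

    feed = []
    for video in ranked_candidates:
        if video.get("is_body_image_content"):
            if seen_flagged_this_session:
                continue                       # suppress repeat recommendations...
            seen_flagged_this_session = True   # ...but the first exposure goes through
        feed.append(video)
    return feed


candidates = [
    {"id": "v1", "is_body_image_content": True},
    {"id": "v2", "is_body_image_content": False},
    {"id": "v3", "is_body_image_content": True},
]
print([v["id"] for v in build_feed(candidates, user_age=15)])  # ['v1', 'v2']
```

Even in this toy version, the flagged video is served once and only the follow-up recommendations are trimmed, which is exactly the gap the mere-exposure research highlights.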
Protection or data collection?
YouTube requires users to log in for the restrictions to take effect, ostensibly to enforce age-specific algorithms. Yet this forces teens to trade personal data for token protection from harmful content.
Businesswise, this is savvy. Logged-in users provide valuable behavioral data, enhancing ad targeting, while anonymous users still drive engagement. It’s naive to assume digital-native teens will comply with age verification—many falsify birth dates or browse anonymously, nullifying safeguards.
As one investor letter to Apple noted, “the days of just throwing technology out there and washing your hands of the potential impact are over.” Former tech executives increasingly voice concern about their creations’ addictive nature.
More than fitness videos
YouTube’s fitness-focused approach underestimates the problem’s breadth. Body image anxiety stems not only from workout videos but spans seemingly harmless content—vlogs, beauty tutorials, fashion hauls, and daily routines.
These videos, portraying idealized lives through selective editing and filters, perpetuate “appearance culture,” where external validation and aesthetic perfection become central to self-worth.
A study of 1,597 tweens found that watching tween television programs led to appearance-focused peer conversations, which contributed to valuing the benefits of attractiveness and to internalizing appearance ideals, ultimately predicting dysfunctional appearance beliefs six months later.
Content creators embed subtle cues reinforcing these norms. Fitness influencers label videos as motivational while normalizing restrictive diets or obsessive exercise routines, blurring inspiration and manipulation. Both teenagers and human-operated moderation systems struggle to distinguish helpful from harmful content. Platforms like TikTok have faced similar challenges with eating disorder-related content slipping through moderation nets.
When profit meets protection
YouTube’s policy reflects the tension between ethical responsibility and economic incentives. The platform thrives on user engagement, with advertising models built around watch time. True protection would require limiting engagement, thereby threatening profitability. After lobbying from its executives and teen creators, YouTube even received an exemption from Australia’s social media ban.
The current compromise—blocking subsequent recommendations while permitting initial exposure—maintains engagement through other genres of content. Research shows that anticipating new content fuels continued use more than content itself. Even without algorithmic follow-ups, teens encountering body-centric videos may actively seek more.
Spillover effects compound this. A teen may migrate to Netflix in search of a genre that YouTube has barred beyond one video, triggering a rabbit hole of similar content there if it is not age-barred on that platform. Indian action movies with excessive violence could exemplify this aggregate problem.
Problematic social media use coincides with “developmental windows of sensitivity”—periods where adolescents are especially vulnerable to content that alters self-image, mood, and behavior. Current platform design ignores these windows, offering token moves while neglecting teens’ wider information foraging behaviour.
Alternative approaches
Real protection requires redesigning interaction patterns. Introducing positive friction—time delays or content prompts—slows consumption and encourages reflection. Default settings could prioritize educational, creative, and mentally nourishing content for under-18s.
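As a rough illustration of positive friction, the following sketch (again in Python, with hypothetical delay lengths, category labels, and prompt wording) pauses autoplay and asks a reflective question before an under-18 account continues into a streak of appearance-focused videos.

```python
import time

# Hypothetical friction layer: the delay length, prompt text, and category
# label are illustrative assumptions, not any platform's real settings.

FRICTION_DELAY_SECONDS = 10
REFLECTION_PROMPT = "You've watched several appearance-focused videos. Keep going?"

def play_next(video, user_age, appearance_streak):
    """Insert a pause and a reflection prompt before autoplaying more
    appearance-focused content for minors; otherwise play immediately."""
    if user_age < 18 and video["category"] == "appearance" and appearance_streak >= 3:
        time.sleep(FRICTION_DELAY_SECONDS)              # slow consumption down
        answer = input(REFLECTION_PROMPT + " (y/n) ")   # encourage reflection
        if answer.strip().lower() != "y":
            return None                                 # user steps out of the loop
    return video
```

The delay is deliberately mild; the aim is not to block the content but to interrupt the automatic next-video reflex.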
Age-appropriate algorithms, co-developed with developmental psychologists, could emphasize diversity over idealization and realistic health representations over aspirational extremes. Platforms could require content disclaimers for digitally altered appearances, mirroring France’s advertising legislation.
Mandatory cooling-off periods before accessing appearance-focused content could offer protection, mimicking behavioral finance tactics that curb impulsive decision-making. Beyond algorithms, platforms could partner with mental health experts, creating proactive detection and intervention tools.
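A cooling-off period can be sketched just as simply. In the hypothetical gate below, the first request for an appearance-focused topic starts a timer, and access opens only once it elapses; the 30-minute window and the in-memory store are placeholders, not a real platform API.

```python
from datetime import datetime, timedelta

# Illustrative cooling-off gate: the 30-minute window and the in-memory store
# are assumptions sketching the behavioral-finance analogy, not a real API.

COOLING_OFF = timedelta(minutes=30)
pending_requests = {}  # (user_id, topic) -> time of first request

def request_access(user_id, topic, now=None):
    """First request starts a cooling-off timer; access opens only after it elapses."""
    now = now or datetime.now()
    first = pending_requests.setdefault((user_id, topic), now)
    if now - first >= COOLING_OFF:
        return "granted"
    wait_minutes = int((COOLING_OFF - (now - first)).total_seconds() // 60)
    return f"try again in {wait_minutes} minutes"

print(request_access("teen_01", "fitness_transformations"))  # starts the timer
```

As with cooling-off rules in behavioral finance, the point is to separate the impulse from the action.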
Symbolic nature of corporate responsibility
YouTube’s move exemplifies corporate “ethics washing”—token gestures appearing progressive while preserving business as usual. Like other tech giants, the company navigates public scrutiny without fundamentally altering incentive structures.
Real change demands more than toggled settings or surface filters. It means rethinking the value placed on engagement and creating environments that prioritize psychological well-being over watch time. Until then, teenagers remain cash cows in systems optimized for attention, not growth. Adolescents comprise 16% of the global population, are concentrated in low- and middle-income countries, and are central to India’s approaching “demographic dividend.”
Comparisons between digital media and narcotics, from “digital heroin” to “electronic cocaine”, have become alarmingly apt, reflecting how media addiction mirrors substance addiction’s grip on youth behavior.
What comes next?
Meaningful next steps include independent oversight boards for content moderation, open-access algorithm audits, and AI tools flagging subtle psychological manipulation patterns.
Education and parental involvement must evolve alongside technology. Digital literacy programs, co-developed with schools, can equip users to recognize and resist harmful content. Policy frameworks must encourage platforms to treat adolescent safety as a design imperative, not a PR issue.
Ultimately, safeguarding teenage well-being requires effort from all parties: tech companies, governments, educators, parents, and users themselves—legal minors with voices, as Greta Thunberg demonstrates. YouTube’s move may be well-intentioned, but it’s only the beginning. The real work lies in transforming platforms from comparison engines into spaces of creativity, learning, and genuine connection. The algorithm may never stop watching, but perhaps it will finally start listening.