Evidence-Based Social Media Ratings

kNOw Social Media
Before They Do

"Your children are going to learn from someone. Will it be you — or social media?"

An evidence-based resource helping parents understand the social media platforms in their children's lives — rated, researched, and ready for you to take action.

At a Glance

The 15 most popular social media platforms among U.S. teens, ranked by usage. Data from Pew Research Center (2025) and Piper Sandler (2024).

YouTube
TikTok
Instagram
Snapchat
Roblox
Pinterest
Facebook
Discord
WhatsApp
Reddit
X (Twitter)
Twitch
Threads
BeReal
Kik

How We Rate Each Platform

Every platform is evaluated across eight research-based safety criteria and assigned a traffic-light rating. Here's what each rating means and what we look at.

Red — Significant Concern

Serious safety deficiencies, high exposure to harmful content, weak age verification, or features that put minors at elevated risk. Extreme caution or avoidance recommended.

Yellow — Use With Caution

Some safety tools exist but with notable gaps. Active monitoring, adjusted settings, and ongoing conversations with your child are strongly recommended.

Green — Lower Risk

Stronger safety features, meaningful parental controls, and age-appropriate moderation. Still warrants parental awareness and conversation.

What We Evaluate

The criteria most directly tied to documented harm to children carry the most weight in the final rating: predator and grooming risk, cyberbullying, mental health impact, and content exposure.

🚨

Contact & Predator Risk

Can strangers contact minors directly? How effective are safeguards against grooming?

💬

Cyberbullying & Harassment

What tools exist to report and block bullying? How responsive is moderation?

🧠

Mental Health Impact

Does the platform's design promote addictive usage, social comparison, or negative self-image?

⚠️

Content Exposure Risk

How likely is a minor to encounter harmful, violent, sexual, or emotionally distressing content?

🔄

Algorithmic Concerns

Does the algorithm push progressively extreme or harmful content? Can users control their feed?

🪪

Age Verification

How robust is the sign-up age gate? Can underage users easily bypass restrictions?

🔒

Privacy & Data Collection

How much personal data is collected? Can minors limit sharing? Is location tracked?

👁️

Parental Controls

Does the platform offer meaningful tools for parents to monitor and manage their child's experience?

Full Methodology and Sources

Not all eight criteria carry equal influence on a platform's final rating. The research base documenting direct harm to children is largest and most consistent in four areas: predator and grooming risk, cyberbullying, mental health impact, and content exposure. These carry the most weight. Algorithmic design functions as an amplifier of those harms and carries a moderate weight. Age verification, privacy practices, and parental controls are evaluated but carry less influence because the evidence shows they are less directly tied to measurable harm outcomes, even though they remain important.
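
For readers who want the tiered weighting in one place, the sketch below (written in Python) shows how a weighted traffic-light score could be computed. The specific weights, the 0-to-10 per-criterion risk scores, and the red/yellow/green cutoffs are illustrative assumptions invented for this sketch; the site's ratings come from reviewing the evidence summarized below, not from this exact formula.

    # Illustrative only: the numeric weights and cutoffs are hypothetical
    # placeholders mirroring the heavy / moderate / lighter tiers described above.
    CRITERIA_WEIGHTS = {
        "predator_risk": 3,       # heavily weighted
        "cyberbullying": 3,       # heavily weighted
        "mental_health": 3,       # heavily weighted
        "content_exposure": 3,    # heavily weighted
        "algorithmic_design": 2,  # moderate: amplifier of the four above
        "age_verification": 1,    # lighter weight
        "privacy_data": 1,        # lighter weight
        "parental_controls": 1,   # lighter weight
    }

    def rate_platform(scores):
        """Combine per-criterion risk scores (0 = low risk, 10 = high risk)
        into a traffic-light rating using the weights above."""
        total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
        worst = 10 * sum(CRITERIA_WEIGHTS.values())
        risk = total / worst  # normalized to the 0.0-1.0 range
        if risk >= 0.6:       # hypothetical cutoff
            return "Red - Significant Concern"
        if risk >= 0.3:       # hypothetical cutoff
            return "Yellow - Use With Caution"
        return "Green - Lower Risk"

    # A platform scoring high on the four heavy criteria lands in red even if
    # age verification, privacy, and parental controls score well.
    print(rate_platform({
        "predator_risk": 9, "cyberbullying": 8, "mental_health": 8,
        "content_exposure": 9, "algorithmic_design": 7,
        "age_verification": 3, "privacy_data": 4, "parental_controls": 2,
    }))  # Red - Significant Concern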

Contact & Predator Risk. An estimated 500,000 online predators are active on any given day, with children aged 12 to 15 most frequently targeted (FBI; Child Crime Prevention & Safety Center). A 2022 JAMA Network Open study found that 15.6% of U.S. minors experienced online sexual abuse before age 18 (Finkelhor et al., 2022). NCMEC online enticement reports jumped from 293,000 to more than 518,000 in the first half of 2025 alone. The Childlight Global Index estimated that 302 million children worldwide were subjected to online sexual exploitation in the past year, and an estimated 82% of online child sex crimes originate on social media platforms.

Cyberbullying & Harassment. Cyberbullying victimization is consistently associated with increased depression, anxiety, loneliness, and suicidal ideation among adolescents (Nixon, 2014; Kowalski et al., 2014). The CDC's 2023 Youth Risk Behavior Survey found frequent social media use was associated with higher rates of bullying victimization and persistent sadness (Young et al., 2024, MMWR). Pew Research Center found that 46% of U.S. teens have experienced at least one form of cyberbullying (2022). A 2026 meta-analysis of 27 longitudinal studies confirmed a consistent association between cyberbullying victimization and mental health symptoms across age groups (Lee et al., 2026).

Mental Health Impact. The U.S. Surgeon General's 2023 Advisory stated there is ample evidence that social media can pose a profound risk of harm to children, noting the adolescent brain between ages 10 and 19 is in a highly sensitive developmental period. A cohort study of 6,595 adolescents linked 3+ hours of daily use to increased internalizing mental health problems (Riehm et al., 2019, JAMA Psychiatry). The APA issued its own advisory in 2023, and the Surgeon General called for a warning label on platforms in June 2024.

Content Exposure Risk. Amnesty International's 2025 investigation found test accounts simulating 13-year-olds on TikTok were exposed to predominantly depressive content within 15 to 20 minutes, with self-harm and suicide content appearing after 3 to 4 hours (Amnesty International, POL 40/0360/2025). Research in Body Image found that algorithms provide content that is more extreme, less monitored, and designed to keep users engaged (Harriger et al., 2022). The Surgeon General's advisory cited deaths linked to self-harm content and dangerous viral challenges on social media platforms.

Algorithmic Concerns. A study examining over 1,000 social media videos found recommendation systems normalize radical content and guide young people into progressively extreme material (PMC, 2025). The WeProtect Global Alliance found high-risk grooming situations can develop in as little as 45 minutes in algorithmic environments. A 2023 Science Advances study found algorithmic recommendations generated only a small portion of traffic to extremist content, with most driven by users who sought it out (Chen et al., 2023). Algorithms carry a moderate weight because they are a significant delivery mechanism for harm, but the direct harms they deliver are the more critical factor.

Age Verification. COPPA requires parental consent for data collection from children under 13, but age verification mechanisms are easily bypassed. Children routinely create accounts by entering a false birthdate (American University JGSPL, 2020). An Internet Safety Technical Task Force found mandatory age verification is a poor solution that can itself violate privacy. The FTC acknowledged in its 2024 rulemaking that self-declaration has significant effectiveness limitations.
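
To see that structural weakness concretely, the short sketch below (hypothetical Python, not any platform's actual implementation) shows how a self-declaration gate trusts whatever birthdate is typed.

    from datetime import date

    MIN_AGE = 13  # the COPPA-driven minimum most platforms use

    def age_gate(claimed_birthdate):
        """Return True if the self-declared birthdate clears the minimum age."""
        today = date.today()
        had_birthday = (today.month, today.day) >= (claimed_birthdate.month,
                                                    claimed_birthdate.day)
        age = today.year - claimed_birthdate.year - (0 if had_birthday else 1)
        return age >= MIN_AGE

    # A child born in 2016 is blocked when entering a truthful date...
    print(age_gate(date(2016, 5, 1)))  # False
    # ...and admitted the moment they type a false one.
    print(age_gate(date(2000, 5, 1)))  # True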

Privacy & Data Collection. COPPA's effectiveness is limited by age falsification and the actual knowledge standard (Mercatus Center, 2023). The FTC issued its largest COPPA settlement against Musical.ly (now TikTok) in 2019. Data privacy is a serious concern, but the research base linking collection practices directly to immediate measurable harm to children is less extensive than the evidence for predator contact, cyberbullying, and content exposure.

Parental Controls. Parental monitoring is protective. A 2025 Pediatric Research study found parents' screen habits directly predicted adolescent problematic use, and monitoring was associated with lower screen time (Nagata et al., 2025). Open dialogue and graduated autonomy are more effective than restrictive approaches alone (Symons et al., 2017). Controls carry less weight because their effectiveness depends on parental engagement and cannot override platform design choices that expose children to harm.

Additional sources informing this site: U.S. Surgeon General's Advisory on Social Media and Youth Mental Health (2023); Riehm et al., JAMA Psychiatry (2019); Mayo Clinic, Teens and Social Media; Scott et al., Sleep Health (2019); Nagata et al., Pediatric Research (2025); Geurts et al., Journal of Child and Family Studies (2022); Symons et al., Computers in Human Behavior (2017).

Detailed Platform Breakdown

Dig deeper into the research behind each rating.

YouTube

Video Platform
Min. Age: 13+ (Kids app available)

YouTube is the most-used platform among U.S. teens at 90%, and it fails on all four heavily weighted safety criteria. The autoplay algorithm pushes progressively extreme content, the platform has the highest cyberbullying rate of any social network at 79%, a $170M COPPA fine documents real harm to children, and predatory grooming through comments and livestreams is well-documented. A YouTube Kids app and parental controls exist but do not offset the core platform's failures.

Key Concerns

Autoplay Rabbit Holes · Comment Section Predators · Algorithm Pushes Extreme Content · $170M COPPA Fine · Restricted Mode Gaps · 500+ Hours Uploaded Per Minute
Full Research Details

Predator & Grooming Risk: YouTube's comment sections on videos featuring minors have been exploited by predators to leave sexually explicit remarks and to direct children to private messaging platforms like Discord or Snapchat. Grooming tactics documented on the platform include "love bombing" with excessive compliments, offering digital rewards such as gift cards or game credits, and working to isolate children from family and friends. Financial sextortion, one of the fastest-growing online crimes targeting minors in 2025, has been documented on YouTube where offenders posing as peers coerce teens into sending explicit images and then demand payment. YouTube raised the minimum age for live streaming to 16 in 2025 and requires adult presence for creators aged 13-15 appearing in streams. YouTube disables comments by default on videos featuring minors and uses machine learning to detect and remove predatory comments. These are meaningful structural protections that most platforms have not implemented, but the sheer volume of content (500+ hours uploaded per minute) means enforcement gaps persist.

Mental Health Impact: YouTube is the most-used online platform among U.S. teens, with a 2024 Pew Research Center survey finding that 90% of teens aged 13-17 use the platform. Research has documented that many teens use YouTube as a primary coping mechanism, turning to it for stress relief, distraction, and emotional comfort (Rideout & Robb, 2019). While this can provide genuine support, it also means teens may rely on algorithmic content rather than in-person relationships for emotional regulation. YouTube's autoplay feature and recommendation algorithm create conditions for extended viewing sessions that can crowd out sleep, exercise, and face-to-face interaction. The U.S. Surgeon General's 2023 Advisory noted that more than 3 hours per day of social media use is associated with double the risk of depression and anxiety symptoms. YouTube has implemented "take a break" reminders and bedtime notifications for teen accounts, and limits repeated recommendations about sensitive topics like body image for users identified as under 18. These features represent more proactive mental health protections than most platforms offer, but they are opt-in for older teens and depend on accurate age identification.

Content Exposure Risk: YouTube's content moderation challenge is enormous given the volume of uploads. Disturbing content targeting children has been a persistent problem, including videos that mimic children's content but contain violence, sexual themes, or frightening imagery. Examples documented by journalists and parents include popular cartoon characters depicted in violent or sexual situations, harmful "challenge" videos, and content promoting self-harm or eating disorders. YouTube's Restricted Mode, designed to filter out mature content, has been criticized as inconsistent, with parents and safety reviewers reporting that inappropriate content still surfaces. YouTube Kids, a separate app with curated content for children under 13, provides substantially stronger filtering through a combination of algorithmic screening and human moderators who review videos. However, YouTube itself acknowledges that "no algorithm is perfect" and that children may encounter content parents would not approve. In September 2025, the FTC fined Disney $10 million for failing to mark certain YouTube videos as "made for kids," which exposed children to targeted advertising and autoplay into inappropriate content, illustrating ongoing gaps in the content classification system.

Cyberbullying & Harassment: A Security.org study found that among all social networks, children on YouTube are the most likely to experience cyberbullying at 79%, followed by Snapchat at 69% and TikTok at 64%. YouTube's comment sections are the primary vector for harassment, with bullying taking the form of mean or hurtful comments, targeted harassment through response videos, brigading (coordinated abuse directed at a creator), and doxxing (sharing private information). YouTube provides tools for creators to moderate comments, including holding comments for review, filtering by keyword, and disabling comments entirely. Comments are disabled by default on videos featuring minors. YouTube's Community Guidelines prohibit harassment, threats, and content designed to shame or humiliate individuals. However, enforcement across billions of videos and comments relies heavily on automated systems that cannot always detect context-dependent bullying. The platform allows users to block and report other accounts, but the anonymous nature of many YouTube accounts limits accountability.

Algorithmic Concerns: YouTube's recommendation algorithm, which drives an estimated 70% of viewing time on the platform, has been documented creating "rabbit hole" effects where engagement with certain content leads to progressively more extreme recommendations. Experiments in 2025 showed that scrolling while logged out could lead to recommendations for age-restricted movie clips, fighting compilations, and instructions for making homemade weapons within one hour. For teens, YouTube has implemented safeguards including limiting repeated recommendations about sensitive topics (such as body image, fitness routines, and social aggression) and blocking recommendations for content that borders on policy violations. The platform has also introduced "content shelves" that surface authoritative sources for health-related queries. These teen-specific algorithmic protections are more developed than on most platforms, but they depend on the platform correctly identifying the user as a minor, which remains imperfect.

Age Verification: YouTube requires users to be 13 or older to create a Google account, enforced by self-reported birthdate during sign-up. In July 2025, YouTube announced AI-driven age estimation tools for the U.S. market that use account activity and viewing patterns to detect users under 18 and automatically apply protective measures. If the system identifies a user as a minor, it blocks age-restricted content, disables personalized ads, sends "take a break" reminders, and limits sensitive content repetition. Users incorrectly identified as minors can verify their age through government ID, credit card, or selfie. This represents one of the more technologically ambitious age verification efforts among major platforms, though the global rollout was still underway as of early 2026. YouTube was fined $170 million by the FTC in 2019 for COPPA violations, specifically for collecting personal information from children on channels YouTube knew to be child-directed, without parental consent. This settlement forced YouTube to create a system for identifying child-directed content and to stop behavioral advertising on those channels.

Privacy & Data Collection: YouTube, owned by Google/Alphabet, operates within one of the largest data collection ecosystems in the world. The platform collects viewing history, search history, device information, IP addresses, and behavioral patterns. The 2019 FTC settlement specifically addressed YouTube's practice of using persistent identifiers to track children on child-directed channels and deliver targeted advertising without parental consent. Following the settlement, YouTube implemented the "made for kids" designation system, which disables behavioral advertising, comments, and notification features on child-directed content. However, YouTube's broader data collection continues for general content. For supervised accounts, Google states that parents can manage data collection and ad settings. The platform does not collect data from YouTube Kids accounts in the same way as standard YouTube, and personalized advertising is not served on YouTube Kids.

Parental Controls: YouTube offers more parental oversight tools than most platforms. YouTube Kids provides a dedicated, curated environment for children under 13 with content filtering, no comments, limited search functionality, and timer controls. For teens 13+, Google's Family Link allows parents to create supervised YouTube accounts with restricted content settings, screen time limits, and activity monitoring. YouTube's Restricted Mode filters out content flagged as potentially mature, though its effectiveness has been criticized as inconsistent. In 2025, YouTube began testing supervised accounts that allow parents to choose from three content settings (Explore, Explore More, and Most of YouTube) based on their child's age and maturity. Parents using Family Link can see their teen's YouTube activity and manage app permissions. These tools, while imperfect, represent a substantially more developed parental control ecosystem than what is available on platforms like X, Reddit, or Discord.

Sources:

Pew Research Center (2024). Teens, Social Media and Technology Survey. 90% teen usage figure.
U.S. Surgeon General (2023). Social Media and Youth Mental Health Advisory.
Rideout, V. & Robb, M. B. (2019). The Common Sense Census: Media Use by Tweens and Teens.
FTC (2019). $170 Million YouTube COPPA Settlement. Google and YouTube fined for children's privacy violations.
FTC (2025). Disney fined $10M for COPPA violations on YouTube channels.
YouTube Help Center (2025). Child Safety Policy.
YouTube (2025). AI-powered age estimation for U.S. teen protections announcement.
Security.org (2025). Cyberbullying statistics: YouTube highest at 79% among platforms.
SafetyDetectives (2025). How to Keep Kids Safe on YouTube. Restricted Mode and YouTube Kids analysis.
Pritzker Hageman (2025). How YouTube Can Harm Kids and Teens. Algorithm, predator, and mental health analysis.
Google Family Link (2025). Supervised YouTube account documentation.
YouTube Kids Content Policies (2025). Filtering standards and moderator-approved playlists.

TikTok

Short-Form Video
Min. Age: 13+

TikTok's algorithm-driven infinite scroll, weak age verification, documented predator activity, and extensive data collection from minors place it firmly in the red category despite the platform's stated policies.

Key Concerns

Algorithmic Rabbit Holes · Predator Risk · Addictive Design · Data Collection · Self-Harm Content · Weak Age Verification
Full Research Details

Predator & Grooming Risk: TikTok restricts direct messaging for users under 16, but strangers can still interact with minors through comments, duets, stitches, and live streams. A Thorn/NCMEC analysis of sextortion reports (2020-2023) identified TikTok as a significant platform where initial contact between perpetrators and victims occurs. TikTok reported over 590,000 accounts removed for violating its minor safety policies in Q3 2024 alone (TikTok Transparency Report, 2024). Law enforcement agencies have documented cases where adults used TikTok to identify and groom children, often moving conversations to less monitored platforms like Snapchat or Discord. The Internet Watch Foundation reported that TikTok was increasingly referenced in reports of child sexual exploitation material being shared or linked. Even with DMs disabled for under-16 accounts, the public comment section creates a vector for predatory contact, and account privacy settings only apply if the minor has set up their account honestly and correctly.

Mental Health Impact: Published research has linked social media use, including TikTok specifically, to negative mental health outcomes in adolescents. A systematic review found that higher levels of social media usage were connected with worse mental health outcomes including depression and anxiety (Diedrichs et al., 2023). The U.S. Surgeon General's 2023 Advisory stated that social media poses a "profound risk of harm" to children and adolescents, noting that more than 3 hours of daily use is associated with double the risk of depression and anxiety symptoms. TikTok's design is built around infinite scroll, autoplay, and a variable-ratio reward system where users never know what the next video will be, which activates the same dopamine pathways associated with gambling and slot machines. The platform introduced screen time limits of 60 minutes per day for users under 18 in 2023, but these are easily bypassed by entering a passcode. An internal TikTok presentation leaked to the press acknowledged that compulsive usage was a problem, with the company's own research finding that the desire to stop using the app was a leading concern among users. The APA's 2023 advisory on social media and youth called for limits on design features that drive compulsive use, directly relevant to TikTok's core design.

Content Exposure Risk: This is one of TikTok's most documented failure points. Amnesty International's 2025 investigation created test accounts simulating 13-year-old users and found the algorithm served predominantly depressive content within 15 to 20 minutes of use, with self-harm and suicide content appearing after 3 to 4 hours (Amnesty International, POL 40/0360/2025). TikTok's Community Guidelines prohibit violent content, self-harm depictions, and sexual content including nudity, but the platform's reliance on user-generated content and algorithmic distribution means harmful material reaches minors before moderation can remove it. A 2023 Center for Countering Digital Hate study found TikTok recommended self-harm and eating disorder content to teen test accounts within minutes of joining and engaging with related content. The platform has introduced Restricted Mode and content maturity labels, but independent testing has consistently shown these measures fail to prevent exposure to harmful material. TikTok's "For You" page delivers content from accounts the user does not follow, meaning children are exposed to content from strangers by default.

Cyberbullying & Harassment: TikTok's Community Guidelines prohibit harassment, threatening behavior, and bullying. Users can report violations, and TikTok uses AI moderation tools to detect and remove content that violates its policies. The platform publishes transparency reports documenting enforcement actions, and as of 2025, TikTok has been proactively deactivating accounts and removing content for guideline violations (TikTok, 2025). However, TikTok's duet, stitch, and comment features create unique vectors for public humiliation and coordinated harassment that are harder to moderate than private messages. Pew Research Center found that 46% of U.S. teens have experienced at least one form of cyberbullying (2022), with video-based platforms creating new forms of harassment including mocking duets and derogatory stitches. The platform's massive scale and the speed of viral content mean that bullying content can reach millions of viewers before moderation intervenes. TikTok has introduced comment filtering, keyword blocking, and the ability to restrict comments to approved followers, but these tools require proactive setup by the user.

Algorithmic Concerns: TikTok's recommendation algorithm is one of the most aggressive in the industry. The "For You" page uses engagement signals, watch time, replays, and interactions to build a profile of user interests and serve increasingly targeted content. Research has documented that this system creates "rabbit hole" effects where users who engage with emotionally charged content are served progressively more extreme material. Amnesty International's test accounts demonstrated that once a simulated teen account engaged with depressive content, the algorithm accelerated delivery of self-harm and suicide material. A study examining over 1,000 social media videos found recommendation systems normalize radical content and guide young people into progressively extreme material (PMC, 2025). TikTok offers a "Family Pairing" feature that allows parents to link their account to their child's and manage some content settings, but this requires the parent to proactively set it up and the teen to accept the pairing. TikTok's algorithm does not provide users with meaningful control over what is recommended; the "Not Interested" button has been shown in testing to have inconsistent effects on subsequent recommendations.

Age Verification: TikTok does not require identity verification to create an account. The only age gate is a birthdate entry during sign-up, which any child can bypass by entering a false date. TikTok does not proactively search for accounts created by minors who have entered an inaccurate age. This means a child who enters a birthdate making them appear 18 or older gains access to the full platform with none of the teen safety restrictions applied. The FTC fined TikTok's predecessor Musical.ly $5.7 million in 2019 for collecting personal information from children under 13 without parental consent, the largest COPPA civil penalty at that time. In 2024, the DOJ filed a complaint against TikTok alleging continued COPPA violations, including collecting data from children the company knew to be under 13. TikTok has stated it removes accounts it identifies as belonging to underage users, but the self-declaration system remains the primary age gate.

Privacy & Data Collection: TikTok collects extensive personal information from all users including minors: name, phone number, location, IP address, device type, browsing history within the app, keystroke patterns, and biometric data including faceprints and voiceprints. The FTC's 2019 COPPA enforcement action established that the company had illegally collected data from children. The DOJ's 2024 complaint alleged TikTok continued to collect and retain children's data in violation of COPPA and a prior consent decree, including allowing children under 13 to create regular accounts and failing to delete their data when violations were identified. The platform's privacy policy discloses data sharing with third-party advertisers and business partners. TikTok's parent company ByteDance is headquartered in China, which has raised national security concerns resulting in bans from government devices in the U.S., EU, Canada, and multiple other jurisdictions. Whether or not these geopolitical concerns directly affect child safety, the scope of data collected from minors is among the most extensive of any social media platform.

Parental Controls: TikTok offers Family Pairing, which allows a parent to link their TikTok account to their teen's account. Through Family Pairing, parents can manage screen time limits, restrict or disable direct messaging, control who can comment on their child's videos, enable Restricted Mode to filter mature content, and set the account to private. TikTok also operates a Safety Center with guidance for parents. These are functional tools. However, they require the parent to have their own TikTok account, the teen must accept the pairing request, and the teen can unpair at any time. Screen time limits set by parents can be bypassed if the teen enters a passcode. Restricted Mode is not enabled by default and has been shown in independent testing to fail to block significant amounts of harmful content. The controls are better than nothing but do not override the platform's fundamental design, which is built around algorithmic content delivery optimized for engagement.

Sources:

TikTok (2025). Community Guidelines Enforcement Report 2025.
TikTok (n.d.). Community Guidelines. Retrieved March 9, 2026.
TikTok (n.d.). Safety Center. Retrieved March 9, 2026.
Diedrichs, P. C., et al. (2023). The impact of social media on the mental health of adolescents and young adults: A systematic review. Psychological Bulletin.
U.S. Surgeon General (2023). Social Media and Youth Mental Health Advisory.
Amnesty International (2025). "Driven Into the Darkness." TikTok and children's mental health. POL 40/0360/2025.
Center for Countering Digital Hate (2023). Deadly by Design: TikTok pushes harmful content to teen accounts.
Thorn & NCMEC (2024). Trends in Financial Sextortion report.
Federal Trade Commission (2019). Musical.ly COPPA enforcement, $5.7M penalty.
U.S. Department of Justice (2024). Complaint against TikTok Inc. and ByteDance Ltd. for COPPA violations.
American Psychological Association (2023). Health Advisory on Social Media Use in Adolescence.
Pew Research Center (2022). Teens and Cyberbullying 2022 survey data.
Internet Watch Foundation. Annual reporting on TikTok-related child exploitation material.
TikTok Transparency Report (2024). Q3 2024 enforcement data.

Instagram

Photo & Video Sharing
Min. Age: 13+

Instagram fails on nearly every safety criterion for minors. Meta's own internal research documented the harm and the company chose not to act on it.

Key Concerns

#1 Sextortion Platform · Internal Research Suppressed · Body Image Harm · Predator Recommendations · Algorithm-Driven Exposure · Engagement Over Safety
Full Research Details

Predator & Grooming Risk: A Thorn/NCMEC analysis of over 15 million sextortion reports (2020-2023) found Instagram is the number one platform where perpetrators initially contact victims (45.1%), where they threaten to distribute images (60%), and where they actually distribute sextortion images (81.3%). Court filings revealed that in 2023, Instagram's recommendation systems suggested nearly 2 million minors to adults seeking to groom children. An internal audit found over 1 million potentially inappropriate adults were recommended to teen users in a single day in 2022. A Meta safety researcher warned in internal emails that sexually inappropriate messages were being sent to an estimated 500,000 victims per day on English-language platforms alone. The New Mexico Attorney General filed suit in 2023 calling Meta the "world's single largest marketplace for pedophiles." A court filing also alleged that Meta had a "17x" policy allowing sex traffickers to post solicitation content 16 times before their accounts were suspended on the 17th strike.

Mental Health Impact: This is where Instagram has the most extensively documented evidence of harm of any social media platform. In 2021, whistleblower Frances Haugen leaked internal Facebook research showing the company knew Instagram was damaging teen mental health. Internal slides stated "We make body issues worse for 1 in 3 teen girls" and "One in five teens say that Instagram makes them feel worse about themselves." Facebook's own research found 13.5% of UK teen girls said Instagram worsened their suicidal thoughts, and 17% said it contributed to their eating disorders. Haugen testified before the Senate that the company continued to pursue strategies targeting younger users to increase engagement and sell more ads despite knowing these harms. Facebook paused development of Instagram Kids only after the internal research became public. Some researchers have noted limitations in these internal studies, but the fact that Meta's own research flagged these harms and the company chose profit over action is itself significant evidence.

Content Exposure Risk: Among minors who have shared self-generated child sexual abuse material, 75% use Instagram daily (Protect Children, 2024). A Wall Street Journal investigation in 2023 found Instagram's recommendation engine was actively promoting networks that exploited children. A later internal Meta study reported by Reuters found the platform's recommendation systems could expose vulnerable teenagers to increasingly harmful material. Instagram's Explore page and Reels algorithm surface content based on engagement patterns, meaning teens who interact with appearance-focused or emotionally charged content receive more of it. Sensitive Content Controls exist but were not set to the most restrictive level by default for all users until the Teen Accounts rollout in late 2024.

Cyberbullying & Harassment: Instagram's public-facing design built around likes, comments, followers, and visual presentation creates well-documented vectors for bullying and social exclusion. The Pew Research Center found that 46% of U.S. teens have experienced at least one form of cyberbullying (2022). Instagram's DM system has been widely used for targeted harassment. The platform has introduced comment filtering, a "Restrict" feature, and anti-bullying prompts, but the core design centered on public social comparison remains unchanged.

Algorithmic Concerns: A Meta employee cited in court filings stated that Facebook's recommendation feature was "responsible for 80% of violating adult/minor connections." The Wall Street Journal documented that Instagram's recommendation engine actively promoted pedophile networks. The platform's algorithm optimizes for engagement, which amplifies emotionally provocative and appearance-focused content. Instagram has made changes since these findings, but the core architecture remains engagement-driven and designed to maximize time on platform.

Age Verification: Instagram's minimum age is 13, enforced only by self-declaration at sign-up. No ID verification is required to create an account. Meta introduced Teen Accounts in late 2024 with some age-gated features, but the fundamental age gate remains a birthdate entry that any child can bypass by entering a false date of birth.

Privacy & Data Collection: Meta collects extensive data from all users including minors, including location data, browsing behavior, interaction patterns, and device information. The platform's entire business model is built on advertising revenue driven by behavioral data collection. Meta has faced multiple FTC investigations and enforcement actions related to data practices.

Parental Controls: This is the one area where Instagram has made meaningful recent changes. Teen Accounts launched in late 2024 include automatic private accounts for users under 16, restricted DMs from non-connections, sensitive content restrictions set to the highest level by default, and parental supervision tools allowing parents to see who their teen messages, set time limits, and restrict content. These are functional controls, though they came only after years of documented harm and public pressure, and their effectiveness at overriding the platform's core design problems remains to be seen.

Sources:

Thorn & NCMEC (2024). Trends in Financial Sextortion. Analysis of 15M+ CyberTipline reports, 2020-2023.
Haugen, F. (2021). Testimony before Senate Commerce Subcommittee on Consumer Protection. Leaked internal Facebook research documents ("The Facebook Files").
Wall Street Journal (2023). Investigation into Instagram's recommendation engine and child exploitation networks.
Mercury News / Court Filings (2025). Plaintiffs' filing in consolidated litigation against Meta, Google, Snap, and TikTok.
FOX Business / Court Records (2025). Internal Meta safety researcher emails re: 500K daily exploitation estimates.
New Mexico Attorney General (2023). Complaint against Meta Platforms under Unfair Trade Practices Act.
NCOSE (2024). Instagram platform assessment. National Center on Sexual Exploitation.
Protect Children (2024). Tech Platforms Used by Online Child Sex Abuse Offenders. Suojellaan Lapsia.
Canadian Centre for Child Protection (2023). Sextortion platform analysis.
Parents Together (2023). Afraid, Uncertain, and Overwhelmed: Survey of 1,000 parents.
Meta Platforms (2024). Teen Accounts announcement and safety feature documentation.
Pew Research Center (2022). Teens and Cyberbullying 2022 survey data.

Snapchat

Messaging & Stories
Min. Age: 13+

Snapchat's disappearing messages give adults direct, unmonitored, evidence-destroying access to minors. NCMEC ranked it the #1 platform for online enticement of children in 2023, and multiple state attorneys general have called it a breeding ground for predators.

Key Concerns

#1 Platform for Child Enticement · Disappearing Evidence · Sextortion Crisis · Predator Access by Design · Body Dysmorphia Filters · Less Than 1% Parental Oversight
Full Research Details

Predator & Grooming Risk: This is where Snapchat's design poses the most serious, structurally embedded danger to children. Any adult can add a minor by username or phone number and send them photos, videos, and messages that disappear after viewing, leaving no record for parents, schools, or law enforcement to discover. NCMEC ranked Snapchat the #1 location where online enticement of minors occurred in 2023. According to the Thorn/NCMEC sextortion analysis (2020-2023), Snapchat is the #2 platform where perpetrators initially contact victims (31.6%), the #2 platform where sextortion images are actually distributed (16.7%), and the #1 secondary platform victims are moved to from other apps (35.8%). In 2023, Snap submitted approximately 690,000 CyberTipline reports to NCMEC. Snap's own internal emails, revealed through the New Mexico AG lawsuit, showed the company receives around 10,000 reports of sextortion per month, a figure acknowledged internally as likely only a fraction of actual abuse on the platform. The New Mexico Attorney General's undercover investigation set up a decoy account for a 14-year-old and within a day received friend requests from predatory accounts, including ones with openly exploitative usernames. Investigators found over 10,000 records linking Snapchat to child sexual abuse material on dark web sites in a single year. Sextortion scripts, step-by-step playbooks for blackmailing minors, circulate openly and had not been blacklisted by Snapchat at the time of the investigation. Documented criminal cases are extensive: a Virginia man was sentenced to 14 years in federal prison after meeting a 15-year-old on Snapchat and manipulating her into a sexual relationship (DOJ, 2025); a California man was sentenced to 270 years to life after using Snap Map to identify and groom multiple young girls (Placer County DA, 2025); a Texas resident was sentenced to 10 years for using Snapchat to coerce a minor over a two-year period (HSI/ICE, 2024); and a New York former school teacher was arrested for posing as a teen to solicit images from children under 14 through Snapchat (NYSP, 2025). In a Florida undercover operation in 2025, 48 people were arrested in a single six-day sting, with officials noting Snapchat as the most commonly used platform by suspects targeting children. Snap's internal documents revealed that leadership was warned about the Quick Add feature directly facilitating predatory connections, but executives complained that protecting minors would create disproportionate costs and questioned how playing defense against pedophiles would help unlock growth. More than 600 lawsuits naming Snap have been filed as part of the social media MDL and coordinated proceedings, and state attorneys general in New Mexico, Utah, Florida, Texas, Kansas, and Nevada have filed or are pursuing enforcement actions against the company.

Mental Health Impact: A study published in Child and Adolescent Psychiatry and Mental Health found that heavy Snapchat users had a 43.4% probability of experiencing psychological health problems, including mood changes, emotional stress, and symptoms of depression, anxiety, sleep disruption, and loneliness. The study identified Facebook and Snapchat as the social networks most strongly associated with psychological health problems in adolescents. Research on Snapchat's appearance-altering filters found that teen users who used these filters for 15 minutes or more per day had a 66% higher chance of developing body dysmorphic symptoms by age 17 compared to low-use peers, with each year of regular filter use correlated with declining body image flexibility, meaning adolescents had increasing difficulty maintaining a realistic perception of their own appearance. Snapchat's Streaks feature, which tracks consecutive days of messaging between users, creates social pressure to maintain daily use or risk losing the streak. Multiple lawsuits, including from the Utah and Texas attorneys general, cite Streaks and other engagement mechanics as intentionally addictive design features that exploit the vulnerabilities of developing minds. Snap's own research found that nearly two-thirds of Gen Z teens and young adults on the platform reported that they or their friends had been targeted in sextortion schemes, a finding with direct and severe mental health consequences. At least 46 teen boys have died by suicide in the U.S. since 2021 after falling victim to sextortion, with Snapchat and Instagram identified as the primary platforms where these crimes occur. In some cases, teens took their own lives within hours of being contacted by scammers. A 13-year-old boy in South Carolina died by suicide in 2023 after being sextorted on Snapchat for $35 a day; his family has filed suit against the company.

Content Exposure Risk: A 2016 lawsuit filed in California alleged that Snapchat's Discover feature exposed minors to sexually explicit material without differentiating between content delivered to minors and adults. The problem has persisted and expanded. The Texas Attorney General's 2026 lawsuit against Snap alleges the platform violates deceptive trade practices by listing itself as appropriate for ages 12+ on app stores while frequently exposing users to profanity, sexual content, nudity, and drug use. The New Mexico investigation found that sexually explicit materials and predators are frequently recommended to minors through Snapchat's features. The Discover feed and Spotlight, Snapchat's public video feature, surface content based on engagement algorithms, meaning teens who interact with appearance-focused or emotionally charged content will receive more of it. The disappearing nature of Snaps means harmful content sent directly to minors through DMs cannot be moderated after the fact and leaves no trail for parents to review. Snap's "My Eyes Only" feature, which allows users to store photos behind a passcode, is documented in lawsuits as being used to store child sexual abuse material by both predators and coerced minors.

Cyberbullying & Harassment: Snapchat's 2025 transparency report documented 4,103,797 reports of harassment or bullying during the reporting period, with enforcement actions taken against 700,731 pieces of content or accounts affecting 584,762 accounts. The platform reports a response time of approximately 3 minutes from detection to action. Profiles for users under 18 are set to private by default, and users under 16 cannot switch to public. Despite these measures, a 2024 national report by the Cyberbullying Research Center found that 29% of Snapchat users reported experiencing harassment or exclusion behaviors on the app. The disappearing message design makes it difficult to document bullying for parents, school officials, or law enforcement to act on, and the ephemerality of content can embolden aggressors who know evidence will vanish. Snapchat previously hosted the anonymous messaging app YOLO through its SnapKit integration, which was linked to cyberbullying, multiple teen suicides, and a class-action lawsuit before being shut down.

Algorithmic Concerns: Snapchat uses algorithms to personalize content in its Discover and Spotlight features, analyzing user behavior such as watch time, video completion, and skip patterns to predict what content a user is most likely to engage with. The platform also factors in demographic information including age, location, language, and activity patterns. The Utah AG's 2025 lawsuit alleges Snapchat uses design features intended to increase engagement among young users. The New Mexico lawsuit revealed that Snap's Quick Add feature, which suggests new friends, directly facilitated predatory behavior by allowing bad actors to exploit gaming platforms to find underage users, add them on Snapchat, and then let the algorithm suggest additional minor friends. Snap's internal investigation of one predator found that 6.68% of the underage victims he connected with were added through the Mentions feature, which allowed him to discover and follow friends of friends. When presented with potential fixes for both Quick Add and Mentions, Snap leadership resisted, with one employee questioning how proactively defending against potential pedophiles would help the company grow.

Age Verification: Snapchat requires users to be at least 13 to create an account. If a user enters a birthdate indicating they are under 13, the registration is blocked. However, age is entirely self-reported with no ID verification required. Any child can bypass the age gate by entering a false birthdate. Common Sense Media recommends Snapchat for ages 16+, three years older than the platform's own minimum, citing disappearing messages, sexual content, and Snapstreak pressure as major concerns.

Privacy & Data Collection: Snap collects extensive data from all users including minors, including location data, device information, content interaction patterns, and contact lists. In 2014, the FTC settled charges against Snap for deceiving consumers about the disappearing nature of messages and for collecting user contact information without consent. That settlement placed Snap under a 20-year consent order requiring independent privacy audits. A 2024 FTC staff report examining nine major platforms, including Snap, found that social media companies broadly engage in mass data collection for monetization, with inadequate protections for minors. Snap's location-sharing feature, Snap Map, shows users' live locations on an interactive map, and while it defaults to off, it has been documented in criminal cases as a tool predators use to identify and locate nearby children. The Placer County DA's office specifically cited Snap Map as a feature that lets predators target children who appear nearby.

Parental Controls: Snapchat's Family Center, launched in 2022, allows parents to see who their teen is friends with, who they have messaged in the past seven days, request location, and adjust some content settings. However, parents cannot see the content of any messages, cannot set time limits, and cannot lock the app remotely. The teen must accept the Family Center invitation and can remove parental access at any time without notification. Teens can also create secondary accounts in about four seconds and use those for any activity they want to keep hidden. According to Snap CEO Evan Spiegel's own testimony to the Senate Judiciary Committee, only approximately 200,000 parents use Family Center and about 400,000 teens have linked accounts, which represents less than 1% of Snapchat's underage user base. Sensitive content restrictions are only available to teens connected through Family Center, meaning the other 99%+ of teen users do not benefit from these protections. Multiple independent reviews, including from Protect Young Eyes and Bark, have described Family Center as inadequate. Court documents from Utah's 2025 suit revealed that 96% of abuse reports filed through Snapchat's in-app reporting feature are not reviewed by the Trust and Safety team.

Sources:

Thorn & NCMEC (2024). Trends in Financial Sextortion. Analysis of 15M+ CyberTipline reports, 2020-2023.
NCMEC (2024). CyberTipline 2023 data: #1 platform for online enticement of children.
NCOSE (2024). Snapchat Platform Assessment. National Center on Sexual Exploitation.
New Mexico Department of Justice (2024). Lawsuit against Snap, Inc. Undercover investigation findings.
New Mexico Department of Justice (2025). Legal victory: court rejects Snap's Section 230 immunity claim.
TechPolicy.Press (2024). Unsealed Snap Inc. lawsuit: internal documents on Quick Add, Mentions, and growth-over-safety decisions.
NPR / Kerr, D. (2024). Snapchat brushed aside warnings of child harm, documents show.
After Babel (2025). Snapchat is Harming Children at an Industrial Scale. Analysis of 600+ lawsuits.
U.S. DOJ, Eastern District of North Carolina (2025). Snapchat predator sentenced to 14 years federal prison.
Placer County DA (2025). Lincoln man sentenced to 270 years to life; AppSafe Initiative launched.
HSI / ICE (2024). South Texas resident sentenced for using Snapchat to entice minor.
New York State Police (2025). Former school teacher arrested for soliciting minors on Snapchat.
Florida Attorney General (2025). 48 arrested in record-breaking undercover child predator operation; Snapchat most commonly used platform.
Florida Attorney General (2026). Arrest and extradition of Snapchat/Roblox child predator.
Boer, M., et al. (2024). Scrolling through adolescence: social networks and psychosocial health. Child and Adolescent Psychiatry and Mental Health.
NIMH-Stanford Adolescent Filter Impact Study (2025). Adolescent filter use and body dysmorphic symptoms.
Snap Inc. (2025). Snapchat Transparency Report (H1 2025). 4.1M harassment reports, 700K+ enforcement actions.
Snap Inc. (2025). Community Guidelines: Harassment and Bullying.
Vogels, E. A. (2022). Teens and cyberbullying 2022. Pew Research Center.
Cyberbullying Research Center (2024). 29% of Snapchat users report harassment/exclusion.
Spiegel, E. (2024). Senate Judiciary Committee Questions for the Record. Family Center adoption data.
FTC (2014). Consent order against Snapchat for deceptive privacy practices. 20-year monitoring agreement.
FTC (2024). A Look Behind the Screens: Staff report on data practices of social media and video streaming services.
Texas Attorney General (2026). Lawsuit against Snap, Inc. for deceptive trade practices and child harm.
Utah Attorney General & Dept. of Commerce (2025). Lawsuit against Snap, Inc. Addictive design, My AI concerns.
Kansas Attorney General (2025). Lawsuit against Snap, Inc. for consumer protection violations.
Nevada Attorney General (2025). Lawsuit against Snap, Inc.; Nevada Supreme Court rejects Snap's appeal (2026).
Bloomberg Businessweek (2024). Investigation into sextortion targeting teen boys on Instagram and Snapchat.
Canadian Centre for Child Protection (2023). #1 platform used for sextortion.
Common Sense Media. Snapchat review: recommended age 16+.

Roblox

Gaming & Social Platform
Min. Age: No minimum to create an account (ESRB rates T for Teen)

Roblox is not a game — it is a platform hosting millions of user-generated games with inconsistent moderation, documented predator activity, and gambling-like spending mechanics targeting children.

Key Concerns

Predator Grooming · Child Gambling Mechanics · Weak Age Verification · Exploitative Spending · Inappropriate Content · Stranger Contact
Full Research Details

Platform Overview: Roblox is not a game. It is a platform hosting millions of user-generated "experiences" with wildly inconsistent moderation. In Q2 2025, Roblox reported over 100 million daily active users, with approximately 42% of all players under the age of 13 (Roblox Corporation, 2025). The ESRB rates Roblox "T for Teen" with a "Diverse Content: Discretion Advised" descriptor, and the European PEGI system changed its classification from "suitable for 7 and over" to "parental guidance recommended" due to the volume of unregulated user-generated content. More than 40% of its user base is under 12 years old (NCOSE, 2025). Roblox fails on nearly every safety criterion for minors. The scale and severity of documented harm, from predator grooming to gambling mechanics to sexually explicit content accessible to children, place it squarely in the red category.

Predator & Grooming Risk: This is where Roblox has the most alarming and extensively documented record of harm. A 2024 Bloomberg Businessweek investigation compiled data showing that since 2018, at least 24 people in the United States had been arrested on charges of abducting or sexually abusing children they had groomed on Roblox. Those arrested included individuals already on sex offender registries, a sheriff's deputy, a third-grade teacher, and a nurse. By mid-2025, Aftermath reported an additional 6 arrests since the start of that year. By early 2026, at least 30 people total had been arrested in the U.S. in connection with grooming on the platform. A 15-year-old boy from Texas died by suicide in 2024 after allegedly being groomed and blackmailed on the platform. Online child exploitation groups including 764 and CVLT have been documented operating on Roblox, using it to groom children and coerce them into creating explicit sexual and self-harm material. On darknet forums, adults trade tips for developing relationships in Roblox chats, including using emoji to refer to apps like Snapchat and Discord where conversations can move unfiltered. Predators have used Robux, Roblox's virtual currency, to lure children into sending photographs or developing relationships. Roblox reported 24,522 child exploitation incidents to NCMEC in 2024, up from an estimated 13,000+ in 2023. In October 2024, Hindenburg Research published a detailed report describing the platform as exposing children to grooming, pornography, violent content, and extremely abusive speech. NCOSE named Roblox to its 2024 Dirty Dozen List of mainstream contributors to sexual exploitation. As of early 2026, at least seven U.S. jurisdictions, including the state attorneys general of Texas, Nebraska, Tennessee, Florida, Kentucky, and Louisiana, plus Los Angeles County in California, have filed lawsuits against Roblox alleging the platform enabled predators to target children. A federal multidistrict litigation (MDL No. 3166) has been created to consolidate child sexual exploitation and assault claims.

Mental Health Impact: Roblox's platform design includes every psychological mechanism known to create compulsive use. Because Roblox makes it easy to find and play a variety of games, developers compete for attention by incorporating manipulative and addictive features, including variable-ratio reward schedules, time-limited events, and social pressure mechanics. A Psychology Today analysis noted that competition between game developers means the most popular games are often the most addictive (Fishman, 2025). Children are particularly susceptible because their cognitive and emotional control systems are still developing, making it harder to regulate screen time or resist in-game rewards. The platform's social features create additional pressure: children report fear of missing out if they don't play, and the economic system around Robux creates social hierarchies where children who can't spend money feel excluded. Families pursuing legal claims report escalating time on the platform, declining academic performance, social withdrawal, sleep disruption, and emerging mental health conditions linked to excessive gameplay. The WHO's inclusion of gaming disorder in the ICD-11 reflects growing recognition that platform design features can drive clinically significant compulsive behavior in children.

Content Exposure Risk: Roblox's user-generated content model allows anyone to create and publish games, and the platform has repeatedly failed to prevent children from accessing sexually explicit, violent, and otherwise harmful material. NCOSE researchers created test accounts and were easily exposed to age-inappropriate content, including games where avatars simulated sexual acts, virtual strip clubs, and games depicting graphic violence. A 2024 investigation found over 600 games referencing Sean "Diddy" Combs with sexually suggestive titles, and hundreds of games displaying variations of convicted sex trafficker Jeffrey Epstein's name, including "Escape to Epstein Island." Tennessee's attorney general lawsuit documented games including "Public Bathroom Simulator" that were open to users under 10 and allowed avatars to simulate sexual activity. Children have reported stumbling into "condo games" where they were encouraged to visit virtual bedrooms and engage in simulated sexual acts. The platform has also hosted mass shooting recreations modeled after Sandy Hook, Columbine, and Uvalde, with victim names on leaderboards. In July 2025, an online cult called "Spawnism" emerged within the Roblox community, with predators targeting vulnerable children and coercing them into self-harm and degrading acts on camera through connected Discord servers. Even after Roblox implemented content maturity ratings, a December 2025 test by Malwarebytes found that a child account could still find communities linked to cybercrime and fraud-related keywords.

Cyberbullying & Harassment: Roblox's in-game chat systems and social features create significant vectors for bullying and harassment. A 2024 Pew Research Center report found that 80% of teens aged 13 to 17 think harassment in video games is a problem for people their age, and more than 40% said they had been called an offensive name while playing. A study found that nearly 10% of participants identified Roblox specifically as the game with the highest incidence of cyberbullying (Lewis, 2023). A Thorn Youth Perspectives report from 2023 found that 41% of children aged 9 to 17 reported being bullied online. Roblox's anonymous, avatar-based environment amplifies the "online disinhibition effect," where users behave more aggressively because they perceive fewer consequences. Research indicates the odds of suicidal ideation are 3.1 times greater for cyberbullying victims compared to those not bullied. While Roblox offers block, mute, and report tools, the sheer volume of user interactions across millions of experiences means moderation is reactive and inconsistent. A Northeastern University study found that longer suspensions reduced reoffense rates for first-time violators by 13%, but the impact faded over time, and frequent violators saw only a 4% decrease, suggesting the platform's enforcement mechanisms are insufficient for repeat offenders.

Gambling-Like Spending Mechanics: A 2025 University of Sydney study by Dr. Taylor Hardwick and Professor Marcus Carter interviewed 22 children aged 7 to 14 and found that children struggle with complex and deceptive in-game spending features. Children in the study described Roblox's randomized reward systems as "literally just child gambling," "scams," and "cash grabs." Much of Roblox's US$3.6 billion in 2024 revenue was generated through in-game microtransactions. Popular games like Adopt Me!, Blox Fruits, and Pet Simulator 99 include loot box-style mechanics where players spend real money (converted to Robux, then to in-game currencies) on chance-based rewards. Despite Australia's 2024 ban on loot boxes for users under 15, the study found these features remained accessible to underage accounts. An 11-year-old in the study described navigating the virtual currency system as "scary." The Texas Attorney General's lawsuit alleged that Roblox's Avatar Store sells "rare" items at inflated prices that children seek to purchase to keep up with peers, and that children often tell others, including strangers, that they will do "anything for Robux." Under their loot-box regulations, the Netherlands and Belgium have also restricted certain Roblox games to reduce children's exposure to gambling-like mechanics.
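
For parents who want to see why researchers compare these mechanics to gambling, the sketch below (written in Python, with hypothetical drop rates and prices rather than any real Roblox game's figures) simulates a chance-based reward box. Each purchase is an independent low-probability draw, so a child chasing a 1-in-100 item will open, on average, one hundred boxes before it drops:

```python
import random

# Hypothetical numbers for illustration only -- not actual Roblox figures.
DROP_RATE = 0.01          # 1% chance the "rare" item drops per box
ROBUX_PER_BOX = 50        # hypothetical price of one box in Robux
USD_PER_ROBUX = 0.0125    # approximate retail rate (about 400 Robux for $4.99)

def boxes_until_drop() -> int:
    """Open boxes until the rare item drops; return how many were opened."""
    opened = 0
    while True:
        opened += 1
        if random.random() < DROP_RATE:
            return opened

# Average over many simulated children chasing the same item.
trials = 10_000
avg_boxes = sum(boxes_until_drop() for _ in range(trials)) / trials
avg_usd = avg_boxes * ROBUX_PER_BOX * USD_PER_ROBUX

print(f"Average boxes opened: {avg_boxes:.0f} (theory: {1 / DROP_RATE:.0f})")
print(f"Average real-money cost: ${avg_usd:.2f}")
```

This open-ended cost structure is what the children in the Sydney study were intuiting when they called these systems "scams": the true price of the desired item is never stated, only discovered through repeated spending.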

Algorithmic Concerns: Roblox does not use a traditional content recommendation algorithm in the same way as social media platforms, but its discovery system, which surfaces popular games based on engagement metrics, creates a similar effect. Games that maximize time-on-platform through addictive mechanics rise to the top of the charts, while games with ethical design choices lose players to more manipulative alternatives. This creates a race to the bottom where the most psychologically exploitative games get the most visibility. The platform's chat system has historically compounded this problem: until late 2024, Roblox's settings forced users to choose between allowing messages from "Everyone" or "No one," meaning children who wanted to chat with friends had to also allow strangers to contact them (Psychology Today, 2025). Predators exploited this binary system to identify and contact children within games. Roblox launched Sentinel in 2025, an AI system designed to identify grooming patterns across chat logs, but child safety advocates and researchers argue that no moderation system can monitor every interaction in real time across a platform of this size.

Age Verification: Until November 2024, Roblox's only age gate was a birthdate entry at sign-up that any child could bypass by entering a false date. The Texas Attorney General's lawsuit alleged that Roblox's sign-up process actually included a prompt that encouraged new users to lie about their age. Before the November 2024 updates, parental protections were turned off by default, meaning any child who listed their age as 13 or older could access every experience on the platform, message any user, and be contacted by any adult. In January 2026, Roblox mandated facial age verification through a third-party service (Persona) for all users worldwide wishing to access chat features. Users are now placed into one of six age brackets designed to prevent cross-generational communication. However, early reports indicate significant problems: the system has misidentified children as adults and vice versa, and parents frustrated by technical hurdles have begun completing verification using their own faces on their children's accounts, inadvertently placing minors in the 21+ adult category. Georgetown University research found that Roblox allows users under 13 to create accounts without immediately requiring parental consent, requesting only in its Privacy Policy that under-13 users obtain consent, with nothing in the sign-up process to enforce this.

Privacy & Data Collection: A 2025 class action lawsuit (Garcia v. Roblox Corporation) alleged that Roblox surreptitiously intercepted users' electronic communications and harvested detailed personal data through covert tracking code, including canvas fingerprinting and audio fingerprinting techniques that generate unique device signatures, persisting across sessions and devices. The lawsuit alleged Roblox engaged in personalized advertising and cross-site tracking of users, including children, without obtaining verifiable parental consent as required by COPPA. Approximately 46% of Roblox's daily active users were under the age of 13 at the time of filing. Roblox is kidSAFE COPPA-certified, but multiple lawsuits and investigations challenge whether the company's actual data practices match its certifications. The SEC Division of Enforcement and the FTC have both opened probes into the platform. Roblox monitors users' in-game behavior, the experiences they use, and who they interact with. While this data is used partly for safety purposes (detecting age discrepancies), it also raises questions about the scope of profiling being conducted on children and what derivative data is being created and shared with advertising partners.
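
To illustrate what the complaint means by "fingerprinting," here is a simplified conceptual sketch (Python; the attribute names and values are invented for illustration, and real canvas and audio fingerprinting run inside the browser, measuring subtle differences in how each device renders images or processes sound). Many individually innocuous traits are hashed together into one stable identifier that works even when cookies are blocked or cleared:

```python
import hashlib

# Invented example attributes; real fingerprinting scripts measure dozens,
# including how the device draws a hidden canvas image or processes a test tone.
device_traits = {
    "user_agent": "Mozilla/5.0 (iPad; CPU OS 17_0 like Mac OS X) ...",
    "screen": "1024x1366x24",
    "timezone": "America/Chicago",
    "canvas_render_hash": "9f2c4d...",  # hash of a hidden test image the device drew
    "audio_render_hash": "71ab03...",   # hash of how the device processed a test tone
}

# Concatenate the traits in a fixed order and hash them into one identifier.
blob = "|".join(f"{k}={v}" for k, v in sorted(device_traits.items()))
fingerprint = hashlib.sha256(blob.encode()).hexdigest()

# The same device yields the same ID on every visit -- no cookie, no login --
# so activity can be linked across sessions even after the user clears data.
print(fingerprint[:16])
```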

Parental Controls: This is the one area where Roblox has made meaningful recent changes, though they came only after years of documented harm, mounting lawsuits, and intense public pressure. In November 2024, Roblox introduced parental controls including activity approvals, time limits, remote account management, and automatic blocking of direct messages for users under 13. Parents can now disable in-app microtransactions and limit which games children can access. Content maturity labels were added so parents can restrict access based on age-appropriateness. Social hangout games were restricted to players over 13, and in 2025, those featuring private locations like bedrooms and bathrooms were restricted to users 17 and above. However, significant limitations remain. Nebraska's lawsuit noted that even with parental oversight enabled, parents still lack visibility into who their child is messaging and what the messages say. Parents can only connect to a child's account if the child is under 13, meaning accounts for users 13 and older have no meaningful parental oversight. NCOSE's assessment found that Roblox's parental controls create a false sense of security: many parents believe they are disabling chat entirely, but are not told it only stops some forms of communication. The platform has only 3,000 moderators for over 150 million active users, a ratio far worse than platforms like TikTok, which has roughly three times the users but more than ten times the moderators.

Sources:

Bloomberg Businessweek (2024). "Roblox's Pedophile Problem." Investigation and arrest data compilation.
Hindenburg Research (2024). "Roblox: Inflated Key Metrics for Wall Street and a Pedophile Hellscape for Kids."
Hardwick, T. & Carter, M. (2025). "They're Scamming Me: How Children Experience and Conceptualise Harm in Game Monetization." University of Sydney.
National Center on Sexual Exploitation (2024). Roblox platform assessment and Dirty Dozen List.
NCOSE (2025). Press release calling for passage of Kids Online Safety Act.
Fishman, A. (2025). "Roblox Isn't a Game." Psychology Today.
Pew Research Center (2024). Teens, Video Games, and Civility survey data.
Thorn (2023). Youth Perspectives on Online Safety report.
Gleason, J. et al. (2025). Longer suspension study. Northeastern University / Roblox.
Lewis (2023). Cyberbullying on gaming platforms study.
Garcia v. Roblox Corporation (2025). Class action complaint, C.D. Cal. No. 2:25-cv-03476.
NBC News (2025). "Nebraska becomes the latest to sue Roblox alleging child safety failures."
Tennessee Attorney General (2025). Lawsuit complaint against Roblox Corporation.
Texas Attorney General (2025). Petition against Roblox Corporation.
LA County Counsel (2026). Complaint against Roblox under California Unfair Competition Law.
ParentsTogether (2024). Investigation of inappropriate Roblox games.
Georgetown University (2025). Analysis of age-related platform practices under COPPA.
Malwarebytes (2025). Child account test findings on Roblox.
Aftermath (2025). Roblox grooming arrest reporting.
Roblox Corporation (2025). Q2 2025 Earnings Report and safety feature documentation.
5Rights Foundation (2024). Response to Hindenburg Research findings.
Judicial Panel on Multidistrict Litigation (2025). MDL No. 3166 creation order.


Pinterest

Visual Discovery
Min. Age: 13+
!

Pinterest's visual search design is structurally safer than feed-driven platforms, and it has taken proactive steps like banning beauty filters and weight loss ads. However, a 2023 investigation exposed serious predator exploitation, and pro-anorexia content remains a persistent problem.

Key Concerns

Predator Board Exploitation · Pro-Anorexia Content · Self-Harm Persistence · Body Image Pressure · Weak Age Gate · Limited Parental Tools
Full Research Details

Predator & Grooming Risk: A March 2023 NBC News investigation revealed that grown men were openly creating sex-themed image boards filled with photos of young girls on Pinterest, with titles like "Sexy little girls" and "hot." One mother found her 9-year-old daughter's gymnastic videos had been saved to over 50 such boards. Pinterest's own recommendation engine was actively surfacing more images of children to users who sought them out. Senators Blackburn and Blumenthal sent formal demands for answers, writing that "it should not have taken national media coverage of such graphic misuse targeting young children to prompt action." Pinterest responded by making under-16 accounts private by default, wiping all followers of under-16 accounts (requiring teens to re-approve connections), restricting messaging to mutual followers, adding board-level reporting for the first time, dramatically increasing human content moderators, and requiring third-party ID verification if under-18 users attempt to change their birthday. These are meaningful reforms, but they came only after national media exposure.

Mental Health Impact: Pinterest has a mixed record. On the positive side, the platform has banned beauty filters (which distort self-image), banned all weight loss ads, and has explicit community guidelines prohibiting body shaming. Its CEO has publicly supported banning children under 16 from social media entirely. On the negative side, Pinterest has been a documented hub for pro-anorexia ("pro-ana") and "thinspiration" content since at least 2012. Despite banning search terms like "thinspo" and redirecting them to eating disorder helplines, users continuously create coded hashtags to circumvent filters. A UK senior coroner cited a 14-year-old's Pinterest board containing 469 images related to depression, self-harm, and suicide as a contributing factor in her death. The platform's highly visual, curated-image nature can reinforce idealized body standards even through non-explicit content like fashion boards and lifestyle imagery.

Content Exposure Risk: Pinterest's DSA Transparency Report (H2 2024) showed 12.4 million Pins removed in the EU alone for adult content violations, and 1.1 million flagged for self-injury or harmful behavior. Some content was "limited in distribution" rather than removed, meaning it could still appear in feeds. Pins link to external websites, so a child can click through to entirely unmoderated spaces. However, Pinterest's design as a search-and-save platform rather than a feed-driven one means users primarily encounter content through their own searches rather than an engagement-maximizing algorithm, which structurally reduces random exposure to harmful material.

Cyberbullying & Harassment: Pinterest is structurally less prone to cyberbullying than most social platforms. It is not built around public follower counts, comment-driven engagement, or viral content dynamics. Comments are turned off by default for all users under 18. The platform prohibits body shaming, manipulated images intended to degrade, and sexual remarks about people's bodies. Users can report, block, and limit who interacts with their content. Pinterest's transparency reports show relatively low harassment-related takedown volumes compared to other content categories.

Algorithmic Concerns: Pinterest's algorithm functions as a visual search and recommendation engine based on keywords, engagement metrics (saves, clicks), and long-term user behavior. Unlike TikTok or Instagram, Pinterest does not optimize for watch time or viral engagement. The CEO has stated the platform focuses on "positive outcomes" rather than view time optimization. There is no autoplay video feed, no engagement streaks, and no infinite scroll of algorithmically selected short-form content. However, the 2023 NBC investigation proved the recommendation engine could be exploited to surface images of children to predators. Pinterest modified its automated detection and moderation practices following the investigation.

Age Verification: Pinterest requires users to be at least 13, enforced by a self-reported birthdate at sign-up, the same weak standard used by most platforms. Post-2023 reforms added a meaningful layer: if under-18 users attempt to change their birthday to 18+, they must verify through a third-party service using government ID or birth certificate. Parents can set a 4-digit passcode to lock birthday changes and other key account settings. This is more aggressive than most platforms' age verification, though the initial sign-up gate remains self-reported.

Privacy & Data Collection: Pinterest collects browsing behavior, device information, location data, and inferred interests, which is used for targeted advertising and to train AI models. Data is shared with advertisers and analytics partners. For teens in the EU and UK, personalized ads are opted out by default. For US teens, protections are less explicit. ConductAtlas notes that "while Pinterest prohibits use by young children, the platform's age verification mechanisms are limited, meaning younger users may still access the service and have their data collected."

Parental Controls: Pre-2023, Pinterest had essentially no parental controls. Post-2023 reforms added parental passcode locks for key settings, private-by-default accounts for under-16, messaging restrictions to mutual followers, and third-party age verification for birthday changes. However, Pinterest does not offer a parental dashboard, screen time limits, or activity monitoring tools. Parents cannot see what their child is pinning or searching unless they physically look at the account. Qustodio's 2024 data report found Pinterest was the third most-used social media app by kids (by time spent), behind TikTok and Instagram.

Sources:

NBC News (2023). Investigation: How Pinterest drives men to little girls' images. Jesselyn Cook.
NBC News (2023). Senators seek answers from Pinterest after NBC News investigation.
U.S. Senators Blackburn & Blumenthal (2023). Letter to Pinterest CEO Bill Ready re: child exploitation on platform.
TechCrunch (2023). After an investigation exposes its dangers, Pinterest announces new safety tools and parental controls.
Pinterest Help Center (2025). Teen safety options. help.pinterest.com.
Pinterest Policy (2025). Transparency Report. policy.pinterest.com/transparency-report.
Pinterest Policy (2025). DSA Transparency Report, H2 2024.
Molly Rose Foundation (2025). How effectively do social networks moderate suicide and self-harm content? Analysis of DSA transparency data.
Protect Young Eyes (2023). Pinterest (Finally) Adds New Parental Controls.
Qustodio (2025). Is Pinterest safe for kids? An app safety guide for parents.
ConductAtlas (2025). Children's Privacy and Age Restrictions, Pinterest.
BBC News (2019). Self-harm, suicide and social media: coroner inquest re: 14-year-old's death.
HuffPost (2012). Pinterest Removes Eating Disorder-Related Content, Pro-Anorexia Community Continues To Thrive.
WebProNews (2026). Pinterest's CEO Wants to Ban Kids Under 16 From Social Media.


Facebook

Social Network
Min. Age: 13+

Facebook fails on all eight safety criteria. Despite recent Teen Account changes, the evidence documents ongoing predator risk, harmful content exposure, extensive data collection, and a platform design that prioritizes engagement over child safety.

Key Concerns

Data Harvesting · Predator Contact History · Engagement-Driven Algorithm · Cyberbullying · Self-Harm Content · Weak Age Gate
Full Research Details

Predator & Grooming Risk: Historically, Facebook allowed messaging between users who were not connected, meaning strangers could contact minors through Messenger or friend requests. Meta has since introduced stricter protections: teen accounts (ages 13-17) now receive restricted messaging settings by default, and users under 16 (or under 18 in some regions) can only receive messages from Facebook friends or people they are connected to through contacts. Despite these changes, investigations and lawsuits have documented significant risks involving adult-minor interactions on Meta platforms. Court filings from 2025 and 2026 allege that millions of interactions between adults and minors occurred on Meta platforms, some involving inappropriate contact, and that internal safety discussions referred to these as "inappropriate interactions with children." Prosecutors in a 2026 case argued that predators were able to interact with minors on Meta platforms even after new safety features were introduced. Meta has introduced private accounts by default for teens, AI detection of suspicious accounts, restrictions on direct messages, reporting and blocking tools, and automatic blurring of suspected explicit images in messages. These are meaningful protections, but the documented history and ongoing legal proceedings indicate the risk has not been fully resolved.

Mental Health Impact: A peer-reviewed study found that adolescents with anxiety disorders described Facebook as increasing anxiety through approval-seeking, fear of negative comments, privacy worries, and interpersonal conflict, providing direct evidence linking Facebook use to negative mental health experiences in teens (Calancie et al., 2017, Cyberpsychology). The U.S. Surgeon General's 2023 Advisory stated that social media poses a "meaningful risk of harm" to youth mental health and noted that more than 3 hours per day of use is associated with double the risk of depression and anxiety symptoms. A public repository of Meta's internal research indicates the company had evidence its products were likely harming young people's mental health, especially through harmful social comparison and body-image pathways. While the strongest internal findings relate to Instagram, the evidence supports concern about Meta platforms generally. Facebook's feed is designed as a "constantly updating list of stories," consistent with an endless-scroll engagement model that researchers and pediatric groups warn can prolong use and crowd out sleep, exercise, and in-person activity. The AAP's 2026 policy statement says digital ecosystems that prioritize engagement and commercialization can encourage prolonged use, displace healthy behaviors, and contribute to negative outcomes.

Content Exposure Risk: Minors can encounter harmful content on Facebook through normal use, although Meta has added stronger restrictions for teen accounts. Meta itself announced in January 2024 that it needed to start hiding more content about suicide, self-harm, and eating disorders from teens on Facebook and to place teen accounts on the most restrictive content settings, a strong indication that teens were still able to encounter this material under ordinary use before those changes. Teen Facebook accounts are now automatically placed into the most restrictive recommendation setting ("Reduce"), and Meta hides more search results related to suicide, self-harm, and eating disorders while redirecting users to expert resources. AP reporting described Meta's 2024 change as a move to stop showing teens posts about suicide, self-harm, and eating disorders even from accounts they follow, meaning harmful material could appear in a teen's normal feed through ordinary social connections, not only through deliberate searching. The APA notes that harmful content including encouragement of self-harm, eating-disordered behavior, cyberhate, and depictions of illegal behavior is associated with worse outcomes for young people. Meta's 2025 child-safety updates extended stricter protections to some child-focused accounts, including stronger message controls and offensive-comment filtering, and reported that nudity-protection tools reduced exposure to unwanted nude images in DMs, indicating ongoing risk from sexualized content even as new protections are added.

Cyberbullying & Harassment: Facebook provides reporting tools (for posts, messages, profiles, and comments), blocking features, comment moderation, and a Restrict feature that limits interaction without notifying the other user. Facebook's Community Standards prohibit content targeting minors with harassment, threats, or degrading language, coordinated harassment campaigns, and repeated unwanted contact. Stricter protections apply when the victim is a minor. Despite these measures, research from the Cyberbullying Research Center documents that many adolescents report experiencing harassment through Facebook messages, posts, or comments. Common forms of bullying on the platform include harassing comments on posts, spreading rumors through status updates, sharing embarrassing photos or videos, creating fake accounts or pages targeting victims, and excluding individuals from group chats or online communities. Facebook moderation involves AI detection systems, human moderators reviewing reported content, and enforcement actions including content removal, account warnings, or suspension. However, harmful content may still be viewed before moderation removes it. The platform has structured moderation policies and reporting tools, but research consistently shows that cyberbullying still occurs and moderation systems cannot completely prevent harassment.

Algorithmic Concerns: Facebook uses algorithms to determine which posts appear in a user's News Feed, prioritizing content based on engagement signals, user interests, and past interactions. Meta's ranking system predicts which posts users are most likely to interact with and places those higher in the feed. Reporting based on internal Meta documents released during the Facebook Papers investigation indicated that the algorithm sometimes prioritized highly engaging but polarizing material, creating "rabbit hole" effects where repeated interaction with certain topics leads to increasingly similar or extreme recommendations (Haugen, 2021). Meta states it has implemented safeguards to reduce the spread of harmful content through recommendations, including lowering the distribution of misinformation, removing content that violates Community Standards, and offering feed-control tools. Facebook provides user controls such as Favorites, Unfollow, and Snooze that allow individuals to prioritize certain accounts or hide unwanted content. The algorithm is designed to increase engagement, and research suggests this engagement-based ranking system can reinforce viewpoints and amplify sensational content, even as the platform attempts to mitigate these risks.

Age Verification: Facebook requires users to be at least 13 years old to create an account, enforced by a birthdate entry during sign-up. If the age entered is below the minimum, account creation is blocked. This age limit exists partly because of COPPA, which restricts companies from collecting personal information from children under 13 without parental consent. However, Facebook primarily relies on self-reported age rather than mandatory ID verification. Research shows that children can create accounts simply by entering an older age during registration. One study found that 76% of surveyed parents reported their child had joined Facebook before age 13, despite the platform's rules. Facebook reports removing thousands of underage accounts daily when discovered and has developed AI systems intended to identify underage accounts, but the fundamental age gate remains a self-declaration system that any child can bypass.

Privacy & Data Collection: Facebook/Meta collects substantial user data, including for teens. Meta's 2025 expansion of Teen Accounts to Facebook and Messenger introduced more private default settings and tighter controls, but does not stop data collection; these accounts are a more restricted version of the service. In 2023, the FTC proposed tightening its existing order against Facebook/Meta and specifically proposed a blanket prohibition on monetizing data from children and teens under 18 across Facebook, Instagram, WhatsApp, and Oculus, following the FTC's claim that Facebook had repeatedly violated prior privacy commitments. The FTC also alleged Facebook misled parents about privacy protections in Messenger Kids and about app developers' access to children's private data. Meta's general teen-protection materials focus on safety settings, contact limits, and privacy defaults rather than promising no collection of location or behavioral data, meaning the platform still operates within Meta's broader data-collection ecosystem. COPPA requires parental control over collection of children's personal information under 13, but Facebook's main age gate remains based on self-reported age.

Parental Controls: Facebook provides parental supervision tools through the Meta Family Center and Messenger supervision features. Parents can see insights such as how much time their teen spends on the app, contact list changes, privacy settings, and blocked accounts, and can schedule breaks or monitor usage patterns. However, the supervision tools are optional and require agreement from both the teen and the parent, meaning a teenager can remove or disable supervision at any time. Recent updates added Teen Accounts, which automatically apply stronger privacy defaults and safety settings including restrictions on who can message teens and reminders to limit screen time. These tools primarily offer monitoring and guidance rather than strict control, meaning parents cannot fully control what content appears in a teen's feed or all interactions on the platform. Independent reporting notes that many features must be manually activated and some require teens to opt in, raising questions about real-world effectiveness. The tools provide helpful insight but their effectiveness depends heavily on teen participation and parental involvement.

Sources:

Calancie, O., Ewing, L., Narducci, L. D., Horgan, S., & Khalid-Khan, S. (2017). Exploring how social networking sites impact youth with anxiety. Cyberpsychology.
U.S. Surgeon General (2023). Social Media and Youth Mental Health Advisory.
Meta Internal Research Repository (2023). Meta Internal Research on Youth Mental Health.
American Academy of Pediatrics (2026). Digital Ecosystems, Children, and Adolescents: Policy Statement.
Meta Platforms, Inc. (2024). Teen Protections and Age-Appropriate Experiences on Our Apps.
Associated Press (2024). Meta to Hide More Self-Harm Content from Teens on Facebook and Instagram.
American Psychological Association (2024). Health Advisory on Social Media Use in Adolescence.
Meta Platforms, Inc. (2025). Expanding Teen Account Protections and Child Safety Features.
Meta Platforms, Inc. (2025). Expanding Teen Accounts to Facebook and Messenger with New Protections.
Meta Platforms, Inc. (2024). Introducing Stricter Message Settings for Teens on Instagram and Facebook.
Time Magazine (2025). Court Filings Allege Meta Downplayed Risks to Children and Misled the Public.
The Guardian (2026). Mark Zuckerberg Says Criminal Behavior on Facebook Is Inevitable.
Meta Safety Center (2021). Bullying and Harassment Policies.
Meta Transparency Report (2025). Community Standards enforcement data.
Cyberbullying Research Center (Patchin & Hinduja, 2025). Summary of Cyberbullying Research (2007-2025).
Haugen, F. (2021). The Facebook Papers. Wall Street Journal investigation.
Federal Trade Commission (2023). Proposed Blanket Prohibition Preventing Facebook from Monetizing Youth Data.
CBS News/AP (2023). FTC Says Facebook Failed to Protect Children's Privacy.
Meta Platforms, Inc. (2023). Giving Teens and Parents More Ways to Manage Their Time on Our Apps.
Meta Platforms, Inc. (n.d.). What Is Supervision on Messenger?


Discord

Chat & Community
Min. Age: 13+

Discord's private servers, open DMs, and minimal moderation have made it a documented pipeline for child predators. Multiple lawsuits, state AG actions, and criminal cases link the platform to grooming, sextortion, abduction, and sexual assault of minors.

Key Concerns

Predator Pipeline · Grooming & Sextortion · Unmoderated Servers · Deceptive Safety Claims · Open DMs to Strangers · NCOSE Dirty Dozen
Full Research Details

Predator & Grooming Risk: Discord has been directly linked to dozens of criminal cases involving the grooming, sextortion, abduction, and sexual assault of minors. A June 2023 NBC News investigation reviewed court records and found the platform at the center of dozens of criminal cases involving children. Predators frequently meet children on gaming platforms like Roblox or Fortnite and then move them to Discord for private, unmonitored communication. In one case, a 29-year-old man used Discord to advise a 12-year-old girl he met on the platform to kill her parents, then told her he would pick her up as his "slave." He was sentenced to 27 years in prison. In January 2025, a man was sentenced to 32 years for luring three boys through Discord. In July 2025, two 14-year-old girls filed lawsuits alleging Discord and Roblox served as a "hunting ground for child-sex predators," with one case involving an attempted rape and the other a completed sexual assault. In August 2025, a lawsuit was filed after a 10-year-old was abducted from her home by a 27-year-old who contacted her through Roblox and groomed her on Discord. In September 2025, a mother filed a wrongful death suit after her son died by suicide following sextortion on the platforms. The Canadian Centre for Child Protection reports an increase in luring reports involving Discord, noting predators are attracted by its high concentration of young users and private, closed-off environment. Florida's Attorney General stated in 2026 that "many of our criminal investigations into internet child predators lead to one place: Discord."

Cyberbullying & Harassment: Discord's server-based structure, voice chat features, and anonymous account creation create conditions where bullying and harassment thrive. A Wall Street Journal investigation found that within 15 minutes of using the platform, reporters encountered racial slurs, sexist comments, and explicit content without specifically searching for it. The platform's gaming-culture roots have normalized aggressive and toxic communication styles. Discord delegates moderation largely to volunteer server moderators, who receive no required training from the platform. NCMEC has documented violent online groups operating on Discord that encourage children to harm themselves and others, including cutting, creating CSAM, harming animals, and taking their own lives. In 2024, NCMEC received over 1,300 reports with a nexus to violent online groups, a more than 200% increase over the prior year, with Discord identified as a primary platform.

Mental Health Impact: Discord's closed-server model and voice chat features create tight-knit communities, but this same intimacy can be weaponized. Violent online groups on Discord have been documented coercing children into self-harm, including cutting perpetrators' usernames into their skin while streaming live. NCMEC reports that these groups encourage children to create CSAM, exploit siblings, harm animals, and attempt suicide. The secretive nature of private servers means children can be drawn into harmful communities without parents' awareness. A wrongful death lawsuit filed in September 2025 alleged that sextortion facilitated through Discord contributed directly to a teen's suicide. The platform's design, which encourages long hours of voice chat and deep social investment in server communities, can intensify social pressure and emotional dependency.

Content Exposure Risk: NBC News identified 242 publicly listed Discord servers created in a single month that appeared to market sexually explicit content involving minors, using thinly veiled terms referring to child sexual abuse material. At least 15 communities directly appealed to teens by claiming to be sexual communities for minors. Despite Discord's ban on pornography in non-age-gated spaces, NBC News encountered pornographic content while reviewing profiles of men who followed young girls. Discord's transparency data shows over 33,000 servers were removed in a reporting period, with more than half related to child safety violations. Over 153,000 accounts were disabled for policy violations including child safety and exploitative content. The New Jersey AG's lawsuit alleged that Discord's "Safe Direct Messaging" feature, which claimed to scan and delete explicit content, did not work as advertised: by default, DMs between "friends" were not scanned at all, and even when enabled, children were still exposed to CSAM, violence, and other harmful content.

Algorithmic Concerns: Discord does not use a content-recommendation algorithm in the same way as feed-driven platforms like TikTok or Instagram. Users join servers by invitation or by searching for communities. However, Discord's server discovery feature and third-party server listing websites make it easy for anyone to find and join communities, including those hosting harmful content. The platform's recommendation of "similar servers" can lead users from innocuous gaming communities to progressively less moderated spaces. The lack of algorithmic content surfacing means Discord's risk is less about what gets pushed to users and more about the ease of direct, private, unmonitored contact between adults and children.

Age Verification: Until 2026, Discord's age verification was a simple birthdate entry at sign-up, easily bypassed by any child entering a false date. The New Jersey AG's lawsuit specifically alleged that Discord "actively chose not to" verify users' ages, enabling children under 13 to register freely and allowing banned users (including those who had circulated CSAM) to create new accounts with a different email. In early 2024, the U.S. Senate Judiciary Committee used U.S. Marshals to serve a subpoena to Discord's CEO after he reportedly refused to testify voluntarily about child safety. In February 2026, Discord announced "teen-by-default" settings requiring facial age estimation or government ID to access age-restricted content. However, the global rollout was delayed to H2 2026 after user backlash, and a September 2025 data breach exposed approximately 70,000 users' government ID photos from a third-party vendor, raising serious concerns about the safety of ID-based verification.

Privacy & Data Collection: Discord collects user data including messages, voice metadata, device information, and behavioral patterns. The platform's privacy policy allows data use for service improvement and personalization. The September 2025 data breach, which exposed government ID photos submitted for age verification, highlighted the risks of collecting sensitive identity documents. Discord states it does not sell user data or use age assurance information for advertising, but the breach demonstrated that even privacy-forward verification systems carry real risks when third-party vendors are involved.

Parental Controls: Discord launched its "Family Center" in 2025, allowing parents to view an activity feed showing who their teen messages, new friends added, and servers joined. Parents can also manage select safety and privacy settings. However, the system requires the teen to voluntarily opt in by sharing a QR code, and the teen can disconnect at any time. Parents cannot read message content. Discord's "Teen Safety Assist" sends alerts when teens receive DMs from first-time senders, but this cannot substitute for structural protections. The National Center on Sexual Exploitation (NCOSE) placed Discord on its Dirty Dozen List for three consecutive years before the platform made meaningful safety changes. Protect Young Eyes notes that Discord's parental controls came only after sustained public pressure and rates the platform appropriate for ages 17+, consistent with its App Store rating.

Sources:

NBC News (2023). Discord servers used in child abductions, crime rings, sextortion. Jesselyn Cook.
NBC News (2025). N.J. attorney general sues Discord messaging app over child predator concerns.
New Jersey Office of the Attorney General (2025). Complaint against Discord Inc., ESX-C-000084-25.
CNBC (2025). Discord sued by New Jersey over child safety features.
Anapol Weiss (2025). Roblox and Discord Sued After Alabama Teen Groomed and Assaulted by Predator.
Levy Law (2026). Holding Discord Accountable for Online Abuse. Case compilation.
Helping Survivors (2025). Lawsuits Filed Against Roblox and Discord Over Child Exploitation Claims.
Tampa Free Press (2026). Florida Puts Discord In The Crosshairs Over "Predator Pipeline" Concerns.
NCMEC (2025). 2024 CyberTipline Data Report.
NCOSE. Dirty Dozen List, 2022-2024. National Center on Sexual Exploitation.
Discord (2026). Discord Launches Teen-by-Default Settings Globally. Press release.
Discord (2025). Family Center for Parents and Guardians. Support documentation.
Protect Young Eyes (2025). Discord Parental Controls Review.
Kinzoo (2026). Is Discord Safe for Kids in 2026? Age verification analysis.
HSToday (2025). Surge in Online Crimes Against Children. NCMEC violent online groups data.


WhatsApp

Encrypted Messaging
Min. Age: 13+

WhatsApp's end-to-end encryption protects privacy but also prevents any content monitoring, making it impossible for parents or the platform itself to detect grooming, CSAM, or exploitation in messages. NCOSE named it a primary place for grooming and sextortion.

Key Concerns

No Content Monitoring · Grooming & Sextortion · Disappearing Messages · Stranger Contact via Phone # · Zero Parental Controls · NCOSE Dirty Dozen
Full Research Details

Predator & Grooming Risk: The National Center on Sexual Exploitation (NCOSE) placed WhatsApp on its 2022 Dirty Dozen List, labeling it a "primary place for grooming, sextortion, child sexual abuse materials, and sex trafficking." Predators frequently meet children on gaming platforms, social media, or other apps and then move conversations to WhatsApp, where end-to-end encryption ensures no one, including the platform, can monitor the exchange. Because WhatsApp only requires a phone number to connect, anyone who obtains a child's number can contact them directly. By default, anyone can add a child to group chats with hundreds of strangers. WhatsApp's disappearing messages feature (which auto-deletes after 24 hours, 7 days, or 90 days) gives children a false sense that intimate photos or messages will vanish, encouraging riskier sharing. And because WhatsApp runs over the internet rather than carrier phone service, predators can communicate with children across international lines without creating the call or text records parents might check.

Cyberbullying & Harassment: WhatsApp group chats, which can include up to 1,024 participants, are commonly used by school-age children for social coordination. These groups can become vectors for exclusion, rumors, and targeted harassment. Because messages are encrypted and groups can be created by anyone, there is no platform-level moderation of bullying within chats. The "last seen" and "online" status features create social pressure to respond immediately, contributing to anxiety and a feeling of being constantly monitored by peers. Read receipts (blue check marks) showing when a message has been read can intensify social dynamics around responsiveness and exclusion.

Mental Health Impact: WhatsApp's always-on messaging culture can disrupt sleep patterns and create anxiety around constant connectivity. The platform's group dynamics, particularly among school-age children, can amplify social exclusion and peer pressure. However, WhatsApp does not use algorithmic content feeds, engagement-maximizing features, or appearance-focused design elements. It does not promote social comparison through likes, followers, or public metrics. As a messaging tool rather than a social media platform, its mental health risks are more indirect, stemming from the nature of peer communication rather than platform design choices.

Content Exposure Risk: End-to-end encryption means WhatsApp cannot scan, filter, or moderate any content shared in messages. Children can receive explicit images, violent content, CSAM, and other harmful material with no platform-level filtering. Meta's implementation of default E2EE on WhatsApp and Facebook Messenger was directly cited by NCMEC as a major contributing factor in the 19% decline in CyberTipline reports in 2024, with approximately 7 million fewer exploitation incidents reported. NCMEC warned that this decline does not mean less exploitation is occurring, but that encryption prevents detection. WhatsApp Communities, whose invite links are often promoted publicly online, can expose children to groups run by strangers with no verification of who is participating.
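
For parents wondering why neither WhatsApp nor any monitoring app can read these messages, here is a minimal sketch of the principle (Python, using the cryptography package's Fernet recipe as a simplified stand-in; WhatsApp actually uses the Signal protocol, in which the two phones negotiate keys directly). The decryption key exists only on the two devices, so the server in the middle relays bytes it cannot interpret:

```python
from cryptography.fernet import Fernet

# In real end-to-end encryption, the two phones negotiate this key between
# themselves (WhatsApp uses the Signal protocol); the server never holds it.
shared_key = Fernet.generate_key()

# The sender's phone encrypts before anything leaves the device.
ciphertext = Fernet(shared_key).encrypt(b"meet me after school")

# The platform's server relays this -- and this is all it can see:
print(ciphertext)  # opaque bytes; no scanning or filtering is possible here

# Only the recipient's phone, holding the key, can recover the message.
print(Fernet(shared_key).decrypt(ciphertext))
```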

Algorithmic Concerns: WhatsApp does not use a content recommendation algorithm. Users only see messages from contacts or groups they have joined. There is no feed, no trending content, and no algorithmically suggested communities. This is a meaningful structural advantage over platforms like TikTok, Instagram, or YouTube. The risk on WhatsApp comes from direct human contact, not from algorithmic amplification of harmful content.

Age Verification: WhatsApp's minimum age was lowered from 16 to 13 in the UK in 2024. In the US and most countries, the minimum age is 13. There is no age verification process whatsoever. Users simply enter their phone number and begin using the service. No birthdate is required, no ID is checked, and no age-gating exists. A child with a phone number can create and use an account with no barriers. This is weaker than even the self-reported birthdate entry used by most social media platforms.

Privacy & Data Collection: WhatsApp's end-to-end encryption is a genuine privacy strength, ensuring message content cannot be read by Meta, governments, or hackers intercepting data in transit. However, WhatsApp still collects metadata including who contacts whom, when, how often, device information, and location data. This metadata is shared with Meta's broader ecosystem. WhatsApp's privacy policy allows sharing of account information with Facebook/Meta for advertising and business purposes, which has drawn regulatory scrutiny in the EU.

Parental Controls: WhatsApp has essentially no built-in parental controls. There is no Family Center, no activity monitoring dashboard, no screen time limits, and no way for parents to restrict who contacts their child without physically accessing the child's device. In 2024, WhatsApp introduced "parent-managed accounts" in some regions that allow parents to restrict who can contact their child and prevent strangers from adding them to groups, but these require manual setup and can be changed by the child. The end-to-end encryption that is WhatsApp's core feature also means parents cannot monitor message content through any means, including third-party monitoring apps. Multiple child safety organizations note that this combination of zero parental controls and unmonitorable encrypted messaging makes WhatsApp unsuitable for children without direct supervision.

Sources:

NCOSE (2022). Dirty Dozen List: WhatsApp. National Center on Sexual Exploitation.
NCMEC (2025). 2024 CyberTipline Data Report. Impact of E2EE on reporting volumes.
Internet Matters (2024). What is WhatsApp? A safety guide for parents.
Qustodio (2025). Is WhatsApp safe for kids? An app safety guide for parents.
BrightCanary (2025). Is WhatsApp Safe? What Parents Should Know.
Bitdefender (2025). What Parents Need to Know: Is WhatsApp Safe for Children?
Gabb (2024). Is WhatsApp Safe for Kids? How Predators Find Victims on the App.
Bravehearts (2025). Online risks, child exploitation & grooming statistics. Top platforms for minor online sexual experiences.
WhatsApp Help Center (2025). About end-to-end encryption.


Reddit

Forums & Communities
Min. Age: 13+

Reddit hosts widespread pornographic and violent content behind a single "I'm over 18" click, offers zero parental controls, grants full anonymity to all users, and relies on volunteer moderators with no required training. The platform is rated 17+ in app stores for good reason.

Key Concerns

Pervasive NSFW Content · One-Click Age Bypass · Full User Anonymity · Zero Parental Controls · Predator DMs · Volunteer-Only Moderation
Full Research Details

Predator & Grooming Risk: Reddit's anonymous account structure and direct messaging system create conditions where predators can operate with minimal risk of identification. Users can create accounts with no identity verification and contact any other user through DMs. Predators exploit Reddit's anonymity to build rapport with children who post personal details like birthdays, hometowns, or school information in public threads. Children posting in communities like r/teenagers or hobby-focused subreddits can inadvertently reveal identifying information that bad actors piece together. Reddit's policy prohibits sexual content involving minors and the platform reports CSAM to NCMEC using industry-standard hash-matching tools, but the sheer volume of user-generated content across 100,000+ communities and the reliance on volunteer moderators means enforcement is inconsistent. Multiple child safety organizations note that Reddit's DM system, combined with full anonymity, makes it a risk for predatory contact.
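
For context on the "industry-standard hash-matching tools" mentioned above: known abuse images are reduced to digital signatures, and new uploads are checked against that signature list. The sketch below (Python) shows the idea using exact SHA-256 hashes; production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, which an exact cryptographic hash does not:

```python
import hashlib

# Signature list of known prohibited files. In real deployments these come
# from clearinghouses like NCMEC; this example hash is made up.
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known(upload: bytes) -> bool:
    """Hash the uploaded file and check it against the known-signature list."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

# Each upload is checked before, or as, it is published.
print(matches_known(b"example upload bytes"))  # False for this harmless file
```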

Cyberbullying & Harassment: Reddit's anonymous posting culture enables trolling, harassment, and cyberbullying with little accountability. Users can create throwaway accounts specifically to harass others. Subreddit moderation varies wildly, with some communities actively policed and others essentially unmoderated. While Reddit's content policy prohibits harassment and hate speech, enforcement depends almost entirely on volunteer moderators who set and enforce their own rules. Smaller and non-English-language communities are particularly prone to unchecked harmful behavior. The platform's upvote/downvote system can amplify pile-on behavior, where a community collectively targets an individual. Reddit's anonymity, which is valuable for adults discussing sensitive topics, becomes a liability for minors who may not recognize manipulation or hostile dynamics.

Mental Health Impact: Reddit communities can be both helpful and harmful for teen mental health. Subreddits like r/Anxiety and r/depression provide genuine peer support, and communities around niche interests can foster belonging. However, Reddit also hosts communities that normalize self-harm, promote eating disorders, spread extreme ideologies, and amplify negative thinking. The platform's recommendation system can surface increasingly niche or extreme communities as users engage with related content. Reddit's endless scrolling design and the dopamine cycle of posting and receiving upvotes can contribute to compulsive use. The lack of any screen time tools or usage limits means there are no built-in protections against excessive use.

Content Exposure Risk: This is Reddit's most glaring failure for child safety. The platform hosts extensive pornographic, violent, and graphic content across thousands of NSFW subreddits. Practically any adult content imaginable is permitted under Reddit's policies, including subreddits containing graphic sexual imagery, gore, and extreme violence. The only barrier between a child and this content is a single "Yes, I'm over 18" button click. NSFW filters can be toggled off in account settings with no verification. Even with NSFW filters enabled, content that has not yet been labeled can appear in feeds and search results. Safe Kids Online rates Reddit's safety as among the worst due to "hosting of explicit content and almost nonexistent parental control features." Multiple safety reviewers report finding explicit content within minutes of creating an account, even without specifically searching for it. In 2025, Reddit began implementing third-party age verification for mature content in regions with age verification laws (like the UK), but this remains limited in scope.

Algorithmic Concerns: Reddit uses algorithms to surface content on its home feed, r/all, and in search results. While the primary ranking mechanism is the community upvote/downvote system, Reddit's recommendation engine suggests communities and content based on user behavior. This can lead teens from innocuous interest-based communities to progressively more extreme or inappropriate spaces. Reddit does not use the aggressive engagement-maximizing algorithms of TikTok or Instagram, but its recommendation of "similar communities" can function as a pipeline to harmful content. The platform gives users some control over their feed through subreddit subscriptions, but the r/all feed and suggested communities can surface unexpected content.

Age Verification: Reddit requires users to be 13 or older. Age verification consists of nothing more than creating an account, which requires only an email address. No birthdate is requested, no age is self-declared during sign-up, and no verification of any kind occurs. To access NSFW content, users click a single "Yes, I'm over 18" confirmation, which can be bypassed instantly. Reddit is rated 17+ in the Apple App Store and carries a parental guidance rating in the Google Play Store, indicating even the app stores recognize the platform is not appropriate for younger teens. In regions with new age verification laws, Reddit has begun requiring government ID or facial scans for NSFW content access, but this is not yet universal.

Privacy & Data Collection: Reddit collects user data including browsing behavior, device information, IP addresses, and interaction patterns. The platform uses this data for advertising and content personalization. Reddit's anonymity is a double-edged sword: while users don't need to provide real names, the platform still collects substantial behavioral data. Children who casually share personal information in public threads (ages, locations, school details) may not realize this information is permanently indexed and searchable. Reddit content frequently appears in Google search results, meaning a child's post can be discoverable long after it was written.

Parental Controls: Reddit has no built-in parental controls. There is no Family Center, no activity monitoring, no screen time limits, and no way for parents to manage their child's experience on the platform. Privacy and content settings exist (such as toggling off NSFW content, restricting DMs, and hiding profile activity), but these can be changed by the child at any time with a single click. Parents cannot link accounts, receive activity reports, or restrict access to specific communities. Multiple child safety organizations recommend third-party parental control apps as the only meaningful way to manage a child's Reddit use, with some recommending parents block the platform entirely for younger teens.

Sources:

Reddit Help Center (2025). How does Reddit fight Child Sexual Exploitation?
Reddit Help Center (2025). Do not share sexual or suggestive content involving minors.
Qustodio (2025). Is Reddit safe for kids? A parent's guide to the front page of the internet.
Mobicip (2025). How Safe Is Reddit for Children? Safety Tips for Parents.
Internet Matters (2025). What is Reddit? What parents need to know.
Safe Kids Online / Heritage Foundation (2024). How to Protect Kids on Reddit. Safety rating analysis.
Safety Detectives (2025). Is Reddit Safe for Kids? Protect Them in 2026.
Gabb (2023). Is Reddit Safe? The Uses and Risks of Reddit.
Game Quitters (2025). How to Set Up Reddit Parental Controls.
Built In (2025). Age Verification Is Taking Over the Internet. Reddit age-gating analysis.
Safe Search Kids (2024). Is Reddit Safe for Kids? Platform safety assessment.


X (Twitter)

Microblogging & News
Min. Age: 13+

X officially permits pornography under a labeling system any child can bypass, disbanded its Trust and Safety Council, lost its primary CSAM detection partner after failing to pay invoices, and saw hate speech rise approximately 50% after the ownership change. The platform is rated 17+ in app stores but allows accounts at age 13.

Key Concerns

Officially Permits Pornography · CSAM Detection Partner Lost · Trust & Safety Council Disbanded · 50% Hate Speech Increase · Grok AI Generated CSAM · No Parental Controls
Full Research Details

Content Exposure Risk: In June 2024, X formally updated its policies to officially permit pornographic content on the platform, stating that users may share "consensually produced and distributed adult nudity or sexual behavior" as long as it is labeled. X describes pornography as a "legitimate form of artistic expression." X's own Terms of Service state that the platform "may not monitor or control the content posted, generated, inputted, or created," placing responsibility for content exposure squarely on users rather than the platform. The platform relies on content labels and birthdate-based age gates to prevent minors from viewing this material, but since X allows accounts at age 13 and age verification is entirely self-reported, any child can access the platform. Safe Kids Online rates X poorly, noting that sexually explicit content is "actively promoted to users via the 'For You' page." The platform is rated 17+ in both the Apple App Store and Google Play Store, yet its minimum account age remains 13, creating a four-year gap where teens can access a platform the stores themselves deem inappropriate for their age. X also updated its violent content policy in 2025 to allow graphic content under a labeling system. In January 2026, Elon Musk's Grok AI chatbot, integrated directly into X, generated sexualized images of children in response to user prompts. The UK-based Internet Watch Foundation reported that dark web users were sharing criminal imagery of minor girls created using Grok. This occurred on a platform that officially permits adult content and is accessible to 13-year-olds.

Predator & Grooming Risk: X has faced persistent problems with child sexual abuse material (CSAM) distribution. A June 2025 NBC News investigation found that accounts peddling CSAM had flooded X hashtags, with what was previously a trickle of posts becoming "a torrent propelled by accounts that appear to be automated, some posting several times a minute." NBC reported the problem was "worse than when Musk initially took over." The Canadian Centre for Child Protection reviewed accounts flagged by NBC and within minutes identified images of previously identified CSAM victims as young as 7. The investigation also revealed that Thorn, the nonprofit that provided X's primary CSAM detection technology, terminated its contract after X stopped paying invoices. Without Thorn, it is unclear what child safety mechanisms X currently employs. In 2023, the 9th Circuit ruled that X must face claims regarding failures in its child sexual abuse reporting mechanisms (Bloomberg Law). In a documented 2023 Indiana case, an adult male was arrested after using X/Twitter DMs to exchange CSAM, with Cash App payments linked to the exploitation (Fox59). X states it suspended 12.4 million accounts for child sexual exploitation violations in 2023 and sent 850,000 reports to NCMEC, but the platform's own Grok AI generated sexualized images of minors in January 2026, and the CSAM distribution problem continued to worsen through mid-2025. Before Musk's acquisition, a lawsuit documented that Twitter was informed of CSAM depicting a minor victim who provided government-issued ID proving he was 16, and Twitter responded that the content "does not violate our terms," removing it only after Homeland Security intervened.

Cyberbullying & Harassment: X's approach to content moderation shifted dramatically after Elon Musk's acquisition. In December 2022, the Trust and Safety Council, an advisory group of approximately 100 independent civil, human rights, and child protection organizations formed in 2016, was disbanded via email less than an hour before a scheduled meeting. This council had advised on hate speech, child exploitation, suicide, self-harm, and harassment policies. A peer-reviewed study published in PLOS ONE found that hate speech on X increased approximately 50% following Musk's acquisition and persisted at elevated levels through at least mid-2023, spanning racism, homophobia, and transphobia. A 2024 UPI report found that 17% of teens reported being bullied online about their weight, with 69% of those complaints coming from teens who use the X platform. Musk cut approximately 80% of Twitter's staff, including trust and safety team members. The head of trust and safety, Ella Irwin, resigned in June 2023 after Musk overrode a moderation decision. X also faces a federal lawsuit alleging it distributed child sexual abuse material, with the 9th Circuit ruling that the platform must face claims of critical failures in its moderation and reporting mechanisms (Bloomberg Law, 2023). In the first half of 2024, X suspended 1 million accounts for abuse, harassment, and hateful content and removed 2.2 million pieces of content, but the platform's reliance on automation over human review and the elimination of external advisory oversight raise questions about the consistency and quality of enforcement. A European Commission study found that disinformation was most prevalent and received the highest relative engagement on X compared to other major social networks.

Mental Health Impact: X's design includes algorithmic content recommendation through its "For You" feed, which surfaces content based on engagement signals rather than the user's chosen follows. This engagement-driven model can expose teens to political extremism, hate speech, misinformation, and graphic content that the algorithm determines will generate interaction. The 50% increase in hate speech documented after the ownership change means teens on the platform are exposed to substantially more hostile content than before. X's new XChat feature, announced in June 2025, introduced disappearing messages similar to Snapchat, which child safety experts note can make it harder to collect evidence of cyberbullying and can encourage riskier behavior. The platform's broader shift toward fewer content restrictions, including officially permitting pornography and violent content, creates an environment where teens encounter material that major child health organizations have linked to negative mental health outcomes including anxiety, depression, and distorted views of relationships and sexuality.

Algorithmic Concerns: X uses an engagement-driven algorithm to populate its "For You" feed, which surfaces content from accounts users do not follow based on predicted engagement. Unlike a chronological timeline, this algorithmic feed can expose users to increasingly extreme or sensational content because outrage and controversy generate high engagement metrics. After Musk's acquisition, previously banned accounts, including those associated with white nationalism and political extremism, were reinstated. X now relies on Community Notes, its crowdsourced fact-checking system, as its primary tool against misinformation, but researchers have found that notes are frequently delayed and that false posts spread widely before corrections appear. The algorithmic promotion of content that generates strong reactions, combined with reduced content moderation oversight, creates conditions where teens can be pushed toward progressively more extreme material through normal platform use.

Age Verification: X requires users to be at least 13 years old to create an account. Age verification during sign-up consists of entering a birthdate, which any child can falsify. X does not employ moderators or AI specifically searching for accounts that misrepresent their age, meaning a juvenile can create an account without providing their real age and face no proactive detection. The platform is rated 17+ in the Apple App Store and Google Play Store, a rating driven by the presence of adult content and user-generated risks. In the UK, EU, and Australia, X has been required to implement age verification for access to sensitive content under new online safety laws beginning in mid-2025, but in the United States, no such verification exists. As of March 2026, X's age verification methods for free accounts in regulated regions are still being rolled out, and only X Premium subscribers have access to manual age-estimation options such as ID or selfie submission. In the US, a 13-year-old can create an account and access the platform with no meaningful barriers.

Privacy & Data Collection: X collects user data including profile information, posts, direct messages, device information, IP addresses, and behavioral patterns. The platform uses this data for advertising and content personalization. Data sharing is automatically turned on when a user creates an account, requiring the user to manually opt out in privacy settings. X's Terms of Service state the platform will not collect information from persons under 13 years of age in compliance with COPPA, but since age verification is entirely self-reported, this protection is only as strong as the child's honesty during sign-up. X's privacy policy permits data sharing with third parties for advertising purposes. Following the ownership change, X's data practices have drawn regulatory scrutiny, particularly in the EU under the Digital Services Act. The Grok AI chatbot, integrated into X, processes user interactions and content on the platform, raising additional questions about how data from minor accounts is used in AI training and response generation. X does not offer differentiated privacy protections for teen accounts comparable to those introduced by Meta, TikTok, or Snapchat.

Parental Controls: X has no meaningful built-in parental controls. The platform's help center links to parental control settings, but these require the parent to be in a supported country, have their own X account, and submit a form to activate settings, which cannot be done from the minor's account. Even when activated, the controls cannot restrict a child from viewing harmful material; they can only restrict which accounts the child views content from (specifically blocking content from accounts the child does not follow). A sensitive-content warning setting exists, but it must be configured manually and the child can change it at any time without parental knowledge. There is no family center, no supervised account option, no activity monitoring, and no screen time limits. Multiple child safety organizations, including Qustodio, Internet Matters, and Safe Kids Online, note that X does not offer supervised parental controls comparable to those found on TikTok, Instagram, or Snapchat. Qustodio explicitly states that X "is not a safe place for children" and does not recommend it for anyone under 17.

Sources:

X Help Center (2024). Adult Content Policy.
Variety (2024). X (Formerly Twitter) Officially Allows Porn Under Updated Policy.
TechCrunch (2024). X tweaks rules to formally allow adult content.
CNBC (2026). Musk's xAI faces backlash after Grok generates sexualized images of children on X.
NBC News (2025). Accounts peddling child abuse content flood some X hashtags as safety partner Thorn cuts ties.
Techdirt (2025). Musk's 'Priority #1' Disaster: CSAM Problem Worsens While ExTwitter Stiffs Detection Provider.
X Blog (2024). An update on our work to tackle Child Sexual Exploitation on X.
NCOSE (2022). Fact Check: Twitter's Claims About CSAM and Sex Trafficking Lawsuit.
NPR (2022). Musk's Twitter has dissolved its Trust and Safety Council.
PLOS ONE (2025). X under Musk's leadership: Substantial hate and no reduction in inauthentic activity.
Internet Matters (2025). What is X? Safety on former Twitter.
Qustodio (2025). X parents' guide: Does Twitter have parental controls?
Safe Kids Online (2024). How to Protect Kids on X (Twitter). Safety rating analysis.
X Safety (2025). X's commitment to combating CSAM online.
X Help Center (2024). Child Safety Policy.
Fight the New Drug (2024). X's New Adult Content Policy Raises Safeguarding Concerns.
Bloomberg Law (2023). X must face child sex abuse reporting claims, 9th Circuit rules.
United Press International (2024). Teens bullied online about weight more likely to be targeted on social media.
Fox59 (2023). Cash App payments and Twitter DMs lead to child exploitation charges for Anderson man.
X Terms of Service (2025). Content responsibility and user agreement provisions.


Twitch

Live Streaming
Min. Age: 13+

Twitch's live streaming format creates unique dangers: predators use real-time chat to groom children during broadcasts, and the Clips feature has been exploited to record and distribute CSAM. An estimated 280,000 children were targeted by predators over two years, and the platform has no dedicated parental controls. Content labels and chat filters exist but cannot offset the structural risks of unmoderated live interaction between adults and children.

Key Concerns

Live Grooming via Chat · Clips Used for CSAM · Hot Tub & Suggestive Streams · Donation Pressure · No Dedicated Parental Controls · 280K Kids Targeted by Predators
Full Research Details

Predator & Grooming Risk: A 2022 Bloomberg investigation found that nearly 2,000 predatory accounts on Twitch existed solely to target children, with approximately 280,000 kids identified as targets over a two-year period. Predators used typical grooming techniques in live chat, starting with innocuous questions about a child's favorite color before escalating to demands for sexual acts on camera. In July 2022 alone, more than 650 children per day were targeted by predatory accounts on the platform. A follow-up Bloomberg investigation in January 2024 revealed that predators had adapted by exploiting Twitch's "Clips" feature, which allows 20-second snippets of any livestream to be captured and shared. Analysis of 1,100 clips found that 83 (7.5%) contained sexualized content involving minors, with 34 depicting children ages 5-12 exposing themselves to the camera, often in response to viewer prompts. These clips were viewed over 10,000 times before removal. The Canadian Centre for Child Protection reviewed and confirmed the harmful nature of this content. Twitch has responded by quadrupling its law enforcement response team, developing grooming detection AI, building models to identify underage broadcasters, and working with NCMEC and the Tech Coalition. NCMEC reports to Twitch increased 1,125% between 2019 and 2021.

Cyberbullying & Harassment: Twitch's live chat environment, where thousands of messages fly by in seconds during popular streams, creates conditions where harassment can occur faster than moderators can respond. Profanity, hate speech, and targeted harassment are common in unmoderated channels. Users frequently circumvent chat filters by substituting symbols or numbers for offensive words. Twitch provides AutoMod, an AI-based chat filter with four levels of strictness that can catch discriminatory, hostile, sexual, and profane content. Streamers can also add custom blocked terms and assign trusted moderators. However, moderation quality varies enormously between channels, as Twitch delegates primary chat moderation responsibility to individual streamers and their volunteer moderators. The platform's Community Guidelines prohibit harassment, threats, and hateful conduct, and Twitch can suspend or ban accounts for violations.

Mental Health Impact: Twitch's live-streaming format encourages extended viewing sessions that can displace sleep, exercise, and in-person social interaction. Some streams run for 24 hours or longer, and the real-time interactive nature of the platform makes it particularly difficult for teens to disengage. Twitch's donation and subscription culture creates financial pressure, with streamers frequently soliciting donations and teens potentially spending money without parental knowledge through Twitch's "Bits" currency system. Categories like "Pools, Hot Tubs, and Beaches" and suggestive ASMR streams emphasize appearance and viewer-paid interactions, promoting objectification. The platform does not offer built-in screen time limits or break reminders comparable to those found on Instagram or YouTube.

Content Exposure Risk: Twitch hosts a wide range of content that includes mature games with violence, sexual themes, and strong language. Categories like "Just Chatting," "Pools, Hot Tubs, and Beaches," and certain ASMR streams frequently contain sexually suggestive content that falls short of explicit pornography but is clearly inappropriate for younger teens. Twitch introduced Content Classification Labels (CCLs) requiring streamers to tag content with warnings for sexual themes, drugs/alcohol/tobacco, violence, and gambling. Streams tagged with mature labels display warnings and may be hidden from accounts registered as under 18. However, this system relies on streamers to self-label accurately, and enforcement is inconsistent. Streamers often end a broadcast by "raiding" another channel, sending their entire audience there, which can unexpectedly redirect a teen from a family-friendly stream to mature content. Protect Young Eyes rates Twitch appropriate for ages 17+ and describes the platform as "the Wild West of live-streaming."

Algorithmic Concerns: Twitch uses recommendation algorithms to surface streams and clips to users through its browse and discovery features. While Twitch's algorithm is less aggressive than TikTok's or Instagram's, the platform has been testing a TikTok-like short-form content feed that algorithmically suggests clips. This expansion of algorithmic discovery, combined with the documented exploitation of the Clips feature by predators, raises concerns about automated content surfacing leading users to harmful material. Twitch does filter out streams labeled with mature content for underage accounts in browse and search, but this depends on accurate self-labeling by streamers.

Age Verification: Twitch requires users to be at least 13 years old. Age verification consists of entering a birthdate during account creation, with no ID verification or additional checks. The system is entirely self-reported. A 2020 investigation by WIRED found numerous children demonstrably under 13 actively streaming on the platform. When reported, only a handful of accounts were removed. Twitch has since built detection models to more quickly identify broadcasters under 13, but the fundamental age gate remains an honor system. The platform is rated 17+ in the Apple App Store.

Privacy & Data Collection: Twitch, owned by Amazon, collects user data including viewing history, chat logs, device information, IP addresses, and behavioral patterns. This data is used for advertising and content personalization within Amazon's broader advertising ecosystem. Twitch displays ads extensively, including ads for mature-rated movies, games, and other products. Users can purchase a Turbo subscription ($11.99/month) to remove ads, but this is not a realistic option for most teen users. Twitch's privacy practices are governed by Amazon's broader privacy policy.

Parental Controls: Twitch does not offer dedicated parental controls. There is no family center, no supervised account option, no parental activity monitoring, and no way for parents to lock safety settings. The tools that exist, including AutoMod chat filters, blocking whispers from strangers, hiding mature content, and Content Classification Labels, can all be changed by the child at any time. NCOSE has criticized Twitch for failing to develop lockable parental controls, noting that "when a child signs up for Twitch, nothing protects them from sexual harassment, child abuse, and predatory grooming." Protect Young Eyes, Internet Matters, and multiple child safety organizations recommend third-party parental control apps as the only meaningful way to manage a child's Twitch use. The platform's policies change frequently, with Twitch having previously allowed and then banned artistic nudity, further complicating parents' ability to predict what content their child might encounter.

Sources:

Bloomberg (2022). Child Predators Use Amazon's Twitch to Systematically Track Kids Who Stream.
Bloomberg/Axios (2024). Report: Twitch feature allowing predators to record and share child abuse via Clips.
Dexerto (2024). Twitch reveals plan to stop child predation spreading across platform via clips.
Fatherly (2024). 280,000 Kids Targeted By Child Predators On Twitch, Report Claims.
NCOSE (2021). Amazon's Twitch Rife with Sexual Harassment, Predatory Grooming, Child Sexual Abuse.
Protect Young Eyes (2025). Twitch Complete App Review for Parents.
Gabb (2025). Is Twitch Safe for Kids? Risks and Dangers of Online Streaming.
Internet Matters (2025). Twitch parental controls guide.
Children of the Digital Age (2025). Twitch Parental Controls & Safety Settings: 2025 Guide.
Tubefilter (2024). Sexual predators are using Twitch's Clips feature to prey on underage streamers.


Threads

Text-Based Social
Min. Age: 13+
!

Threads inherits Instagram's Teen Account protections, defaults minors to private profiles, and blocks DMs for users under 18. Its text-first format avoids many visual comparison harms. However, it is still a public-facing platform with limited independent track record, no standalone parental controls, and the same Meta data collection ecosystem.

Key Concerns

Linked to Instagram Account · Public by Default (16+) · Meta Data Ecosystem · No Standalone Parental Controls · Limited Track Record · Viral Reach via Rethreads
Full Research Details

Predator & Grooming Risk: Threads blocks direct messaging for users under 18, which is a significant structural protection against predatory contact. Because Threads requires an Instagram account to use, it inherits Instagram's Teen Account protections, including restrictions on who can follow and interact with minors. Users aged 13-15 are defaulted to private accounts that cannot be changed to public, and users aged 16-17 are defaulted to private but can switch to public. These protections significantly limit stranger contact compared to platforms like X, Reddit, or Kik. However, Threads is a public conversation platform where replies and rethreads can expose teen content to wider audiences. In July 2025, Meta announced it had removed 135,000 accounts that posted sexualized comments or requested inappropriate images from adult-run accounts of children under 13 across its platforms, with 500,000 linked accounts also deleted. While this action demonstrates active enforcement, it also confirms that predatory contact attempts remain a persistent problem across Meta's ecosystem.

Mental Health Impact: Threads' text-first format is structurally less harmful than image and video-focused platforms for body image and appearance-related mental health concerns. There are no filters, no Stories, and no emphasis on visual self-presentation. The platform does not use likes as a public metric in the same way as Instagram. However, Threads still operates within the engagement-driven social media model where viral content, controversial takes, and pile-on dynamics can create stress and anxiety. The platform's "rethreading" feature (similar to retweeting) can amplify content far beyond a teen's intended audience. As a relatively new platform, there is limited peer-reviewed research specifically examining Threads' mental health impact on adolescents.

Content Exposure Risk: Threads applies Instagram's Community Guidelines to all content, prohibiting hate speech, violence, nudity, and harmful content. Meta's content moderation infrastructure, including AI detection systems and human review teams, applies to Threads. However, as a text-based conversation platform, Threads can surface political extremism, misinformation, and heated arguments that may be distressing for younger users. Posts can include photos, videos, and links, meaning exposure to external harmful content is possible. Threads does not have a dedicated kids' version or Restricted Mode equivalent. The platform relies on Meta's broader content moderation systems, which process content across Facebook, Instagram, and Threads.

Cyberbullying & Harassment: Threads provides tools for users to block, mute, restrict, and report other users. Users can filter offensive words and custom phrases from their replies. The platform inherits Meta's anti-bullying policies and enforcement mechanisms. However, Threads' public conversation format means that teens who participate in trending discussions can be exposed to hostile responses from strangers. The rethreading feature can amplify negative attention. In October 2025, Threads introduced "Communities," topic-based groups created by Meta rather than users, which add another layer of group interaction where moderation quality may vary. The platform is still developing its moderation infrastructure, and its relatively new status means best practices are still evolving.

Algorithmic Concerns: Threads uses an algorithmic feed that surfaces content based on engagement and relevance rather than purely chronological order. This means teens may see content from accounts they do not follow, including trending topics and popular threads. While Meta states it applies teen-specific content restrictions across its platforms, the specifics of how Threads' algorithm treats minor accounts are less transparent than Instagram's well-documented teen protections. The platform's integration with Instagram's follower graph means that a teen's Threads audience is initially shaped by their Instagram connections, which may include a mix of known and loosely connected contacts.

Age Verification: Threads requires an Instagram account, which means age verification is handled through Instagram's sign-up process. Instagram requires users to be at least 13 and uses self-reported birthdate during registration. Meta has been investing in AI-based age estimation technology across its platforms and uses behavioral signals to detect underage users. However, the fundamental age gate remains self-reported. Because Threads cannot be used without Instagram, any improvements Meta makes to Instagram's age verification automatically apply to Threads access.

Privacy & Data Collection: Threads operates within Meta's data collection ecosystem, which is among the most extensive in the technology industry. The platform collects usage data, device information, and behavioral patterns. When Threads launched, its App Store privacy label revealed extensive data collection including health and fitness data, financial information, contact information, browsing history, and location data. This data is used for advertising and content personalization across Meta's platforms. The FTC's 2023 proposed restrictions on Meta's monetization of youth data apply to Threads as part of the broader Meta ecosystem. Meta has stated that Teen Accounts receive more restrictive default privacy settings, but Threads still operates within a data-intensive advertising model.

Parental Controls: Threads does not have standalone parental controls. Parents can use Meta's Family Center to monitor screen time and set limits for the Threads app, as they can for Instagram. Screen time limits can also be set directly within the app. However, there are no Threads-specific parental oversight features like activity monitoring, content filtering, or contact management. Privacy settings (private profile, word filters, block/mute) can be changed by the teen at any time. Protect Young Eyes notes that "no parental controls" are provided specifically for Threads. The platform's reliance on Instagram's infrastructure means that improvements to Instagram's parental tools may eventually extend to Threads, but as of early 2026, Threads' parental oversight options remain limited compared to Instagram itself.

Sources:

Meta (2025). Timeline of tools, features, and resources to help support teens and parents.
Meta (2026). Beyond the Headlines: Meta's Record of Protecting Teens and Supporting Parents.
Protect Young Eyes (2025). Should Kids Use Threads? App Review.
SmartSocial.com (2025). Instagram Threads Guide for parents and educators.
Gabb (2025). What is Threads App? Meta's New Twitter-Like Social App.
FindMyKids (2025). Instagram's New Threads App: What It Is, How It Works, and Is It Safe for Kids?
Ann Arbor Family (2025). Meta's Teen Safety Overhaul: What Parents Need to Know.


BeReal

Photo Sharing
Min. Age: 13+

BeReal's filter-free, once-a-day format does not offset its failures: location sharing is enabled by default, 59% of users report exposure to sexual content, there are zero parental controls, the community standards only ban nonconsensual nudity, and the two-minute posting pressure combined with dual cameras leads to impulsive oversharing of private information.

Key Concerns

Location ON by Default · 59% Exposed to Sexual Content · No Parental Controls · Dual Camera Oversharing · Streak Pressure · Voodoo Acquisition Uncertainty
Full Research Details

Predator & Grooming Risk: BeReal's design is structurally less conducive to predatory contact than most social platforms. There is no public messaging system, no DMs to strangers, and interaction is limited to friends, friends of friends, or public feeds. The platform's limited social features (no follower counts, no public profiles in the Instagram sense) mean there is less surface area for predators to identify and target minors. However, BeReal's Friends of Friends feature allows users to view posts from people they do not directly know, and the app's public feed (for users who opt in) exposes content to strangers. The dual-camera format captures both the user and their surroundings, potentially revealing private information about a teen's home, school, or routine to anyone who can see their posts. BeReal's Community Standards address child sexual exploitation, but the platform's reporting system has been criticized as basic compared to more established platforms.

Mental Health Impact: BeReal was designed to counter many of the mental health harms associated with traditional social media. There are no filters, no editing tools, no public like counts, and no follower metrics. The platform encourages showing unpolished, everyday moments rather than curated highlight reels, which can reduce harmful social comparison. However, the two-minute posting window creates its own form of pressure and anxiety, particularly for teens who may be in class, at work, or in an awkward situation when the notification arrives. Missing the window or posting late results in a visible "late" tag, and the introduction of streaks in recent updates adds gamified pressure to post daily. The dual-camera format can also create discomfort if teens feel pressured to share images of themselves when they are not ready. Despite these concerns, BeReal's fundamental design is less harmful to mental health than engagement-maximizing platforms like TikTok, Instagram, or Snapchat.

Content Exposure Risk: A 2023 ParentsTogether Action study found that 59% of BeReal users reported being exposed to sexual content on the platform, higher than most other social media platforms surveyed. BeReal's Community Standards only ban nudity if it is nonconsensual, meaning consensual adult nudity may appear in public feeds. The platform describes itself as a hosting service with no obligation to proactively monitor what users post, relying primarily on user reports to address harmful content. The Friends of Friends feed and public posting options can expose teens to content from unknown users. BeReal does not have content rating labels, restricted modes, or age-gated content categories. The platform's RealMoji feature (selfie-based reactions) is generally harmless, but the lack of content moderation infrastructure means that harmful material can circulate before being reported and removed.

Cyberbullying & Harassment: BeReal's limited interaction features (no public comments on a traditional feed, no viral sharing mechanism, no public follower counts) significantly reduce the surface area for cyberbullying compared to platforms like Instagram, TikTok, or X. Users can comment on friends' BeReals and react with RealMojis, but the small-circle, friend-based design means interactions are typically confined to known contacts. However, the dual-camera format means a teen's BeReal could capture embarrassing or private moments that friends might screenshot and share outside the platform. BeReal does not prevent screenshots. RealGroups (group chats) can become vectors for exclusion or targeted harassment within friend groups, as with any group messaging feature.

Algorithmic Concerns: BeReal does not use an engagement-maximizing recommendation algorithm in the way that TikTok, Instagram, or YouTube do. The platform's core experience is chronological and friend-based: you see your friends' daily BeReals, in order, after you post your own. There is no "For You" feed, no viral content promotion, and no algorithmic rabbit holes. The Friends of Friends feature does surface content from outside a user's direct network, but this is not algorithmically curated for engagement. This is one of BeReal's most significant structural advantages for child safety. However, as new owner Voodoo (a mobile gaming company that acquired BeReal in June 2024 for €500 million) looks to monetize the platform, there is uncertainty about whether this low-algorithm approach will persist.

Age Verification: BeReal requires users to be at least 13 years old (12 in some regions). Age verification is entirely self-reported during account creation. If the user reports being under 16, BeReal's privacy policy states that parental or guardian approval is required to complete account setup, but the mechanism for verifying this approval is not robust. There is no ID verification, no age estimation technology, and no behavioral detection of underage users. The age gate is trivially easy to bypass by entering a false birthdate.

Privacy & Data Collection: BeReal collects a range of user data including phone number, name, date of birth, username, device information, IP address, usage patterns, and, notably, precise geolocation data. Location sharing is enabled by default on every post, meaning teens must actively remember to disable it each time they post or risk broadcasting their exact location. Photos, videos, and chat messages are stored by BeReal, and the platform's privacy policy does not confirm end-to-end encryption for messages. BeReal may share data with its parent company Voodoo and other business partners. The privacy policy states that some data, including photos, may be retained indefinitely. The June 2024 acquisition by Voodoo, a mobile gaming company, introduces additional uncertainty about how user data will be handled going forward.

Parental Controls: BeReal has no built-in parental controls. There is no family center, no activity monitoring, no screen time limits, no content filtering, and no way for parents to lock privacy settings. The only safety measures available, such as setting an account to private, disabling location sharing, and turning off Friends of Friends, can all be changed by the teen at any time without parental knowledge. BeReal does not offer supervised accounts or any mechanism for parental oversight. Multiple child safety organizations, including Protect Young Eyes, BrightCanary, Cyber Safety Cop, and the Center for Online Safety, note the complete absence of parental controls and recommend third-party monitoring apps as the only way to manage a child's BeReal use.

Sources:

ParentsTogether Action (2023). Study finding 59% of BeReal users exposed to sexual content.
Protect Young Eyes (2025). BeReal App Review.
BrightCanary (2025). Is BeReal Safe for Kids? A Parent's Guide to the App's Risks.
Aura (2024). Is BeReal Safe For Kids? What Parents Need To Know.
SafetyDetectives (2025). Is BeReal Safe for Kids? What You Should Know in 2025.
ExpressVPN (2026). Is BeReal safe for kids? A parent's guide to safety and privacy.
Cyber Safety Cop (2025). BeReal Photo Sharing App review.
Center for Online Safety (2022). Is the app BeReal safe for kids?
MySociaLife (2023). BeReal Review: Everything Parents Need to Know.
MakeUseOf (2022). BeReal for Teens: Are There Any Risks Involved?


Kik

Anonymous Messaging
Min. Age: 13+

A convicted child molester called Kik a "predator's paradise," and the evidence supports this. Anonymous accounts, no age verification, a "Meet New People" feature that pairs users with strangers, public groups searchable by interest, and virtually no parental controls create ideal conditions for grooming. Law enforcement consistently identifies Kik as a top platform for CSAM distribution.

Key Concerns

"Predator's Paradise" Anonymous Sign-Up Meet New People Feature Rampant CSAM Distribution Zero Parental Controls Convicted Predators Still Active
Full Research Details

Predator & Grooming Risk: Kik has been called a "predator's paradise" by a convicted child molester on CBS News, and law enforcement investigations consistently identify it as a primary platform for child grooming and exploitation. The Tulsa County Sheriff's Office Child Predator Unit states that at least one in four tips about people sending or receiving child sexual abuse material comes from Kik. A Forbes investigation found that accounts posing as 13- and 14-year-old girls, after joining public groups searchable by terms like "teenagers," "friends," and "14," received 10 private messages from men within one hour. Forbes also found that Kik had not deleted profiles of individuals charged and convicted of child abuse offenses. A Thorn study (2022) found that Kik tied with Instagram and Tumblr as the platform where minors reported the second-highest rates of online sexual interaction. Documented criminal cases include: a man who used Kik to distribute sexually explicit images of a two-year-old to 25 other users (sentenced to 25 years); a man who used Kik to groom and eventually kidnap a 12-year-old girl (Nathan Larson, 2020); a man sentenced to 35 years for child sexual abuse crimes facilitated through Kik (2022); and a Missouri man who distributed CSAM through at least 50 Kik group chats using multiple accounts to evade bans (2023).

Cyberbullying & Harassment: Kik's anonymous account structure, where users need only an email address to sign up and can use any username, enables harassment with virtually no accountability. The platform's group chat feature allows anyone to join public groups and interact with all members, including sending private messages. This combination of anonymity and open group access creates conditions where targeted harassment, intimidation, and abuse can occur with little risk to the perpetrator. Kik provides basic block and report features, but these place the entire burden of safety on the victim. There are no proactive moderation tools, no chat filters, and no AI-based detection of bullying or harassment comparable to what platforms like Instagram, TikTok, or even Twitch provide.

Mental Health Impact: Kik's primary mental health risk stems not from engagement-maximizing design features (it lacks an algorithmic feed, infinite scroll, or public metrics) but from the nature of interactions the platform facilitates. The anonymous, stranger-focused design means teens using Kik are disproportionately likely to encounter sexual solicitation, exploitation, and manipulation. For teens who are already vulnerable, experiencing grooming, sexualization, or pressure to share explicit content can cause lasting psychological harm. The platform's design encourages connecting with strangers through features like "Meet New People," which normalizes interaction with unknown adults for young users.

Content Exposure Risk: Kik is widely and repeatedly documented as a primary platform for the distribution of child sexual abuse material. Headlines involving CSAM on Kik are frequent and ongoing. In addition to CSAM, the platform's public groups can expose users to pornography, violence, hate speech, discrimination, and drug-related content. Bots on the platform have been documented tricking users into accessing adult content. Kik does not proactively monitor or filter content in messages or groups. The platform describes itself as a messaging service, placing content responsibility on users. An April 2025 formal complaint to Apple's App Review team documented that known child predators listed in publicly accessible offender databases were still actively using Kik, and that the platform's group discovery feature continued to expose minors to explicit and predatory content with minimal oversight.

Algorithmic Concerns: Kik does not use a content recommendation algorithm in the traditional sense. Users find content and contacts through public group searches, the "Meet New People" feature, and direct messaging. The risk on Kik is not algorithmic amplification but rather the platform's deliberate design choices that facilitate stranger contact. The "Meet New People" feature pairs users with complete strangers for conversation. Public groups are searchable by keyword, meaning anyone searching for terms associated with minors can find and join groups where children are likely to be present. This is a design-level concern rather than an algorithmic one, but the outcome, facilitating adult-minor contact, is the same.

Age Verification: Kik requires users to be at least 13 years old. The only verification is entering a birthdate during account creation. No ID verification, no email verification beyond a confirmation link, and no behavioral age detection exist. An email address is the only requirement to create an account, and disposable email addresses work. This means banned users, including those banned for CSAM distribution, can create new accounts within minutes. Of Kik's reported 15 million monthly active users, 57% are in the 13-24 age bracket, confirming that the platform's user base skews heavily toward minors and young adults.

Privacy & Data Collection: Kik collects user data including chat logs, device information, and usage patterns. The platform's encryption and message deletion features have drawn criticism from law enforcement, as they make it difficult to investigate crimes facilitated through the app. Kik does not store sent messages on its servers after delivery, meaning deleted conversations cannot be recovered, even by law enforcement with a warrant. This privacy feature, while potentially valuable for adults, makes Kik particularly dangerous for minors because evidence of grooming, exploitation, or CSAM distribution can be destroyed. The platform is owned by MediaLab, which also owns other social apps.

Parental Controls: Kik has virtually no parental controls. The only safety features available are blocking and reporting individual users, both of which require action by the child. There is no family center, no supervised account, no activity monitoring, no screen time limits, no chat filtering, no content moderation tools, and no way for parents to manage or view their child's activity on the platform. Multiple child safety organizations, law enforcement agencies, and technology reviewers explicitly recommend that children should not have access to Kik. McAfee's family safety team, NCOSE, Bitdefender, Safes, and Qustodio all characterize Kik as one of the most dangerous apps for minors. The Tulsa County Sheriff's Office Child Predator Unit states plainly: "Kik is an app that you just don't get on."

Sources:

Forbes/CTIP (2017). This $1 Billion App Can't 'Kik' Its Huge Child Exploitation Problem.
NCOSE (2023). Kik: A "Predator's Paradise." National Center on Sexual Exploitation.
Thorn (2022). Online sexual interaction study. Kik platform prevalence data.
McAfee (2025). Kik Messenger: The Dangerous App Kids Love. Family safety analysis.
News On 6/WKRG (2025). Kik app linked to rising child exploitation cases, Tulsa investigators say.
Medium/M. Evryn (2025). Kik Messenger, Child Exploitation, and the Need for Apple to Act.
Bitdefender (2025). What Parents Need to Know Before Letting Their Kids Use Kik.
Safes (2024). Is Kik Safe or Is It a Playground For Creepy Adults?
CBS News/48 Hours. Convicted child molester describes Kik as "predator's paradise."
Doe v. Kik Interactive, Inc. (2020). Civil lawsuit alleging structural negligence in child exploitation.

What You Can Do

You saw the ratings. Now here's what to do about it. Practical steps, real conversations, and actual tools you can use tonight.

🔒

Lock It Down

Parental controls, screen time limits, and how to check what's on their phone right now.

Practical steps

A 2025 study in Pediatric Research found that parents' own screen habits directly predicted their teens' problematic social media use, and that parental monitoring was associated with lower screen time and less problematic use (Nagata et al., 2025). Mealtime and bedroom screen use were particularly strong predictors.

iPhone: Settings → Screen Time → Content & Privacy Restrictions. Set app limits, block explicit content, and disable app installs without a passcode.

Android: Google Family Link lets you manage apps, set screen time limits, and see activity reports. Download it from the Play Store.

Quick wins for tonight: Turn off notifications for social apps. Move social apps off the home screen. Enable "Do Not Disturb" scheduling for bedtime. Check their app download history.

💬

Have the Talk

How to bring it up without your kid shutting down. What to say when they tell you everyone else is on it.

Conversation starters & strategies

Research shows open communication is the most effective tool parents have. A study of 34 parents of teens found that those who maintained open dialogue and showed genuine interest in their children's online lives were better positioned to guide safe use than those who relied on strict rules alone (Symons et al., 2017, Computers in Human Behavior).

Reframe the conversation: instead of "how much time are you spending?" try "what types of activities on social media feel like time well spent?" Choose relaxed settings like walks or car rides rather than formal sit-downs (Mental Health Coalition, Time Well Spent).

Try these: "What's the funniest thing you saw online today?" · "Has anyone ever said something online that made you feel bad?" · "If you could only keep one app, which would it be and why?" · "What would you do with an extra hour if screens weren't an option?"

🔄

Fill the Gap

If you take it away, something has to replace it. Activities that meet the same needs social media fills.

Offline alternatives by need

A study of 31 parents found that 71% let their children use digital media primarily so they could complete tasks without being interrupted. Researchers concluded parents need "concrete and feasible alternatives" for independent activities (Geurts et al., 2022, Journal of Child and Family Studies).

Social media fills real needs. Offline alternatives should target those same needs, not just fill time.

For connection: Team sports, group music lessons, community volunteering, scouting, church youth groups. For identity: Journaling, art, theater, creative writing, cooking. For entertainment: Board game nights, hiking, fishing, building projects, pickup basketball. For belonging: Clubs (robotics, debate, 4-H), local recreation programs, library events, neighborhood hangouts.

📋

Get Ahead of It

Your kid hasn't asked yet. Set expectations now so you're not scrambling later.

Prevention strategies

The AAP recommends creating a family media use plan with screen-free times, especially meals and the hour before bed. Keep devices out of bedrooms at night. Model the behavior you expect because your own phone habits matter. Start with clear, simple rules and adjust as trust is built.

Research identifies the 3-hour mark as a threshold: spending more than 3 hours per day on social media was prospectively associated with increased risk of anxiety and depression in adolescents (Riehm et al., 2019, JAMA Psychiatry).

Parents in qualitative research describe the goal as "guarding from the sideline" — being present without becoming the tech police. Graduated autonomy: more freedom as your child demonstrates responsibility (Symons et al., 2017).

🚨

When Something's Wrong

Behavioral changes, emotional shifts, and when it's time to get professional help.

Warning signs & resources

The Surgeon General's 2023 Advisory found the adolescent brain between ages 10 and 19 is in a highly sensitive developmental period. Half of all lifetime mental illnesses begin by age 14 (NAMI). Sleep disruption is one of the earliest and most observable warning signs (Scott et al., 2019, Sleep Health).

Watch for: Withdrawal from family or friends. Increased irritability when asked to put devices away. Declining grades or loss of interest in activities. Changes in sleep patterns. Secretive behavior around devices. Mood shifts after using social media. Comments about feeling excluded, ugly, or not good enough.

When to act: If changes persist more than two weeks, if your child expresses hopelessness, or if you notice signs of self-harm. The 988 Suicide & Crisis Lifeline is available 24/7 by calling or texting 988.

🤝

You're Not Alone

Parent communities, expert guides, and tools from people who get it.

Resources & communities

Smartphone Free Childhood — Kid-safe phone alternatives, GPS watches, and a grassroots movement with over 140,000 parents from 60+ countries.

ConnectSafely Parent Guides — Free PDF guides for every major platform. Covers cyberbullying, sextortion, AI, and parental controls. English and Spanish.

HHS Parent Guide — Free guide on healthy social media habits with a companion teen self-assessment quiz.

U.S. Surgeon General's Advisory — The landmark 2023 advisory on social media and youth mental health.

TODAY Kids & Screens Guide — Monitoring tools, kid-safe phones, screen time apps, and links to AAP, APA, and Common Sense Media.

Why Laws Aren't Enough

Social media companies spend record sums to prevent, delay, and weaken child safety legislation. Here's how the system keeps failing kids, and why you can't wait for government to fix this.

📜

COPPA: 25 Years of Falling Short

The only federal child privacy law is older than the platforms it's supposed to regulate, and it still only covers kids under 13.

The details

The Children's Online Privacy Protection Act was passed in 1998 and took effect in 2000. It requires websites to obtain parental consent before collecting personal data from children under 13. But COPPA has critical limitations that social media companies exploit.

Only covers under 13. Teenagers are completely unprotected by federal privacy law. A 14-year-old has the same legal protections online as a 40-year-old.

The "actual knowledge" loophole. Companies are only liable if they know a user is under 13. So most platforms simply ask for a birthdate and look the other way when children lie. This is by design.

Weak enforcement. The FTC can fine violators up to $53,088 per violation, but actions have been rare. The 2019 settlement with Musical.ly (now TikTok), $5.7 million, was the largest in COPPA's history at the time; Google and YouTube's $170 million settlement later that year now holds that record. For context, TikTok's parent company ByteDance spent $10.4 million on lobbying in 2024 alone.

In August 2024, the DOJ and FTC jointly sued TikTok and ByteDance again for continued COPPA violations. Congress has never meaningfully amended COPPA itself; the last significant update, in 2013, was an FTC rule revision rather than an act of Congress.

⚖️

KOSA: Passed 91-3. Still Not Law.

The strongest child safety bill in a generation passed the Senate with near-unanimous support. The House never voted on it.

What happened

The Kids Online Safety Act (KOSA) was first introduced in 2022 by Senators Blackburn and Blumenthal after the Facebook Files whistleblower revelations. It gathered endorsements from over 240 organizations including parents' groups, pediatricians, and child psychologists.

In July 2024, KOSA and its companion bill COPPA 2.0 passed the Senate 91-3. COPPA 2.0 would have extended privacy protections to teens up to age 16 and banned targeted advertising to minors.

Then it died. House Speaker Mike Johnson called KOSA "very problematic" and refused to bring it to a vote. Both bills expired when the 118th Congress ended in January 2025.

They've been reintroduced, but as of early 2026, KOSA remains stalled. During a December 2025 House markup, Democratic lawmakers accused Republicans of gutting both bills after industry pressure. Rep. Lori Trahan stated the proposals had been "gutted and co-opted by Big Tech" through "backroom deal-making." The House version removed KOSA's core "duty of care" standard entirely.

COPPA 2.0 passed the Senate unanimously in March 2026, but its fate in the House remains uncertain.

🏛️

State Laws Keep Getting Blocked

When Congress stalls, states try to fill the gap. The tech industry sues to stop them every single time.

The pattern

NetChoice, a tech trade group whose members include Meta, Alphabet, Amazon, Snap, and X, has systematically challenged state-level child safety laws on First Amendment grounds. The pattern is the same every time: a state passes a law, NetChoice files suit, a court blocks enforcement.

Blocked so far: California, Arkansas, Ohio, Utah, Texas, Virginia, and Mississippi. That's at least seven states where child safety laws have been halted by industry lawsuits.

Mississippi's law was named after Walker Montgomery, a 16-year-old who died by suicide after an Instagram sextortion scheme. NetChoice still sued to block it. The Supreme Court allowed the law to remain in effect while litigation continues, but Justice Kavanaugh wrote that the law "is likely unconstitutional" under current precedent.

In February 2026, a federal judge blocked Virginia's social media restrictions for minors. NetChoice spent a record $550,000 on lobbying in the first nine months of 2024 alone, a 57% increase over the same period in 2023, on top of its litigation budget.

💰

$61.5 Million in One Year

That's how much six tech companies spent on federal lobbying in 2024. They employed one lobbyist for every two members of Congress.

Follow the money

In 2024, Meta, Alphabet, ByteDance, Microsoft, Snap, and X combined to spend $61.5 million on federal lobbying, a 13% increase over 2023 (Issue One, 2025). Together they employed nearly 300 lobbyists.

Meta: $24.4 million (record year). 65 lobbyists, one for every eight members of Congress. A 27% increase over 2023.

ByteDance (TikTok): $10.4 million (record year). 55 lobbyists, one for every ten members of Congress.

The daily rate: Meta and ByteDance combined were spending approximately $225,000 per day that Congress was in session in 2024.

2025 is already worse. In Q1 2025 alone, the same companies pumped over $17.5 million into lobbying, exceeding the same period in 2024. Meta set another record at $8 million in a single quarter, employing 85 lobbyists, one for every six members of Congress.
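
The daily-rate figure above is easy to check. Using the Issue One totals and assuming roughly 155 days in session during 2024 (our back-of-the-envelope estimate; the source does not publish its session-day count):

$24.4M (Meta) + $10.4M (ByteDance) = $34.8M combined for 2024
$34.8M ÷ ~155 session days ≈ $225,000 per session day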

🛡️

Section 230: The Corporate Shield

A 1996 law protects platforms from liability for content users post. Courts have used it to shield companies from accountability for decades.

How it works

Section 230 of the Communications Decency Act (1996) says platforms are not the "publisher" of content posted by users. Courts have interpreted this broadly for decades, shielding companies even for design choices that amplify harmful content to children.

Recent cases have started to push back. In 2024, the Supreme Court's decision in Moody v. NetChoice acknowledged that platforms exercise editorial judgment, but did not resolve whether they can be held liable for algorithmically amplifying harmful content to minors. A trial in the K.G.M. case was allowed to proceed, signaling that Section 230 may have limits.

But for now, the shield remains largely intact. Companies continue to use it to deflect accountability for harm to children while spending millions to ensure no new law changes the equation.

🎯

Why This Falls on You

The pattern is clear. The industry will spend whatever it takes to stop regulation. That's why this website exists.

The bottom line

COPPA has been the primary federal child safety law for over 25 years and it still only covers children under 13. The most popular child safety legislation in a generation passed the Senate 91-3 and still could not become law. State laws get blocked in court within weeks of passage.

Even when legislation survives, the industry ensures it arrives with fewer teeth. During the December 2025 House markup, the House version of KOSA had its core "duty of care" standard removed. The House version of COPPA 2.0 weakened the knowledge standard so companies only face requirements if they have "actual knowledge" children are on their platform, an easy standard to avoid.

This is not a system that is going to protect your children on its own. Legislation matters, and you should support it. But you cannot wait for it. The tools, conversations, and information on this site are things you can act on tonight.

Sources: Issue One lobbying analyses (2024, 2025); federal lobbying disclosures filed with Congress; COPPA, 15 U.S.C. §§ 6501-6505; KOSA legislative history (S. 1409); Children and Screens Policy Update (Feb. 2026); TechPolicy.Press (Dec. 2025); SCOTUSblog, NetChoice v. Fitch (Aug. 2025); Davis Wright Tremaine (Jan. 2026).

About This Project

kNOwSocialMedia is a research project developed for the Macro Social Work class at Lewis-Clark State College. Our mission is to equip parents with clear, evidence-based information about the platforms in their children's lives.

  • Greg Fritsch, Researcher
  • Charles Thompson, Researcher
  • Ashley Yochum, Researcher
  • Ashleigh Wraith, Researcher

Instructor: Tiffany Renner
Lewis-Clark State College · Social Work Program