The Pew Research Center released a study on Tuesday that shows how young people are using both social media and AI chatbots.
Teen internet safety has remained a global hot topic, with Australia planning to enforce a social media ban for under-16s starting on Wednesday. The impact of social media on teen mental health has been extensively debated: some studies show how online communities can improve mental health, while other research shows the adverse effects of doomscrolling or spending too much time online. The U.S. surgeon general even called for social media platforms to put warning labels on their products last year.
Pew found that 97% of teens use the internet daily, with about 40% of respondents saying they are “almost constantly online.” While this marks a decrease from last year’s survey (46%), it’s significantly higher than the results from a decade ago, when 24% of teens said they were online almost constantly.
But as the prevalence of AI chatbots grows in the U.S., this technology has become yet another factor in the internet’s impact on American youth.
Pew’s research also details how race, age, and class impact teen chatbot use.
“The racial and ethnic differences in teen chatbot use were striking […] but it’s tough to speculate about the reasons behind those differences,” Pew Research Associate Michelle Faverio told TechCrunch. “This pattern is consistent with other racial and ethnic differences we’ve seen in teen technology use. Black and Hispanic teens are more likely than white teens to say they’re on certain social media sites — such as TikTok, YouTube, and Instagram.”
Across all internet use, Black (55%) and Hispanic teens (52%) were around twice as likely as white teens (27%) to say that they are online “almost constantly.”
Older teens (ages 15 to 17) tend to use both social media and AI chatbots more often than younger teens (ages 13 to 14). When it comes to household income, about 62% of teens living in households making more than $75,000 per year said they use ChatGPT, compared to 52% of teens below that threshold. But Character.AI is twice as popular (14%) among teens in households with incomes below $75,000.
While teenagers may start out using these tools for basic questions or homework help, their relationship to AI chatbots can become addictive and potentially harmful.
The families of at least two teens, Adam Raine and Amaurie Lacey, have sued ChatGPT maker OpenAI for its alleged role in their children’s suicides — in both cases, ChatGPT gave the teenagers detailed instructions on how to hang themselves, which were tragically effective.
(OpenAI claims it should not be held liable for Raine’s death because the sixteen-year-old allegedly circumvented ChatGPT’s safety features and thus violated the chatbot’s terms of service; the company has yet to respond to the Lacey family’s complaint.)
Character.AI, an AI role-playing platform, is also facing scrutiny for its impact on teen mental health; at least two teenagers died by suicide after having prolonged conversations with AI chatbots. The startup ended up making the decision to stop offering its chatbots to minors, and instead launched a product called “Stories” for underage users that more closely resembles a choose-your-own-adventure game.
The experiences reflected in the lawsuits against these companies make up a small percentage of all interactions that happen on ChatGPT or Character.AI. In many cases, conversations with chatbots can be incredibly benign. According to OpenAI’s data, only 0.15% of ChatGPT’s active users have conversations about suicide each week, but on a platform with 800 million weekly active users, that small percentage reflects over one million people who discuss suicide with the chatbot per week.
“Even if [AI companies’] tools weren’t designed for emotional support, people are using them in that way, and that means companies do have a responsibility to adjust their models to be solving for user well-being,” Dr. Nina Vasan, a psychiatrist and director of Brainstorm: The Stanford Lab for Mental Health Innovation, told TechCrunch.
Source: TechCrunch