# The Misdiagnosis of American Mental Health | Podcast Highlights

Listen to the podcast or read the full transcript here.

**Before we talk about your book, *Catastrophe! How Psychology Explains Why Good People Make Bad Situations Worse*, I’d love to talk about your recent excellent article on what people have called the “Loneliness Epidemic.” Americans are increasingly alone, but are they really lonely?**

This idea has been batted around for about a decade, and it was reinforced by the US Surgeon General about two years ago, when they released an advisory saying there’s a loneliness epidemic and we’re lonelier than ever.
I was curious to see what the evidence was in support of this idea, and one of the things I noticed was that people kept switching what they were talking about. They would say, “We have a loneliness epidemic. Look over here, we’re spending less time with other people.” But those aren’t exactly the same thing.
We are spending a modestly smaller amount of time with other people. In one of the studies that the Surgeon General’s advisory highlighted, it worked out to about a 1.7 percent decrease over about 20 years, excluding 2020, the Covid year. That finding seems to be robust but, of course, very modest. On the other hand, it’s not necessarily bad. People can be annoying, so sometimes spending less time around other people can be good. The classic example is that more people are working remotely, and remote work seems to be something a lot of people prefer.
Above all, we just don’t see robust evidence that we actually feel worse because of this modest change in the time we spend with others. So, I think the Surgeon General made a mistake by interpreting a modest decline in time with others as a mental health crisis.
**These measures of time spent alone also revolve around physical proximity, literally being in the room with someone. But what about hanging out online? I’m a pretty big geek, and I play Dungeons & Dragons online with friends from around the country. We’re talking, we’re laughing, we’re having a good time, but we’re thousands of miles away from each other. Shouldn’t that count as time spent with others?**
You and I met online and are talking in real time, which feels pretty socially fulfilling. At this point, I’ve done well over 100 podcasts like this one and gotten to know over 100 people I’ve never met in person. So, in some ways, I feel more socially connected than I would have been without this technology. On the other hand, it’s probably true that if none of these online avenues were available to me, I would go out and meet more people in person.

**Is it a problem that, because meeting our social needs online is more convenient, people choose to do that over meeting people in person?**

There are people who really do benefit socially from social media and smartphones because they struggle to meet people in real life. You can think of high-functioning autistic individuals, people with social phobias, or regular old garden-variety introverts. There are also certainly some people who don’t do social media well. Most people are probably somewhere in between, where it’s just frosting on the cake. They’re fine with it, but they would’ve been fine without it, too.
There are a few studies that looked at this and found that time spent on smartphones and time spent on social media do not actually have much impact on real-life relationships. Usually, time spent on social media and on smartphones draws teens and young adults away from television. So television is really the big casualty of the social media age.
Now, fifty years ago, people worried about television drawing people away from real-life relationships. The landline telephone was the subject of a similar panic 100-plus years ago, though that one centered on women: there was a sense that women were going to neglect their household duties and find lovers via the telephone. People also worried about the telegraph. And at the beginning of the 19th century, people worried that young people would spend hours looking into kaleidoscopes and ruin their lives. So, there is a cycle of panic that goes on and on without anybody worrying too much about evidence.
**To some degree, this is a perennial problem. People, especially older generations, tend to catastrophize new technology, but new generations eventually adapt to it. At the same time, we are seeing real trends of worsening mental health, and it seems plausible that, especially in young people, social media could be creating maladaptive patterns. You could imagine that, if someone is raised in a world where online interaction is the default, they might lack the opportunity to build the skills that would allow them to delay gratification and find a healthy balance between screens and in-person life. Do you worry about that?**

We have evidence that contradicts that narrative.
First, in most countries that have adopted smartphones and social media, we do not see a pattern of declining youth mental health. It seems to be something very specific to the United States. For various reasons, I think the best metric to track is suicides, because a body is a body, and self-report tends to be rubbish. And in most European countries, and in Japan, Australia, and New Zealand, we don’t see any evidence of a youth mental health crisis. In the United States, there was an increase in youth suicide in the 2010s, but it has now begun to reverse. Maybe it will reverse again, but so far, we’re seeing an improving trend for youth in the United States.
Second, the increase in suicide was actually much worse for middle-aged adults than it was for teens. Everybody’s worried about teenage girls, but a white man aged 45 to 55 has roughly three to five times the suicide risk of a teenage girl.
It seems to be a generational thing: Gen X has one of the worst suicide records of any generation, and if you follow the US trend line in teen suicide, it tracks almost perfectly with the suicide trend for middle-aged adults. We tend to find that the teens at the highest risk of suicide are those whose parents have committed suicide, have substance abuse issues, or have been incarcerated, so the problems we’ve seen with teens in the United States may be downstream of their parents’ mental health problems.
We also have hundreds of studies that look at time spent on social media and mental health. Generally, across this literature, we do not find that time spent on social media or smartphones is predictive of negative mental health outcomes, nor do we find that reducing social media time improves mental health in experimental studies.
**What makes the US an outlier for suicide?**

Probably a few different things. Part of it is simply that the United States has a sine wave when it comes to suicide. In other words, it constantly goes up and down. We had a peak of suicide in the late ’80s and early ’90s that was as high as the peak around 2017.
Nobody really knows why the US has this sine wave of suicide, but changes in media use don’t seem to matter. What you do see is that parent suicide predicts later teen suicide. Political instability or polarization also seems to correlate, as well as income inequality. There were also some changes in education that occurred in the 2010s. For lack of a better word, I’m going to use the term “woke.” I understand it’s a controversial word, but the narrative that the US is racist, sexist, and oppressive seems to correlate with an increase in teen suicides.
My best guess is not that teens are watching the news and picking up on political polarization, but that these all represent general anxieties in society that are affecting the parents, and that trickles down. If your teacher is telling you that the US is racist and sexist and you have no chance of succeeding, and your parent has a fentanyl addiction, you’re getting hit from both sides.
I think the big mistake we made in this whole narrative about teens is that the real anxious generation is their parents. We looked at kids by themselves and didn’t look at their parents and how badly they are doing. It’s like the parable of the blind men and the elephant: if you only touch one part, you don’t see the larger picture. To the extent that teenagers are struggling, it’s probably because their parents, and to a lesser extent their teachers, are freaking out. We should have addressed this as a middle-aged adult issue rather than a teen issue.
**I’ve been fascinated lately with the role of narrative in mental health. There’s this interesting paradox where you can have stories that are objectively false but still have real causal outcomes, including something like having a pessimistic take on your own history or identity. Do you think that children might be more susceptible to those pessimistic narratives?**

Yeah. Young kids are going to believe what the authorities tell them. When they hit puberty, they start believing that adults are wrong. So, first off, lessons need to be developmentally appropriate. On the progressive side, messages about race and gender issues were just not developmentally appropriate. You don’t want to tell five-year-olds that their country’s a hellhole or that maybe they’re a boy instead of a girl.
At the same time, you want to tell kids the truth. You could rightly criticize earlier conservative teaching for whitewashing American sins around slavery, segregation, and brutality toward Native Americans, but that’s no longer the norm in American teaching. There was an overcorrection that portrays the United States and Europeans as uniquely bad, as though slavery was invented by the Spaniards and Native Americans, before European arrival, sat around campfires holding hands and singing Kumbaya. I think you can tell kids that people did bad things throughout history, and that all societies have good and bad features. That we’re all human, and deeply flawed. But nobody wants to tell the truth; the truth is complicated.
**Speaking of developmentally appropriate narratives, it’s interesting how children’s stories are dramatically oversimplified. There’s a good guy and a bad guy, and everyone’s cheering for the hero. You also talk about how political narratives often simplify things with a similar binary. It’s cognitively demanding to digest nuance.**

A lot of this comes down to a cognitive bias called “myside bias,” which is that we are generally more forgiving of individuals that we see as part of our social group and less forgiving of those we see as part of another social group.
Back around 2020, we saw a lot of progressive cancel culture. If you said the wrong thing about a sensitive issue, you could lose your job. Everybody on the right said this was terrible, which was true: you shouldn’t lose your job over a controversial post on your personal social media page. Now, five years later, we have people getting arrested by ICE because they wrote the wrong op-ed in a newspaper, and getting canceled for their opinions about Charlie Kirk’s murder. Some of the things people posted were awful and unwise, but there has been this reversal: conservatives who criticized cancel culture in 2020 now think it’s the right thing to do.
**Cognitive biases are a perennial problem with human nature, which is, I think, both great news and tragic news. The great news is that society isn’t suddenly crumbling before us; these are problems we’ve overcome before. On the other hand, even highly educated people cherry-pick data and default to tribalism and emotional thinking. It takes not only training, but constant practice to overcome these biases.**

**I sometimes see my own rational thinking slip into some of these intuitive arguments, though fortunately, I have a network of peers and colleagues who can check me.**

Yeah, it’s important to recognize that none of us are perfect, and sometimes the same people who talk about the importance of rational thinking can themselves slip into nonsense. We need to have the humility, both moral and epistemological, to recognize that sometimes we can just be wrong, and there’s no shame in reversing our position if the data shows us that we should.
People also often simply take the positions that they get rewarded for taking. 2020 was a great example. For like six months, everybody was saying “defund the police.” I do some criminal justice research, and I thought I woke up in opposite land, because there’s nothing in criminal justice research that suggests any form of defunding the police is going to be effective. If anything, you want to train them better and attract better talent, which is going to cost more money. Then, a couple of years later, people came forward and quietly said, “Well, I never really thought that was going to work, but I was so scared that if I said anything, I would lose my job or my funding, or I wouldn’t be able to get published.” I’m talking about academics here, but I think it’s true in a broader sense as well.
**One thing that stood out to me in your loneliness article is that, oftentimes, technically true claims are spun in a way that packs unwarranted punch. Let’s talk about the claim from the Surgeon General’s report that we discussed at the beginning: that loneliness has the same adverse health effects as smoking up to 15 cigarettes per day. When you read the original study, it uses a categorical measure of smoking. There are people who smoke more than 15 cigarettes per day and people who smoke less, and it’s technically true that the small adverse effect of loneliness was the same as the effect of cigarettes on people in the low smoking category.**

**In reality, the bulk of the effect comes from people in the zero to one-cigarette range, and if you’re smoking 15 cigarettes a day, statistically, you’re almost identical to the next group up. So, while you’re making a technically true claim, people are going to interpret it as though loneliness causes the same amount of harm as smoking 15 cigarettes a day. Sadly, as you mentioned, scientists are often incentivized to maximally spin the narrative, within the realm of what’s technically true, into whatever sells and gets the grant funding.**

Yeah, it was a very strange comparison. And while it’s technically true, why compare loneliness to the low smoking group and not the high smoking group? The same person who did that study was the person who wrote the Surgeon General’s advisory. I reached out to her, and she just referred us to her frequently asked questions page. My only possible guess is that she used one to 15 because that was the comparison that sounded best.
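To make the sleight of hand concrete, here is a toy illustration in Python. The relative risks below are invented purely for this sketch (they are not the study’s actual estimates); the point is only that matching loneliness to the *low* smoking category is technically true while implying something far scarier.

```python
# Toy illustration with made-up relative risks -- NOT the actual
# estimates from the loneliness study or the smoking literature.
# The rhetorical trick: loneliness matches the LOW smoking category,
# but listeners hear "as deadly as a pack-a-day habit."
hypothetical_relative_risk = {
    "0 cigarettes/day (baseline)": 1.00,
    "1-15 cigarettes/day": 1.25,  # most of the jump occurs at low doses
    ">15 cigarettes/day": 1.90,
    "loneliness": 1.25,           # equal to the LOW category only
}

for group, rr in hypothetical_relative_risk.items():
    print(f"{group:<30} relative risk: {rr:.2f}")
```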
You can also technically say that Americans are spending the most time alone that has ever been recorded. But the decrease in time spent with others over the past 20 years was 1.7 percent. So, one version of this story sounds horrible, and the other sounds like not a big deal.
This issue of effect size is a consistent problem with a lot of research in medicine and the social sciences. It’s entirely true that a study can find a statistically significant effect that has no meaning whatsoever in the real world. There have been a couple of unpublished studies of cell phone bans in schools that were hyped as if they provide evidence for these bans, but they don’t, because the effect size is essentially zero. The actual impact of cell phone bans on student learning is negligible: they do not meaningfully improve standardized test scores, grades, or anything else. But when you run 600,000 kids through an analysis, everything is statistically significant. Plucking a hair out of their heads once a day could’ve been statistically significant.
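As a quick demonstration of that point, here is a minimal simulation in Python (entirely hypothetical, simulated data, not any actual phone-ban study): with roughly 600,000 students, an effect far too small to matter in any classroom still clears the p < 0.05 bar with ease.

```python
# A minimal simulation -- hypothetical data, not any real phone-ban study.
# With ~600,000 students, an effect of 0.02 standard deviations
# (practically meaningless) is still "statistically significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 300_000    # students per group, ~600,000 total
effect = 0.02  # true difference in standardized test scores, in SD units

no_ban = rng.normal(loc=0.0, scale=1.0, size=n)
ban = rng.normal(loc=effect, scale=1.0, size=n)

t, p = stats.ttest_ind(ban, no_ban)
pooled_sd = np.sqrt((ban.var(ddof=1) + no_ban.var(ddof=1)) / 2)
cohens_d = (ban.mean() - no_ban.mean()) / pooled_sd

print(f"p-value:   {p:.2e}")         # vanishingly small -> "significant"
print(f"Cohen's d: {cohens_d:.3f}")  # ~0.02 SD -> no real-world meaning
```

The p-value comes out vanishingly small even though the simulated effect would be invisible in practice, which is exactly why effect sizes, not p-values, should drive policy claims.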
We need much greater rigor around this issue of effect sizes, and unfortunately, we are not rigorous either in medicine or in social science around that issue right now.
**Despite your book being called *Catastrophe!*, it ends on an optimistic note: once we’re aware of our cognitive biases, we can seek to limit them and prioritize truth-seeking. Are there any of these adaptive strategies that we ought to cover?**

Yeah. There are two things that I can think of. One is simply that people do listen to data; you just have to be super patient with them. Most people are not going to back down in the middle of an argument and admit they’re wrong, so oftentimes, when you’ve persuaded people, you may never find out. Persuasion can feel very unfruitful and unrewarding. I have had arguments with people where I thought we’d never talk again, but a month later, they came back and said, “I actually thought about what you said, and I agree with some of the points you made.” Then you usually try to reciprocate and say, “Well, you made good points too,” and you eventually find some common ground. So, repeating data over and over can work if you are patient and try to look like the more reasonable one in the debate, but you should recognize that you may not get rewarding feedback.
Another thing is stoicism. I find the research suggesting that stoicism supports resilience pretty compelling. First off, a lot of Cognitive Behavioral Therapy is essentially trying to teach stoicism: you have this belief, so test it against reality. What are some alternative hypotheses that might explain the same event? What evidence do you have for each of them? How can you approach this in an intellectual rather than an emotional way?
Over the last 10 years, we’ve told people to do the opposite of that, to immerse themselves in their feelings and explore every nook and cranny of their trauma. I actually find that trying to intellectualize your way through things is related to more positive outcomes. I was just talking about persuasion in the sense of trying to give people data, but on the other side, being able to change our hypotheses about the world and about ourselves in accordance with data is very, very healthy.