Science historian Naomi Oreskes explains why science's social character makes it trustworthy, and what we can learn from science's past.
In this episode
- We find out what flossing has to do with trusting science
- Colleen and Naomi discuss Naomi's new book, Why Trust Science?
- Naomi explains why double-blind studies aren't always necessary, or even possible
Timing and cues
- Opener (0:00-0:31)
- Intro (0:31-2:32)
- Interview part 1 (2:32-17:25)
- Break (17:25-18:24)
- Interview part 2 (18:24-24:41)
- This Week in Science History Throw (24:41-24:55)
- This Week in Science History (24:55-27:57)
- Outro (27:57-28:59)
Show credits
This Week in Science History: Katy Love
Editing and music: Brian Middleton
Research and writing: Jiayu Liang and Pamela Worth
Executive producer: Rich Hayes
Host: Colleen MacDonald
Full transcript
Colleen: Naomi, welcome to the podcast.
Naomi: Thank you. Nice to be here with you.
Colleen: I had the pleasure of reading an advance copy of your book, and I found it super interesting and also really accessible to the non-scientist. So I wanted to start a little bit with the history. So when I think of how modern science is practiced, for me, it conjures the idea of the scientific method, some set of principles or a system that scientists adhere to and it's the primary way that we know science is trustworthy.
So I found the first chapter in your book really fascinating because it takes us through the progression of science from the individual scientist where the reputation of the man of science was what made us trust the science to our current system, which is more collaborative. So can you give us a brief history of that transformation and how things unfolded?
Naomi: Sure. So I think most of us were raised with the vision of science that you just described, that there was this thing called the scientific method, singular, and that science was reliable by virtue of the fact that scientists used that method. But what we've learned over the past hundred years of history and philosophy of science, and also sociology and anthropology of science, is that that picture is incorrect. It's false on a philosophical level, but it's also just not what scientists actually do. So if we look at what scientists do, we see it's very diverse. There are lots of different methods that scientists use. But that creates a new question: well, if scientists use a lot of different methods, if there isn't one unique, singular scientific method, then how or why is science reliable? And that was the question I wanted to answer in this book.
Colleen: So what did we start off with?
Naomi: It's an interesting thing that many people think, well, science used to just be individual scientists, typically men. But actually, that's not true. If we go back to the earliest days of what we could recognize as modern science, we find that science has always been collaborative. There have always been scientific societies, organizations like the Royal Society in England or the Académie des sciences in France. These collaborative organizations were places where people would bring their ideas and their evidence to bear and present their claims in front of a group of peers, other scientists, other men of learning, or savants as they were sometimes called in France.
And that was a crucial thing, because it was in that process of bringing evidence before an audience of peers and having those colleagues look at the evidence, debate the claims, argue about them, that we see the emergence of what I would say is, in fact, science. And that's the piece that, as historians, we can identify as the one consistent thing we can follow all the way back to the 17th century.
Colleen: So I think I'll ask you the question that is the title of the book: why, then, should we trust science?
Naomi: Right. Well, so the question, why trust science, came out of a public lecture that I gave many years ago. For many years, I've been lecturing on the history of climate science. And so one time, I gave a lecture. It was a very finely crafted lecture with lots of good slides and lots of telling detail and at the end of it, a man stood up, put his fists on his hips and said, "Well, that's all very well and good, professor. But why should we trust the science?" And I stood there for a moment. I thought, "Well, that's a good question. Fair enough."
So this book is an attempt to answer that question. And the short version is, it's not because of a unique scientific method and it's not because of who scientists are as individuals. It's not because they're particularly smart or particularly moral or anything like that. Maybe they are, maybe they aren't. But it's because scientists have means of evaluating claims, of vetting claims, and subjecting those claims to critical scrutiny. And it's that collective or social process of critical scrutiny, which I argue, is the thing that yields reliable knowledge.
Colleen: So you give a good analogy in the book about how we decide we can trust a plumber and that maybe we could use the same sort of criteria for evaluating whether or not we should trust the scientists. And that sort of resonated with me. It really made sense. So can you tell us that analogy?
Naomi: Sure. Well, I'm glad it resonated because one of the things I try to do in this book, of course, is to think about examples from ordinary life that we're all familiar with. Because one of the problems of science is that there's this mystification of science. And that can be very alienating. It can make science seem like something we can't understand.
So if we think about it, there are many people in our lives whose expertise we trust. So whether it's your plumber, your dentist, your car mechanic, well, we'll get back to the plumber, right, because that's an example almost all of us have had. Why do we trust the plumber to do our plumbing? It's because we know the plumber has specialized training, we know he or she has specialized tools, and hopefully, if it's a decent plumber, has experience doing the job that needs to be done.
So what I'm arguing is that scientists are the plumbers of knowledge, right? These are the people who, in our society, have the training, the expertise, and the experience to answer questions about the natural world. Now, I'm not saying we should trust them blindly. So just as with a plumber, you would ask friends for advice, you would check the reputation, maybe go online and read the reviews. So we do judge people by their track records. And in the case of science, if we judge science by its track record, we see that the track record is very, very good.
Colleen: So what do we do in instances where science was wrong?
Naomi: So the question of track record, of course, invites us to look more closely at the track record of science. And in the book I argue that, if we do that, we actually see that the track record of science is amazingly good. The cases where we can identify scientists, trained experts, going wrong are actually not that common, but it's very telling to look at them in detail. So in the second chapter of the book, that's what I do. And what we find in the cases I look at is that, for the most part, there actually wasn't a scientific consensus; in these cases, there was informed debate and dissent even at the time. My favorite example is the story of the limited energy theory.
Colleen: Yes.
Naomi: And this was a case that a student of mine, Kate Bateman, wrote her master's thesis on many years ago. So the limited energy theory is an amazing story. I really wish some filmmaker would call me and we could make a movie about this, because it's such a great story and almost no one knows about it. In the late 19th century, a physician here at Harvard, a professor of medicine named Edward Clarke, wrote a book called "Sex in Education." And in this book, he claimed that if women were educated, and particularly if they were subject to the rigors of higher education, their reproductive organs would shrink and they would become infertile. You're smiling. The audience can't see that you're smiling.
Right. I mean, in hindsight, from the perspective of today, the obvious sexism and gender bias of this is pretty clear to us. But it wasn't obvious and clear to everyone in the late 19th century. The book was very successful. It was a bestseller. It went through multiple editions. And it was so successful that the women who were involved in pioneering higher education at that time, because this is the time when many women's colleges were founded in America, Smith, Vassar, Radcliffe, Bryn Mawr, had to fight against a widespread popular belief that this theory was correct. And it was presented as a deduction from scientific theory, specifically from the conservation of energy.
So most of us know, if you remember any of your high school science, that conservation of energy says that in any closed system there's a fixed amount of energy. So if we use energy for one thing, like getting educated, we have less energy for something else, like reproduction. But looking back, there's an obvious flaw in this theory, which is: well, if that's true, why doesn't it apply to men? In particular, Dr. Clarke insisted that women's uteruses would shrink, but he never asked what part of men's anatomy would shrink. So there's a kind of obvious asymmetry here. But nevertheless, it could have been the case that all scientists accepted this theory, in which case we would have to say, "Well, here's an example of scientists saying something that, in hindsight, we would think is ridiculous and obviously biased."
But the fact is, scientists didn't all think that. There were many people who objected at the time, including an important woman physician, Mary Putnam Jacobi. Mary Putnam Jacobi pointed out many of the obvious flaws in this theory, including that his sample size was tiny, just seven patients, along with a whole lot of other methodological problems.
Colleen: So maybe this is a naive question but were they able to look at a uterus and see if it was, in fact, smaller?
Naomi: No, exactly. So one of the methodological problems of this theory is that, in effect, what he was doing was using the hypothetico-deductive model, which many people think of as being the scientific method. He had a hypothesis derived from conservation of energy, and he deduced a consequence, which is that women's reproductive organs would shrink. But if you're really going to follow the hypothetico-deductive model, then you have to test to see whether that hypothesis is true. And he never did. He had no empirical evidence to support the claim that the uterus would shrink. So it was a very small sample, it was a biased sample, and he had no independent confirmation of the central claim.
Colleen: You know what's really interesting about that, though, is when you read one of the other case studies, the birth control one, where there was so much information from women who were experiencing depression from taking birth control pills, and yet the mountains of stories that women were telling weren't really taken into account.
Naomi: Right. So one of the important lessons here is that two of my cases involve the discounting of the views or evidence of women. In the first case, the limited energy theory, we have a woman physician pointing out these obvious, large methodological mistakes in Clarke's work. But for some reason, it took a woman to do that, even though one might have thought that an intelligent man could have seen those errors as well. In the second case, which is the contraceptive pill, we have large amounts of evidence that the contraceptive pill was, in fact, causing depression in women who were taking it. But that evidence was discounted by predominantly male physicians who did not consider evidence from patients, patients' self-reporting, to be reliable.
So there are two lessons I draw from that. One of the arguments I make is that evidence comes in many different forms. And sometimes, evidence is not in the form that you would like to have. So physicians or biomedical researchers would prefer to have double-blind clinical trials.
Colleen: So just tell us what a double-blind trial is.
Naomi: Sure. A double-blind trial means, let's say I have a new drug and I want to compare it to an existing drug or to a placebo. I take two groups of people who, hopefully, are similar in all other respects. One group gets the new drug, the other gets the old drug or the placebo. But they don't know which one they're getting, so there can't be wishful-thinking bias effects. And moreover, the person giving them the drug doesn't know either, because we know from lots of research that if I know who's getting which, there might be subtle cues that people would pick up on.
So the person giving the drug doesn't know, and neither does the person receiving it. If you can do it, that's a very good thing to do, because we know psychological bias is real and, in some cases, can be a very large effect. But you can't always do a double-blind trial. If a woman is taking the birth control pill, she has to know that, right? So you can't do a double-blind trial. Similarly in nutrition: you can't do double-blind trials in nutrition, because people know what they're putting in their mouths. So if you can't do a double-blind trial, it doesn't mean that you throw up your hands and say, well, we're just defeated. No, there are other things you can rely on, and they may not be ideal, but they still might be useful.
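To make the blinding Naomi describes concrete, here is a minimal sketch, in Python, of how a trial's random assignment can be generated so that neither participants nor the clinicians administering the drug see which arm a code stands for. Everything in it (the participant IDs, the two-arm design, the function name) is invented for illustration; it is not from the book or from any real trial software.

```python
import random

def blinded_assignment(participants, seed=None):
    """Randomly assign participants to two arms behind opaque codes.

    Returns (assignments, key). `assignments` maps each participant
    to an opaque code ('A' or 'B'); `key` maps the codes to the real
    arms and is meant to be sealed away until the trial is unblinded.
    """
    rng = random.Random(seed)
    arms = ["new drug", "placebo"]
    rng.shuffle(arms)  # even which code means which arm is randomized
    key = {"A": arms[0], "B": arms[1]}  # held by a third party, not trial staff
    assignments = {p: rng.choice(["A", "B"]) for p in participants}
    return assignments, key

# Hypothetical usage: patient-facing staff work only with `assignments`;
# nobody who interacts with patients ever looks inside `key`.
assignments, key = blinded_assignment(["p01", "p02", "p03", "p04"], seed=7)
print(assignments)
```

The design choice mirrors Naomi's point: because patients and patient-facing staff see only the opaque codes, there is no channel, however subtle, for expectations to leak into the results.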
And so in this case, I argue that the women's self-reports, their own experience of this drug, should have been taken as important information, possibly imperfect but not nothing. And instead, the vast majority of physicians and scientists discounted this evidence and kept insisting that the pill was safe, kept insisting that it did not cause mental health effects, even though we now know, beyond any reasonable doubt, that in fact it does. So that's a case for arguing for a certain kind of open-mindedness about evidence and not succumbing to what I call methodological fetishism: to say that if I can't have it exactly this way, then I don't want it at all; or if it's not a double-blind clinical trial, then I just discount it completely. And that issue comes up in one of my other examples, which is the dental floss case. So we saw, not that long ago, some listeners may remember, a lot of attention in the mass media to the claim that dental floss was no use, but...
Colleen: The great dental floss scam. Right.
Naomi: Scam. Right. The military-industrial dental floss complex. Right. So in that case, again, we actually had a lot of evidence about the efficacy of dental floss, but again, not from a double-blind trial, because obviously you know if you're flossing or not. But the reporters who covered it claimed that there was no evidence, or "no good evidence." What they were missing was the idea that, okay, maybe it wasn't the best possible evidence, but it's still evidence.
Colleen: And I did notice that you maybe were advocating for professional flossers that could come in and help people floss their teeth properly.
Naomi: Well, right. Because one of the things you learn in science is often, the devil is in the details. And so it turns out one of the reasons why dental flossing is confusing is because it's what dentists call a technique-sensitive intervention, which means it depends on how well you do it. And it turns out, a lot of us don't floss very well. We don't take enough time particularly...
Colleen: And we lie about it.
Naomi: And we lie about it. We're not honest. We claim we're flossing when we're not. And also children: for flossing to be most effective, you wanna start young, because children tend to get the most cavities, and children often don't do it right just because their hands are small and they don't have good dexterity. But in one study where they actually had professional dental hygienists flossing the teeth of children, they saw dramatic positive effects. So I say this slightly tongue in cheek, but not entirely. What this tells us is that how you do something is often as important as whether or not you do it. So yes, I imagine a world where, next to Starbucks, there could be a little dental flossing bar where you could just pop in and, for five bucks, have a quick flossing, and we would have much better dental health.
[Break]
Colleen: So what is the role of diversity in science? Because, obviously, as a woman, the examples that we just talked about might lead me to be less trusting of science. And I think one of your other examples, the eugenics example, raises the same question for people of color. So why should I, as a woman or a person of color, trust science?
Naomi: One of the things you have a right to ask is: is this community diverse? Because if it turns out that the scientific community is all white men, then as a woman or a person of color, you might actually have a right to be a little skeptical, and to look closely and not just assume that the finding is robust. But if you see that the community is diverse, if there's good evidence that people have really had a chance to speak freely, that the processes of critical interrogation have really been taking place, then yes, we should trust science. And the good thing about that model, in my opinion, is that those are questions we can ask.
So, for example, I deal a lot with climate science. There are people who are skeptical of climate science, but they could take my model and say, "All right, well, let's look at the IPCC," the Intergovernmental Panel on Climate Change. "Is it diverse? Are there men and women? Are there people from all across the world, people from rich countries and poor countries? Have they looked at this problem from many different angles? Have they collected many different kinds of evidence?" So it actually gives you a set of criteria that you can apply in any case, whether it's climate change, vaccination, or evolution.
And so my argument is, in a way, an empowering argument for citizens. It gives us a kind of checklist that we can use as ordinary people to say, "Okay, I'm gonna ask these questions, and I have a right to ask these questions. And if the answer is yes, they are diverse, they have looked at it from many different angles, then I should say, 'Okay, then I'm probably justified in trusting them.'" And it doesn't mean that science is perfect. It doesn't mean that there won't be cases where, for whatever reason, we learn in the future that things are different. But it means that, in a given moment, this is probably the best we can do.
Colleen: Well, you set me up perfectly for my next question. So, Naomi, if I asked you to create a handy card for me that would fit in my wallet, kind of like Seafood Watch, where you can pull it out and see what kind of seafood you should eat, and it had bullet points to remind me what to keep in mind when deciding if the science is trustworthy, what would you put on that list?
Naomi: Number one, is the scientific community diverse? So the community of experts involved in this, is it diverse? That, to me, is the single most important thing. Second, have they looked at this problem from a number of different angles? Have they collected different kinds of data? And have different, independent groups of people been involved in collecting that data?
And then the third thing I would say is, has this work been going on for a bit of time? Because the other thing we've learned through our research is that it does take time for scientists to sort out complicated things. Some of the issues that I've looked at were debated for 10, 20, 30 years before the scientific community came to agreement. So if something is a very new question, then I think we're right to possibly withhold judgment.
And the fourth thing is consensus. Is there general agreement now among the scientific community on this issue or is there still significant dissent within the scientific community? And that's a crucial modifier because there'll always be people outside the community who are disagreeing because their financial interests are threatened or their ideology is threatened. But if scientists, people inside the scientific community, are raising questions, then that's something we should listen to. But if scientists have come to agreement and there is a consensus, then that's telling us that again, it may not be perfect but it's probably the best information we have to hand right now.
Colleen: The dental floss example that you use, and the looming climate crisis, make me want to end with Pascal's Wager. So can you tell our listeners what that is? And then I was hoping you might read a short excerpt from the book.
Naomi: Pascal was an important French philosopher and mathematician who, several centuries ago, raised the question, should we believe in God or not? And as a scientist, he wanted to try to come up with a scientific proof for the existence of God. And after many years and much thought on this issue, he concluded that that was not possible. But what you could do is you can think about it in probabilistic terms. And he came up with a concept that has come to be known as Pascal's Wager.
He argued, well, think of it this way. If I believe in God and it turns out there is no God, there's really not much harm done there. But if I choose not to believe in God and God really does exist, then I'm in deep, deep trouble and I'm going to burn in hell for eternity. So it's obvious if you think of it from a rational, logical, probabilistic standpoint, that we must believe in God and hope for the best.
So that notion of Pascal's Wager can be applied to any problem including the problem of climate change. And so in the book I come to a conclusion about how we can apply this to questions about our own decision making, whether it's dental floss, vaccination, or acting on climate change. And so I write, "No matter how well established scientific knowledge is, no matter how strong the expert consensus, there will always be residual uncertainty. For this reason, if our scientific knowledge is being challenged for whatever reason, we might take a lead from Pascal and ask, 'What are the relative risks of ignoring scientific claims that turn out to be true versus acting on claims that turn out to be false?' The risks of not flossing are real but not inordinate. The risks of not acting on the scientific evidence of climate change are inordinate."
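The logic of the wager as Naomi applies it can be written out as a small expected-cost comparison. The sketch below is a toy illustration only: the probability and the cost figures are invented to show the asymmetry, and are not from the book.

```python
def expected_cost(p_true, cost_if_true, cost_if_false):
    """Expected cost of a choice, given the probability the claim is true."""
    return p_true * cost_if_true + (1 - p_true) * cost_if_false

p = 0.9  # assumed probability that the scientific claim is true

# Choice 1: ignore the claim. Catastrophic if the claim turns out true.
ignore = expected_cost(p, cost_if_true=1000.0, cost_if_false=0.0)

# Choice 2: act on the claim. A modest, bounded cost either way.
act = expected_cost(p, cost_if_true=50.0, cost_if_false=50.0)

print(f"ignore: {ignore:.0f}  act: {act:.0f}")  # ignore: 900  act: 50
```

Because one error is inordinately costly and the other is not, acting remains the better bet across a wide range of assumed probabilities, which is exactly the asymmetry the excerpt draws between flossing and climate change.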
Colleen: Well, thank you, Naomi for joining me. I really enjoyed the book and I'm hoping that all of our listeners will pick up a copy and read it because it's really accessible to the average person like me.
Naomi: Thank you. Thanks so much. It's been a pleasure speaking with you.
Check out all of the Got Science? podcast episodes.