A few years ago, I was asked by a colleague to read a book she was considering as part of a leader development curriculum. The book’s clear condescension towards female leaders incensed me, and yet some of its statements were backed up by “studies.” Because I was so annoyed, and wanted to be sure I had the facts right when I argued against inclusion of the book, I looked up the cited studies. In several instances, the outcomes of the studies were egregiously misrepresented, or the studies themselves were seriously flawed. In the end, I needn’t have rallied all my arguments, because my colleague decided against the book as well.
But the exercise left a nagging question. If I weren’t such a sensitive reader, and I didn’t have the resources or inclination to find the original studies, how would I have known? Might I have blithely included the book on a management development reading list?
That particular incident came back to me last week when I ran across several posts related to Malcolm Gladwell’s new publication, David and Goliath. Gladwell’s books hit the best-seller list, so it’s often important to know what he’s saying. But I found several articles and blog posts that took issue with the conclusions in the new book, arguing that the science is being extrapolated to unwarranted conclusions. Full disclosure: I haven’t read the book, and I haven’t myself followed up to see whose arguments are more compelling; I’m just pointing out that there are clearly advocates and detractors here. (For a taste of the arguments, see here and here.)
My point in this post is that it’s not as easy as you might think to follow evidence-based recommendations for practice. Just because a book has pages of notes citing academic journal articles doesn’t necessarily mean that its conclusions and recommendations are sound. Articles and books with research evidence often sound plausible and seem foolproof, so it is tempting to latch on to the ideas that resonate and run with them. But maybe not so fast. What if the recommendations you are considering aren’t really in alignment with the science? What if the science is flawed or contradicted elsewhere? What is a practitioner supposed to do to ensure he or she is not following a popular fad instead of more robust evidence?
As an advocate for scholarly practice, I find these are the things that keep me up at night.
I’ve been thinking about it, wondering what we can do to avoid this trap. Here are some steps I think we can take to validate the ideas before we drop that book or article on our boss’s desk:
> Read carefully. Good writers know how to capture your attention and focus on the stories and arguments that compel action – they tap into our emotions, not our rationality. But try to follow the flow of the argument and see if it indeed seems well-reasoned. For example, chapter notes might reveal that some pretty audacious claims are being made based on one non-peer-reviewed study from 20 years ago. Audacious claims should have pretty compelling evidence, or they should be accompanied by a caveat (which doesn’t make good headlines, but makes good science).
> Look for reviews. Spend considerable time reading what other people are saying about the ideas; look for blog posts and comments on online articles as well as official reviews. But take the time to ascertain the credentials of the person writing the reviews, too. For example, I have loosely followed a blogger/tweeter whose claim to fame is debunking popular neuroscience claims. The problem is, it isn’t clear who is writing the posts or what their background is. And from the tone, the blog clearly has an agenda. That author’s views may not be as objective as they could be.
> Look up the underlying studies. Do a little research and review the source of the recommendations to see for yourself what those studies say (and how they came to those conclusions). Poke around for follow-up research or for other researchers who may be following the same line of thinking, or who may be building a case against it. Look for the more recent studies, especially in rapidly developing areas of science like the neuroscience related to learning.
> Compare the ideas to what you know from historical theory and research. To what degree do these ideas fall into a stream of research that builds a good case? In what ways do they contradict the prevailing wisdom – and what is the basis for contradicting it? We should change our understanding of phenomena over time, but we usually want to see multiple points of evidence rather than just one study before adjusting our approach.
> Use your good judgment. Even as I am advocating for looking for a stream of evidence, I also recognize that sometimes brilliant ideas are unique. You’ll have to use your experience and good judgment to try to separate the truly revolutionary from the snake oil. Carefully examine the ideas; play devil’s advocate; talk to other people to gather additional opinions and insights; and consider following up with authors.
What else would you add?
You are surely not going to follow up every article or book with this level of attention to its foundational research. But for those ideas that will form the basis of your entire strategy, or that will require considerable effort to implement in your organization, due diligence is indeed necessary. Nate Silver introduced us to the idea of the difference between signal and noise. Bold headlines generate a lot of noise, and it’s easy to be swayed by them. The good news is that these kinds of bold headlines also compel people to raise concerns when they have them. We just have to look around to see if there are voices urging caution.
Added 10/23: Here are some additional on-point links:
Everything you’ve ever been told about how you learn is a lie. By Shaunacy Ferro in Popular Science.
Trouble at the lab. From The Economist.