When I was 34 years old, I decided to go back to college and then promptly got pregnant with my fourth child. No one was more shocked than I was when suddenly I found my science class was more appealing to me than my humanities classes. I think it was hormonal.
Many biology credits later, I am a science gal. I like to back myself up with valid sources of research, whether I am writing about the history of skullcap or neurotransmitters. I love playing in my lab. I adore my microscope.
Some herb people aren’t interested in scientific research. In fact, in some circles there is pushback against it, and having been around in the days when science was being used to refute the usefulness of herbs, I understand where that pushback comes from. But the boomers haven’t noticed that with the advent of integrative medicine (and what some consider an impending attempt to co-opt herbal medicine), more biomedical research supporting the use of herbal therapeutics is being published all the time.
So while I get it, I question the wisdom of falling too far behind the times. I decided to combine my loves of tradition and science by using modern research to support very old ideas, which I pull from primary documents whenever possible. It makes me happy, but I still can’t help being bothered by the amount of misinformation I see out there.
I am definitely not alone in this concern. A while back, I read an article that reiterated many of the concerns I have about junk herbalism on the web: “Instead of trying to translate what the best-available research evidence tells us about how to live, we report on the latest studies out of context, with little focus on how they were designed, whether they were unduly conflicted by study funders, and whether they agree or disagree with the rest of the research.”[ii]
Many of us know that authors on sites like Natural News and Green Med Info are guilty of regurgitating research articles without really understanding their contents, but many “more reputable” sites like Science Daily take the same approach. The article circulating this week about intermittent fasting is a prime example. The study is not great science, but it seems to be great clickbait. It generates hits, and whether a health journalist gets a follow-up assignment often depends on the number of hits an article gets.
Writers relaying health research to the public seem unable to recognize a statistical nightmare when they see one. It’s not entirely their fault. Biomedical researchers are cooking the books, a bit. When I was in college, I took a statistics class specific to biology majors rather than calculus. (I am one of those mythical neurodivergents who does not love math.) At the beginning of that class, the professor mentioned that we could use statistics to make our results support our hypothesis, and then went on to teach us how.
Consequently, I was not at all surprised last year when Richard Horton, editor of the prestigious medical journal The Lancet, published his scathing commentary on biomedical research, saying “Our love of ‘significance’ pollutes the literature with many a statistical fairy-tale”[i] while lamenting the fact that none of the parties involved has any incentive to address the issue of bad scientific practices in biomedicine.
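To make the “statistical fairy-tale” concrete, here is a minimal sketch in Python (my own illustration, not anything from the Lancet piece). If you run enough experiments in which the treatment truly does nothing, roughly one in twenty will still come up “significant” at p < 0.05 by chance alone, and reporting only the significant one is exactly the book-cooking in question:

```python
import random

random.seed(42)

def permutation_p(a, b, n_perm=2000):
    """Two-sided permutation-test p-value for a difference in means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_perm

# Twenty "experiments" where the treatment does nothing:
# both groups are drawn from the very same distribution.
false_positives = 0
for _ in range(20):
    a = [random.gauss(0, 1) for _ in range(15)]
    b = [random.gauss(0, 1) for _ in range(15)]
    if permutation_p(a, b) < 0.05:
        false_positives += 1

print(false_positives)  # typically a handful of null tests look "significant"
```

This is the multiple-comparisons problem in miniature; real studies dress it up with more variables and more subgroup analyses, but the arithmetic is the same.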
(I am interested in having a discussion about who qualifies as a health journalist. I spent just enough time soliciting writing assignments on Elance that I am pretty much appalled by the qualifications of the people hired to write blog posts and eBooks on the subject. I absolutely disagree with the author of the Vox article, who says health writers don’t need a science background; that is the recent battle cry of online publications, because they hire unqualified people. My mother was an award-winning education journalist, hired because she was an education minor. Successful political journalists also tend to have a background in political science, sports journalists tend to be former athletes, and so on. But I digress.)
The Vox article’s rather limited solution to the problem was that health journalists should use only the systematic reviews that inform evidence-based medicine, which will understandably put some people off. The research used by evidence-based medicine has been critiqued heavily in recent years.[iii]
I grudgingly tend to agree with this advice, even though it runs counter to what you might hear from other herbal researchers, who flip this pyramid on its head. I maintain that without training, you aren’t going to be able to make heads or tails of the data presented in a study, as it is often designed to mislead and full of confounding variables. Telling a lay person to critique methodology is absolutely pointless if they don’t have any knowledge on which to base their critique. So I want to talk about research.
Image Source: Himmelfarb Health Sciences Library.
I thought I would begin “Let’s Talk About Research Week” by defining and discussing the research pyramid I use to assess the reliability of scientific literature I come across. Almost every nursing and medical program has one of these. This is just the one my school used, which I am rather attached to. It doesn’t include expert opinion at the base, but you certainly could include sources like expert interviews and textbooks on the bottom. There are advantages, disadvantages, and design traps that researchers fall into with any of these study types, but that may be too much to cover on a blog.
The pyramid also doesn’t include the various types of research methodology. Most filtered sources include literature that uses a variety of these methods. We will get to that tomorrow, when I break up the whole in vitro–in vivo binary a little.
The tip of the pyramid is populated by what are called filtered information sources, meaning that someone else has already looked at the body of research included in the references and drawn conclusions from it. You still need to apply your experience and critical thinking when reading filtered information: it can be based on “statistical fairy-tales” or, more frequently, exhibit researcher bias against complementary medicine.
Meta-Analysis and Systematic Review
When compiling a systematic review, researchers look at many, many studies on a subject and draw conclusions based on that body of research. A review should tell you what search terms the authors used to gather the literature, so you can perform a similar search. A meta-analysis is the most reliable type of systematic review because it pools data across studies, giving it more statistical power, but see my aforementioned concerns about statistics and biomedical research. Some sites where you can obtain reviews or their abstracts include the NIHR CRD Database, NHS EED, the Cochrane Database, DARE, the Campbell Collaboration Library, and TRIP. (I could write a whole blog entry on how much I love TRIP, but suffice it to say it is absolutely worth $40 a year, and I do not use PubMed. Ever.)
People rarely mention these types of literature to those studying herbalism, but more and more of them are being tailored specifically to integrative medicine interventions all the time. They can also be useful if you want to find up-to-date information on conventional clinical practice. Go to the National Guideline Clearinghouse or TRIP and do a search for “complementary”. You can also use UpToDate and ClinicalKey. These are behind paywalls, but some information is available for free.
Now we will move on to unfiltered sources of information, which basically means no one has applied their critical thinking skills to this literature for you. That doesn’t mean you can’t. As for where to find them and how to assess them…more about that on Wednesday.
Randomized Controlled Trials
These studies randomly sort subjects into experimental or control groups. Blinding (or masking) occurs when the participants don’t know which group they are in; double blinding occurs when the researchers don’t know either. Consequently, double-blind RCTs are the most reliable, though there are ethical concerns that come with double blinding. The key here is that the researchers are actively experimenting on the subjects.
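The random-sorting step itself is simple enough to sketch. Here is a hypothetical illustration in Python (the participant names and group labels are my own, not from any particular trial): shuffle the roster, then split it, so neither the subjects’ order of enrollment nor anything else about them influences assignment.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to 'treatment' or 'control'.
    Shuffling a copy and splitting it in half keeps group sizes balanced."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

roster = [f"subject_{i}" for i in range(20)]
groups = randomize(roster, seed=1)
print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```

Real trials layer allocation concealment and stratification on top of this, but the core idea is just an unbiased shuffle.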
Cohort (Prospective) Study
A cohort is simply a population that shares similar traits. A cohort study is generally a longitudinal observational study: researchers formulate a hypothesis and then watch the cohort for a pre-established amount of time. The Nurses’ Health Study is one example of a cohort study. Sometimes researchers compare one cohort to another, such as comparing the sexual function of 35-year-old males who smoke to that of 35-year-old males who don’t.
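When two cohorts are compared like that, the result is usually summarized as a relative risk: the incidence of the outcome in the exposed cohort divided by the incidence in the unexposed one. A small Python sketch with entirely made-up numbers:

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk ratio from a prospective cohort comparison:
    incidence among the exposed divided by incidence among the unexposed."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical counts: 30 of 200 smokers vs. 10 of 200 non-smokers
# develop the outcome over the follow-up period.
print(round(relative_risk(30, 200, 10, 200), 2))  # 3.0
```

A relative risk of 3.0 would read as “smokers in this cohort were three times as likely to develop the outcome,” which is only meaningful because a cohort study follows people forward and can measure incidence directly.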
Case Control (Retrospective) Study
This study compares cases (subjects with an illness) to controls (healthy subjects) and looks at their past exposure to risk factors to try to determine a relationship. It relies on the accurate memories of all participants. One type of memory issue is called recall bias: people with a condition are looking for answers and are therefore more likely to remember exposure to risk factors.
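Because a case-control study samples by outcome rather than by exposure, it can’t measure incidence directly; the standard summary is instead an odds ratio computed from the 2×2 table of cases and controls. A sketch with hypothetical counts:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 case-control table:
    (exposed_cases / unexposed_cases) divided by
    (exposed_controls / unexposed_controls), i.e. (a*d) / (c*b)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts: 40 of 100 cases were exposed to the risk factor,
# versus 20 of 100 controls.
print(round(odds_ratio(40, 60, 20, 80), 2))  # 2.67
```

Here the cases had roughly 2.7 times the odds of past exposure, and recall bias can inflate exactly this number, since the exposure counts among cases depend on what sick people remember.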
Case Report and Case Series
This is literature that describes the symptoms of a specific subject and the subject’s response to a particular intervention protocol. Case studies are awesome. In my opinion, they should be the foundational unit of research, because they are where we generate new ideas about interventions, but they can’t be taken too seriously as evidence. If I can cite only one case report, that is at best an interesting anecdote. If I have ten cases with similar initial conditions, similar intervention protocols, and a similar outcome, that’s called a case series, and conclusions based on a case series carry a little more weight. A case series also establishes the need for further research on a subject. Unfortunately, you rarely see them.
Tomorrow, I will post something that breaks down some of the methodology you will see in this literature, and next week I plan to talk about how to get at the research, along with other tips for writers, such as evaluating clinical questions.
[i] Horton, Richard. “Offline: What Is Medicine’s 5 Sigma.” The Lancet 385, no. 9976 (2015): 1380.
[ii] Belluz, Julia. “Health Journalism Has a Serious Evidence Problem. Here’s a Plan to Save It.” Vox, June 21, 2016. http://www.vox.com/2016/6/21/11962568/health-journalism-evidence-based-medicine.
[iii] Every-Palmer, Susanna, and Jeremy Howick. “How Evidence-Based Medicine Is Failing due to Biased Trials and Selective Publication.” Journal of Evaluation in Clinical Practice 20, no. 6 (December 2014): 908–14.