The Antidote

Counterspin for Health Care and Health News

Monday, November 20, 2006

What do I mean by high-quality research?

Let me start out by confusing you, just a bit. My readers have heard me talk about high-quality health research, and health care quality research, which overlaps with the former, and then, of course, there's high-quality health care quality research, which is a subset of both. Got it?

What I want to talk about today is high-quality health research, because I've alluded to it before, and because it's a key area of understanding for journalists who write about health, for health practitioners, for health policymakers, and even for plain old citizens. Learning to tell a good study from a not-so-good one isn't something typically taught in high school, though I would argue it should be. I'd also like to point out that there's a role for not-so-good studies; some of them, arguably, may not deserve to exist, but pretty much every study that gets published is a small stitch in the larger fabric of research, and so of interest to other scientists in the same field.

In my book, a "good" study is one that avoids confounding - a common problem with observational (epidemiologic) studies. Let's say we're looking at the association between a predictor variable X (e.g., oat bran consumption) and an outcome variable Y (blood cholesterol). A confounder (C) is a factor that is related to both X and Y; a possible confounder in this example is milk consumption, since many people take milk with their oatmeal, and milk contains saturated fat, which might be related to cholesterol. As a reader, you want to think about whether the study authors have accounted for conceivable confounders in their analysis. You could say that epidemiologists' jobs revolve around keeping such extraneous explanatory factors from distorting their results. I like to think of bias as the process by which confounders are introduced into a study.
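To make this concrete, here's a toy simulation of the oat bran example. All the numbers (milk rates, cholesterol levels) are made up for illustration; the point is just the mechanism: oat bran has no effect on cholesterol in this little world, but because milk use is associated with both oat bran and cholesterol, a crude comparison shows a spurious difference - and stratifying on the confounder makes it go away.

```python
import random
import statistics

random.seed(0)

def simulate_person():
    # Milk use is the confounder C (rates are hypothetical).
    milk = random.random() < 0.5
    # Oat bran (X) is more common among milk users: the X-C association.
    oat_bran = random.random() < (0.8 if milk else 0.2)
    # Cholesterol (Y) depends on milk, but NOT on oat bran, in this toy model.
    cholesterol = 190 + (15 if milk else 0) + random.gauss(0, 10)
    return oat_bran, milk, cholesterol

people = [simulate_person() for _ in range(20000)]

def mean_chol(group):
    return statistics.mean(c for _, _, c in group)

oat = [p for p in people if p[0]]
no_oat = [p for p in people if not p[0]]

# Crude comparison: oat-bran eaters appear to have higher cholesterol,
# even though oat bran does nothing here - that's confounding.
crude_diff = mean_chol(oat) - mean_chol(no_oat)

# One way to "account for" the confounder: compare within levels of C.
def diff_within(milk_status):
    a = [p for p in people if p[0] and p[1] == milk_status]
    b = [p for p in people if not p[0] and p[1] == milk_status]
    return mean_chol(a) - mean_chol(b)

print(f"crude difference:  {crude_diff:+.1f} mg/dL")
print(f"within milk users: {diff_within(True):+.1f} mg/dL")
print(f"within non-users:  {diff_within(False):+.1f} mg/dL")
```

Stratification is only the simplest kind of adjustment, of course - real analyses use regression models for the same purpose - but it shows the logic: hold the confounder constant and the spurious association disappears.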

Another way to avoid bias, and hence confounding, is to randomly assign subjects to a treatment group or a control group. Using the above example, the treatment group could get oat-bran cereal and the control group could get wheat cereal; you could establish ahead of time that people used similar amounts of milk on both kinds of cereal, and also have them measure the amount of milk they used to make sure. But that's a little beside the point, which is that any other factors that might be associated with either oat-bran consumption (aspects of a healthy lifestyle, whatever that means) or with blood cholesterol are equally distributed between the two groups. If that is the case, then they can't affect the results.
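Continuing the toy simulation (same made-up numbers as before), here's what randomization buys you: once oat bran is assigned by coin flip, milk use can no longer be associated with the treatment, so the two arms are balanced on the confounder and the crude between-arm comparison is an unbiased estimate of the (null) effect.

```python
import random
import statistics

random.seed(1)

def simulate_trial_subject():
    # Coin-flip assignment: oat bran no longer depends on milk habits.
    oat_bran = random.random() < 0.5
    # Milk use is unchanged, but now independent of the assigned arm.
    milk = random.random() < 0.5
    cholesterol = 190 + (15 if milk else 0) + random.gauss(0, 10)
    return oat_bran, milk, cholesterol

subjects = [simulate_trial_subject() for _ in range(20000)]
treat = [s for s in subjects if s[0]]
control = [s for s in subjects if not s[0]]

# Randomization balances the confounder between arms...
milk_rate_treat = statistics.mean(1 if m else 0 for _, m, _ in treat)
milk_rate_ctrl = statistics.mean(1 if m else 0 for _, m, _ in control)

# ...so the crude between-arm difference now reflects the true
# (null) oat-bran effect, without any adjustment.
diff = (statistics.mean(c for _, _, c in treat)
        - statistics.mean(c for _, _, c in control))

print(f"milk use by arm: {milk_rate_treat:.2f} vs {milk_rate_ctrl:.2f}")
print(f"between-arm cholesterol difference: {diff:+.1f} mg/dL")
```

The same balancing applies to factors nobody thought to measure, which is the real magic of randomization: it handles the confounders you didn't know about, not just the ones you did.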

Other important criteria:

* reports real health outcomes. This point probably speaks more to relevance than to actual study quality, but do pay attention to the conclusions that the authors draw, and think about how closely the reported outcome is related to the health outcome of interest. But don't eliminate an intermediate outcome from consideration just because it's not a health outcome - for example, we may actually want to know something about the delivery of diabetes care. I'll come back to this question in another post.

* generalizable. Say you're looking at a small, randomized trial of an intervention to prevent falling in a rehabilitation home for disabled veterans aged 35-50. Is the intervention equally useful for community-dwelling elderly people? The authors should address this; if they don't, you're entitled to raise an eyebrow.

* adequately controlled. This is a fundamental characteristic of scientific studies. You have to have a control group, and it's best if the controls are as similar as possible to the group with the disease or exposure of interest. The goal is to have the comparison groups differ only by the factor that you're testing. The best way to do this is, again, to randomly assign an exposure, but this is not always feasible, or ethical, in people - for example, when a factor of interest is considered to be harmful, like an environmental exposure.

* biologically plausible. It's very likely that plausibility will be addressed in the paper's introduction, and at some length in the discussion; scientists love to talk about it. However, there are some statistical types who don't get it - an association is an association, after all. To which I would answer, there is no statistical test for unmeasured confounding, so you'd better have a good understanding of how your predictor and outcome variables actually fit together.

I think these points cover the main things you should look for. There's more, but I do have to save something for another day: Adequate follow-up time. The study-design hierarchy. The role of the funders in the project. I'm happy to take requests here as well.



2 Comments:

At 7:26 AM, Blogger james gaulte said...

This is a great post. One of my ranting points has been the headline-news aspect of medical reporting and its obviously misleading potential. I have also ranted about the misleading potential of EBM and the over-enthusiastic emphasis on RCTs, to the point of ignoring other sources of the best available evidence and believing there is no evidence other than an RCT. I would love a real epidemiologist to comment on some of my naive epi thoughts found in Retired Doc's Thoughts.

 
At 7:36 AM, Blogger Emily DeVoto, Ph.D., said...

James, thanks so much for your comment. I'll be happy to look at your blog. Cheers!

 
