Tricked by Treats: Bad Science and How to Deal with It

May 28, 2015

In late March, a press release for a nutrition study hit the web and announced a finding that made a lot of people very, very happy. A team of researchers found that eating chocolate daily was associated with weight loss, and a slew of articles sharing this fantastic news followed. This is great! Everybody, hurry up and go buy some chocolate!

Direct from that press release, here are a few salient points about the study:

Researchers divided volunteer human subjects aged 19 to 67 into three groups: One group followed a strict low-carbohydrate diet, another group followed the low-carbohydrate diet and also consumed 42 grams of dark (81%) chocolate per day, and a control group followed their status quo diet.

Sound reasonable so far? Okay, good. Here are a few points about the results:

As predicted, the low-carb group lost weight compared to the control. But surprisingly, the low-carb-plus-chocolate group lost 10% more weight. Not only that, but the chocolate group's weight loss persisted, while the low-carb-only group saw the weight return after 3 weeks, a classic problem in dietary interventions known as the "yo-yo effect."

To many folks, this might sound too good to be true. And, well…yup. Yesterday, in a post I highly encourage reading, the author of that study let us all know it was a hoax. Actually, that's not quite fair: the study was not technically a hoax. The effect it found was real, and the study was designed similarly to many nutrition studies. But it was intended to mislead, and as the linked article makes clear, it succeeded mightily in that respect.

The authors of this study created it as a sort of sting operation, designed to expose how poorly a lot of nutritional research is done and how easy it is to get shoddy data into both mainstream publications and actual industry journals. This is an ethically problematic thing to do, but (I think) I'm glad they did it. Here's why.

Desperate Cravings

Nutrition is a huge business. We hear a lot about the “obesity epidemic” in the U.S. and in other wealthy nations. There are millions of people around the world who worry about their health and weight but remain unsure exactly what to do about it because information is coming at them as if from a firehose. Eat this! Don’t eat that! This causes cancer. That reduces the likelihood of cancer.

And the information is often contradictory. Why?

There are a lot of reasons why scientific results are often so misleading, but scientific journalism and the research industry itself bear at least part of the blame. Researchers face significant pressure to make an impact and a name for themselves, to get their results to a wide audience. Such pressure can lead to studies like the one referred to above: with poor research design, a study can be set up to succeed at finding something, anything, that can be passed off as significant. Meanwhile, science journalists are under the same sorts of deadlines and pressures as every other reporter. The omnipresent news cycle requires constant feeding, and while that doesn't excuse cutting corners, it makes it understandable why journalists do so at times. It's often more important to be first than to be right.

And in that respect, I think we're all to blame. We've incentivized these behaviors, even if we don't realize it. We place a premium on being the first to know something, and we're collectively pretty quick to discard what we've just read/watched/heard in favor of whatever pops up next. With respect to nutrition information, unless we're going to make a major nationwide effort to better understand how research and statistics are used, I think we'd be better off if a lot of the studies we hear about were never widely reported at all. Not bloody likely, right?

The study referred to above knowingly misled the public. Its authors did so in hopes that it would shed light on problems within nutrition research and scientific journalism. There are real questions to be answered about the ethics of such a thing, but I'm hopeful that having done it will at minimum start a conversation about the reforms needed in how publications treat scientific studies. I'm not sure we'd ever have the smoking gun necessary to trigger that kind of reflection without something like this.

A Few Simple Guidelines for Understanding Scientific Studies

Image via xkcd. Read closely: in the grid of panels, look four columns over and one row up from the bottom.

But while we wait for such reforms, what do we do about misleading articles referring to scientific studies right now? Though there are many who are much more qualified, I have some training in statistics and have done statistics professionally. Even though I’m not doing this kind of work currently, I still consider myself to have a dog in this fight because I value statistics as a tool.

Without getting very technical, here are a few things to look for if you read about some incredible new nutritional study, or really any scientific study that reports statistical conclusions. These won't make you an expert, but they might keep you from getting sucked in by data that is not as meaningful as it appears at first blush. And if you want to get a little further into the weeds while still keeping things in generally plain English, check out the Effect Size FAQ.

  • Check for sample size info: I'm talking about whether a publication even bothers reporting this information. Assuming the sampling is done properly, larger sample sizes are better. But you needn't get too carried away about how large a sample is, either. National political polls often use samples of around 1,500 despite the fact that they are trying to tell us about a population of millions. Again, assuming a proper sample, this is actually not ridiculous!
  • Statistical significance: roughly, how likely would it be to see a relationship this strong in the data if random chance alone were at work? Significance by itself does not make for a meaningful result, and it seems to me it's the most abused concept in statistical reporting. We hear "significant" and think "important," but that's not quite true. If statistical significance is reported in a vacuum, it probably doesn't tell you much about how the world works. It's a signal to investigate further, not a conclusion.
  • Statistical power: how likely is the study to actually identify an effect if one exists? This should be reported but often isn’t. The higher the better, and I wouldn’t put much faith in a study with less than 80% statistical power. You cannot eliminate the possibility of an incorrect conclusion, but you can reduce its likelihood.
  • Effect size: how much does what was tested actually impact the thing researchers were studying? A result can be statistically significant but reflect an effect so small it's not worth worrying about. That was part of the issue with the chocolate study.
  • Look for specificity in what was tested. Did the study use the "kitchen sink" model, looking at a gazillion factors simultaneously as in the misleading chocolate research, or was it narrowly designed to look at specific factors in isolation? Were other factors (age, gender, race, etc.) controlled for or not? The sketch after this list shows why the kitchen-sink approach is so good at producing spurious "findings."
  • Have you seen any reporting on a similar result previously? Good science is replicable, and if the article you're reading has meaningful results, further research will follow. If not, I wouldn't worry too much about throwing yourself into upheaval to account for a single study. Even well-designed research done in isolation is unlikely to be conclusive enough on its own to merit a lifestyle change. But if you've read about different studies revealing the same or similar results over and over and over? Well, they're probably on to something.
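
To make the "kitchen sink" problem concrete, here is a minimal Python sketch. It is not the chocolate study's data or code; it simulates a hypothetical trial in which two small groups are drawn from the same population (so the true effect is zero), many unrelated outcomes are measured, and we count how often at least one outcome comes out "significant" anyway. The group size, number of outcomes, and thresholds are assumptions chosen purely for illustration.

```python
# Toy simulation of the "kitchen sink" problem, not the actual study.
# Both groups are drawn from the SAME population, so any "significant"
# outcome is a false positive produced purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 8     # tiny groups, as in many small diet trials (assumed)
n_outcomes = 18     # weight, cholesterol, sleep, mood, ... (assumed)
alpha = 0.05        # the usual significance threshold
n_studies = 5000    # number of hypothetical studies to simulate

studies_with_false_positive = 0
for _ in range(n_studies):
    for _ in range(n_outcomes):
        control = rng.normal(size=n_per_group)
        treatment = rng.normal(size=n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            studies_with_false_positive += 1
            break  # one "significant" outcome is enough for a headline

print(f"Studies with at least one 'significant' finding: "
      f"{studies_with_false_positive / n_studies:.0%}")
# Expect roughly 1 - 0.95**18, i.e. about 60%, even though nothing is going on.
```

The exact numbers don't matter. The point is that a study measuring many outcomes in small groups will usually turn up something "significant" even when there is no real effect, which is why effect sizes, narrowly specified hypotheses, and replication matter so much.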

Be a Skeptic, Not a Cynic

These problems with statistical reporting hit home for me because I've done some of this kind of work in the past. I know how useful statistics are for learning more about how the world works and how reliable they can be when applied well. I also understand how easy it is to screw up even with the best of intentions. I remain convinced that the vast, vast majority of people performing any kind of scientific research have no desire to mislead the public or their fellow researchers. There are, of course, bad apples, but they're the exception. I think it's safe to assume that most who dedicate their time to studying anything with the level of commitment scientific research requires do so out of a genuine interest in learning more about their topic and do not wish to mislead or rush to hasty conclusions.

So while I encourage you to be skeptical, and I mean this in the scientific sense especially, I want to simultaneously discourage cynicism. Scientific research remains our best tool for learning more about how the world works. The issues discussed here and elsewhere feel more like growing pains in adapting to a new media landscape than a signal that the scientific community at large is failing us. I'll continue to trust in science, but I'll also redouble my own efforts to understand it properly.

