You can usually calculate absolute
risk (the number of cases
divided by the number of participants) so you can see if the report is
about
something that is very rare. The specific case in this chapter, the
effect of
red meat, fails to meet the criterion of meaningful effects and seems
to be
deployed to distract from the real culprit, the real elephant in the
room:
carbohydrate.
It's not over. Red meat is still a target. The next chapter describes yet another case of how relative risk is used to paint an exaggerated picture of the effects of red meat consumption. It will also introduce the idea of confounders, factors that may influence the interpretation of the outcomes of an experiment, and the dangers in their misuse.
Chapter 19
Harvard. Making Americans Afraid of Meat
TIME: You're partnering with, among others, Harvard University on this. In an alternate Lady Gaga universe, would you have liked to have gone to Harvard?
Lady Gaga: I don't know. I am going to Harvard today. So that'll do.
– Belinda Luscombe, Time Magazine, March 12, 2012
"There was a sense of
déja-v
u
about the paper by Pan,
et
al
.
[99]
entitled 'Red meat consumption and
mortality: results from 2 prospective cohort studies.' that came out in
April
of 2012." That's what I wrote in May of 2012. The red meat problem was
described in the previous chapter and I had written a blogpost about it
but it
wasn't long before the Pan paper from Harvard was published. Other
bloggers
worked it over pretty well so I ignored it. Then, I came across a
remarkable
article from the Harvard Health Blog. It was entitled "
Study urges
moderation in red meat
intake
." It was about the Pan study and it
described how the "study
linking red meat and mortality lit up the media...Headline writers had a
field
day, with entries like 'Red meat death study,' 'Will red meat kill
you?' and
'Singing the blues about red meat."' This was too much for me and what
follows
is another blogpost, pretty much as I wrote it then.
What was odd about the post from the
Harvard blog was that
the field day for headline writers was all described from a distance as
if the
study by Pan,
et al
.
(and the content of the Harvard blogpost itself) hadn't come from
Harvard but
was rather a natural phenomenon, similar to the way every seminar on
obesity
begins with a graphic of the state-by-state progression of obesity as
if it
were some kind of meteorologic event.
The reference to "headline writers,"
I think,
was intended to conjure images of sleazy tabloid publishers
like the ones
who are always pushing the limits of first amendment rights in the old
Law &
Order
episodes.
The Harvard Blog post itself, however, is no less exaggerated. (My friends in English departments tell me that self-reference is some kind of hallmark of real art.) It is not true that the Harvard study was urging moderation.
In
fact, the article admitted that the original paper "sounded ominous.
Every
extra daily serving of unprocessed red meat (steak, hamburger, pork,
etc.)
increased the risk of dying prematurely by 13%. Processed red meat (hot
dogs,
sausage, bacon, and the like) upped the risk by 20%." That is what the
paper
urged. Not moderation. Prohibition. "Increased the risk of dying
prematurely by
13%." Who wants to buck odds like that? Who wants to die prematurely?
It wasn't just the media. Critics in the blogosphere were also working overtime deconstructing the study. Among the faults that were cited, a fault common to much of the medical literature and the popular press, was the reporting of relative risk.
The limitations of relative risk or
odds ratio were
discussed before. Relative risk is relative. It doesn't tell you what
the risk
is to begin with. Relative risk destroys information. The extreme example, as before: you can double your odds of winning the lottery if you buy two tickets instead of one, or Alice has 30% more money than Bob but they may both be on welfare. So why do people keep reporting it? One
reason, of
course, is that it makes your work look more significant. But, if you
don't
report the absolute change in risk, you may be scaring people about
risks that
aren't real. The nutritional establishment is not good at facing its critics but, in this case, Harvard admitted that it did not wish to contest the issue. Nolo contendere.
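The point is easy to make concrete. What follows is a minimal Python sketch, with invented counts (nothing here comes from the Pan study), showing how a doubled relative risk can amount to a hundredth of one percent in absolute terms:

```python
# Minimal sketch: relative vs. absolute risk from raw counts.
# The counts are invented for illustration only.

def risks(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Return (relative risk, absolute risk difference)."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed, risk_exposed - risk_unexposed

# 2 cases per 10,000 people vs. 1 case per 10,000:
rr, diff = risks(2, 10_000, 1, 10_000)
print(f"relative risk {rr:.1f}, absolute difference {diff:.2%}")
# relative risk 2.0, absolute difference 0.01%
# "Doubles your risk" means one hundredth of one percent here.
```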
"To err is human, said the
duck as it got off the
chicken's back" – Curt Jürgens in
The Devil's General
Having turned the media loose to scare the American public, Harvard now admitted that the bloggers were correct. The Harvard Health News Blog allocuted to having reported "relative risks, comparing death
rates in the group eating the least meat with those eating the most.
The
absolute risks... sometimes help tell the story a bit more clearly.
These
numbers are somewhat less scary." Why not try to tell the story as
clearly
as possible in the original article? Isn't that what you're supposed to
do in
science?
Anyway, there was a table in Harvard's Health News Blog with the raw data. This is what you need to know.
Unfortunately, the Harvard Blog doesn't actually calculate the absolute
risk
for you. You would think that they would want to make up for Dr. Pan's
scaring
you; an allocution is supposed to remove doubt about the details of the
crimes.
Let's calculate the absolute risk. It's not hard. Risk is a probability, that is, the number of cases divided by the total number of participants. Looking at the data for the men first, the risk of death with 3 servings per week is 12.3 cases per 1000 people = 12.3/1000 = 0.0123 = 1.23%. Now going to 14 servings a week (the units in the two columns of the table are different), the risk is 13/1000 = 1.3%, so, for men, the absolute difference in risk is 1.3% − 1.23% = 0.07 percentage points, less than 0.1%. Definitely less scary. In fact, not scary at all. Put another way, you would have to drastically change the eating habits of 1,429 men (from 14 down to 3 servings of red meat a week) to save one life. In statistics it's called effect size and it is, just as you would think, just as Bradford Hill told us, the most important value in any experiment, scientific, financial, or whatever. Less than 0.1% is pretty poor.
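For readers who want to check it, here is the same arithmetic as a few lines of Python, using only the figures quoted above; the reciprocal of the absolute difference is what statisticians call the number needed to treat:

```python
# The arithmetic from the paragraph above, men's figures only.
risk_low  = 12.3 / 1000   # annual death risk at 3 servings/week
risk_high = 13.0 / 1000   # annual death risk at 14 servings/week

diff = risk_high - risk_low
print(f"absolute difference: {diff:.2%}")        # 0.07%
print(f"number needed to treat: {1 / diff:.0f}") # 1429
```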
Still, it's something, at least according to the public health professionals. For millions of people, it could add up. Or could it? We have to step back and ask what is predictable about showing a change of less than one tenth of 1% in risk. Couldn't it mean that if a couple of guys got hit by cars in one or another of the groups, the whole thing would be thrown off? How many other things have a less than one tenth of 1% risk that weren't considered? Or maybe a handful of guys in upscale, vegetarian social circles lied about their late night trips to Dinosaur Barbecue. When can you scale up a small effect size?
If the effect size is small and you want to scale it up, it must be secure, that is, it must not have much room for error. The Salk vaccine had an absolute benefit of only about 0.02%. Of the 400,000 people in the test, the difference in the number of people who got the disease in the unvaccinated group compared to those who were treated was only 85 people. Obviously, it was a good idea to scale things. Such a small number could be scaled up because it was real, that is, very accurate. You knew who got the vaccine and who didn't. Nobody "may" have gotten the vaccine. Pan, et al. don't really know, for sure, who ate what. The error in food questionnaire data is tolerable if the outcome is a knock-out, like cigarettes and lung cancer. The red meat risk means nothing at all. Effect size is the name of the game.
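A quick check of the Salk figures as quoted here, using nothing but the rounded numbers from the text:

```python
# Rounded Salk trial figures as given in the text.
participants = 400_000
fewer_cases  = 85   # fewer polio cases among the vaccinated

absolute_benefit = fewer_cases / participants
print(f"{absolute_benefit:.3%}")   # 0.021%, i.e., about 0.02%
```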
There is an underlying theme here. There is the possibility that the mass of epidemiologic studies from the Harvard School of Public Health and other groups is simply not real. The whole thing. Poor understanding of science, cognitive dissonance or something else may be the cause. Whatever it is, the progression of epidemiologic studies showing that meat causes diabetes, that sugar causes gout, all with low hazard ratios, that is, small effect size, may be meaningless. Discouraging and almost impossible to believe. A big piece of medical research is a house of cards.
Observational Studies, Again.
We made the point in the previous two chapters that when you compare two phenomena, when you do an observational study, you usually have an idea in mind (however much you keep it unstated). It is not bird watching. Pan, et al. were testing the hypothesis that red meat increases mortality. If they had done the right analysis, they would have admitted that the test had failed, that the hypothesis was not true. To be precise, technically speaking, they could not reject the null hypothesis, that there is no discernible connection between eating red meat and premature death. The association was very weak and the underlying mechanism was, in fact, not borne out. As suggested before, in experimental research there really is only association. God does not whisper in our ear that the electron is charged. We make an association between an electron source and the response of a detector. Association does not necessarily imply causality, however; the association has to be strong, and the underlying mechanism that made us make the association in the first place must make sense.
What is the mechanism that would make you think that red meat increased mortality? One of the most remarkable statements in the paper:
"Regarding CVD mortality, we
previously reported that red
meat intake was associated with an increased risk of coronary heart
disease
2,
14
and saturated fat and cholesterol from red meat may
partially explain
this association. The association between red meat and CVD
mortality was
moderately
attenuated
after further adjustment for saturated fat and cholesterol, suggesting
a
mediating
role for these
nutrients." (my italics)
This bizarre statement – that saturated fat in the red meat played a role in increased risk because including saturated fat reduced risk – was morphed in the Harvard News Letter's plea bargain to "the authors of the Archives paper suggest that the increased risk from red meat may come from the saturated fat, cholesterol, and iron it delivers," although the blogger forgot to add "...although the data show the opposite." Reference (2) cited above had the conclusion that "consumption of processed meats, but not red meats, is associated with higher incidence of CHD and diabetes mellitus." In essence, the hypothesis is not falsifiable – any association at all will be accepted as proof. The conclusion may be accepted if you do not look at the data.
The Data
In fact, the data are not available. The individual points for each person's red meat intake are grouped together in quintiles (broken up into five groups), so that it is not clear what the individual variation is, and therefore what your real expectation of actually living longer with less meat is. Quintiles are, in my view, some kind of anachronism, presumably from a period when computers were expensive and it was hard to print out all the data (or a representative sample). If the data were really shown, it would be possible to recognize that they had a shotgun quality, that the results were all over the place and that, whatever the statistical correlation, it is unlikely to be meaningful in any real world sense. But you can't even see the quintiles, at least not the raw data. The outcome is corrected for all kinds of things, smoking, age, etc. This might actually be a conservative approach – the raw data might show more risk – but only the computer knows for sure.
Confounders
"...mathematically, though, there
is no distinction
between confounding and explanatory variables."
– Walter Willett,
Nutritional
Epidemiology, 2o edition
.
A "multivariate adjustment for major
lifestyle and dietary
risk factors" has many assumptions. Right off, you assume that what you
want to
look at, red meat in this case, is the one that everybody wants to look
at, and
that other factors are to be subtracted out. However, the process of
adjustment
is symmetrical: a study of the risk of red meat corrected for smoking
might
alternatively be described as a study of the risk from smoking
corrected for
the effect of red meat. Given that smoking is an established risk
factor, it is
unlikely that the odds ratio for meat is even in the same ballpark as
what
would be found for smoking.
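The symmetry is easy to demonstrate. The sketch below uses simulated data and an ordinary least-squares fit (a linear probability model, not the Cox model used by Pan, et al.); one regression of mortality on meat and smoking produces an "adjusted" coefficient for each, and which one goes in the title is a matter of emphasis, not mathematics:

```python
import numpy as np

# Simulated data: smoking drives mortality; meat intake merely
# tracks smoking. Nothing here comes from the Pan et al. study.
rng = np.random.default_rng(0)
n = 100_000
smoking = rng.binomial(1, 0.3, n).astype(float)
meat = rng.normal(7.0, 3.0, n) + 2.0 * smoking  # smokers eat more meat
p_death = 0.01 + 0.05 * smoking                 # true risk: smoking only
death = rng.binomial(1, p_death).astype(float)

# One model, two "adjusted" coefficients: "meat corrected for
# smoking" and "smoking corrected for meat" come out of the
# same calculation.
X = np.column_stack([np.ones(n), meat, smoking])
coef, *_ = np.linalg.lstsq(X, death, rcond=None)
for name, c in zip(["intercept", "meat", "smoking"], coef):
    print(f"{name:9s} {c:+.4f}")
# meat comes out near zero; smoking carries the risk.
```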
Figure 19-1 shows how risk factors follow the quintiles of meat consumption. If the quintiles had been broken up according to the factors themselves, we would have expected even better association with mortality.