The World Turned Upside Down: The Second Low-Carbohydrate Revolution

Chapter 21

The Seventh Egg. What did you have for Breakfast?

Stepping back and looking at the recent nutritional literature, I am struck by the miracle of life. How could humans have evolved in the face of threats from red meat, from eggs, even from the dangers of shaving? (If you write about nutrition you have to create a macro that types out "I'm not making this up:" the Caerphilly Study [104] shows you the dangers of shaving... or is it the dangers of not shaving?). With 28% greater risk of diabetes here, 57% greater risk of heart disease there, how could our ancestors ever have reached child-bearing age? With daily revelations from the Harvard School of Public Health showing the Scylla of saturated fat and the Charybdis of sugar between which our forefathers sailed, it is amazing that we are here.

These studies that the media write about, are they real? They are, after all, based on scientific papers. Although not all the media can decipher them, reporters generally talk to the researchers. The papers must have gone through peer review. The previous chapters suggest that the gatekeepers, as we think of peer reviewers, are less vigilant than they should be. In fact, many papers that are published in the major medical journals defy common sense. Is this possible? Can the medical literature have such a high degree of error? Could there be such a large number of medical researchers who are not doing credible science? How can the consumer decide? I am going to try to give an additional example that may help. When people ask questions like "could the literature be wrong?," the answer is usually "yes." I will try to explain what's wrong and how to read the nutritional literature in a practical way. I will make it simple. It is science, but it is accessible science. I am going to illustrate the problem with the example of a paper by Djoussé, "Egg consumption and risk of type 2 diabetes in men and women" [105]. But first, a joke.

It was a dumb joke. In my childhood, there was the idea, undoubtedly politically incorrect, that Indians, that is, Native Americans, always said "how" as a greeting. The joke was about an Indian reputed to have a great memory. He is asked what he had for breakfast on New Year's Day the previous year. He says "eggs." They are then interrupted by an earthquake or some natural disaster. The interviewer and the Indian do not meet again for ten years. When they meet, the interviewer says "how." The Indian answers "scrambled."

If the interviewer had been an epidemiologist, he might have asked whether the Indian had developed diabetes. In the study by Djoussé, et al. [105] participants were asked how many eggs they ate and then, ten years later, it was determined whether they had developed diabetes. If they had, it was assumed to be because of the number of eggs. Is this for real? Do eggs cause changes in your body that accumulate until you develop a disease, a disease that is, after all, primarily one of carbohydrate intolerance? Type 2 diabetes, recall, is due to impaired response of the body to the insulin produced by beta cells of the pancreas as well as a progressive deterioration of the insulin-producing cells. Common sense says that it is a suspicious idea that eggs would play a major role. It is worth trying to understand the methodology and see if there is something that justifies this obvious departure from common sense. Again, the principles may be generally useful.

What did the experimenters actually do? First, people were specifically asked "to report how often, on average, they had eaten one egg during the past year," and the researchers "classified each subject into one of the following categories of egg consumption: 0, <1 per week, 1 per week, 2-4 per week, 5-6 per week, and 7+ eggs per week." They collected this data every two years for ten years. With this baseline data in hand they then followed subjects "from baseline until the first occurrence of one of the following: a) diagnosis of type 2 diabetes, b) death, or c) censoring date, the date of receipt of the last follow-up questionnaire," which for men was up to 20 years. Thinking back over a year: is there any likelihood that you might not be able to remember whether you had 1 vs. 2 eggs on average during the year? Is there any possibility that some of the men who were diagnosed with diabetes ten years after their report on eggs changed their eating pattern in the course of those ten years? Are you eating the same food you ate ten years ago? Quick, how many eggs/week did you eat last year?

The Golden Rule again.

So right off, there is a problem in people reporting what they ate. This is a limitation of many, probably most, nutritional studies and, while it can be a source of error, it is really a question of how you interpret the data. All scientific measurements have error. You simply have to be sure that the results that you are trying to find do not depend on any greater accuracy than the data that you have collected.

Eye-balling the paper by Djoussé, et al., we see that there are no figures. A suspicious sign. A graph of the number of eggs consumed vs. the number of cases of diabetes is what would be expected. The results, instead, are stated in the Abstract of the paper as this mind-numbing conclusion. (Don't try to read this):

Compared with no egg consumption, multivariable adjusted hazard ratios (95% CI) for type 2 diabetes were 1.09 (0.87-1.37), 1.09 (0.88-1.34), 1.18 (0.95-1.45), 1.46 (1.14-1.86), and 1.58 (1.25-2.01) for consumption of <1, 1, 2-4, 5-6, and 7+ eggs/week, respectively, in men (p for trend <0.0001). Corresponding multivariable hazard ratios (95% CI) for women were 1.06 (0.92-1.22), 0.97 (0.83-1.12), 1.19 (1.03-1.38), 1.18 (0.88-1.58), and 1.77 (1.28-2.43), respectively (p for trend <0.0001).
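One quick way to unpack a paragraph like that is to check, for each hazard ratio, whether its 95% confidence interval includes 1.0 (a ratio of 1.0 means no difference at all). A minimal sketch, using the men's numbers quoted above:

```python
# Hazard ratios as (HR, CI-low, CI-high) for men, copied from the Abstract.
men = {
    "<1":  (1.09, 0.87, 1.37),
    "1":   (1.09, 0.88, 1.34),
    "2-4": (1.18, 0.95, 1.45),
    "5-6": (1.46, 1.14, 1.86),
    "7+":  (1.58, 1.25, 2.01),
}

for eggs, (hr, lo, hi) in men.items():
    # A CI that straddles 1.0 means the difference could easily be nothing.
    verdict = "excludes 1.0" if lo > 1.0 else "includes 1.0"
    print(f"{eggs:>4} eggs/week: HR {hr:.2f} ({lo:.2f}-{hi:.2f}) -> {verdict}")
```

Only the two highest consumption categories have intervals that exclude 1.0; everything else is statistically indistinguishable from no effect.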

What does all this mean? In fact, it means very little. These "statistical shenanigans" are, in fact, an argument against a correlation. If you look at the paragraph, almost every number that you see is very close to 1. Without going through a detailed analysis, you can simply extract from the tables some simple information. There were 1,921 men who developed diabetes. Of these, 197 were in the high egg consumption group, or about 1 in 10. For women, there were 2,112 cases of whom 46 were high egg consumers, so a little more than 2% of the diabetes cases were big egg-eaters. To me this suggests that diabetes is associated with something other than eggs and it is probably unjustified of the authors to conclude: "These data suggest that high levels of egg consumption (daily) are associated with an increased risk of type 2 diabetes in men and women."
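The arithmetic behind those fractions is trivial but worth writing out. A sketch using the counts cited above:

```python
# Diabetes cases, and how many of them were in the 7+ eggs/week group,
# as extracted from the paper's tables.
men_cases, men_high_egg = 1921, 197
women_cases, women_high_egg = 2112, 46

# About 1 in 10 of the male cases were big egg-eaters...
print(f"men:   {men_high_egg / men_cases:.1%} of diabetes cases")
# ...and a little more than 2% of the female cases.
print(f"women: {women_high_egg / women_cases:.1%} of diabetes cases")
```

In other words, roughly 90% of the men and 98% of the women who developed diabetes were not in the high egg group at all.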

What I described are the raw data and, as we saw in Chapter 19, we have to consider confounders. In fact, if we analyzed the data in detail, we would find that the conclusion is actually poorly supported by the data, but let's take the authors' conclusion at face value.

The Seventh Egg.

If the authors' conclusion is correct, this means that there was no risk of diabetes from consuming 1 egg/week compared to eating none. Similarly, there was no risk in eating 2-4 eggs/week or 5-6 eggs/week. But if you upped your intake to 7 eggs or more per week, that's it. Now, you are at risk for diabetes.

Since I like pictures, I will try to illustrate this with a modified still from the movie The Seventh Seal, directed by Ingmar Bergman. Very popular in the fifties and sixties, Bergman's movies had a captivating if pretentious style: they sometimes seemed to be designed for Woody Allen's parodies. One of the famous scenes in The Seventh Seal is the protagonist's chess game with Death. A little Photoshop and we have a good feel for what happens if you go beyond 5-6 eggs/week.

Figure 22-1. The Seventh Egg.

Summary:

A study of 20,703 men and 36,295 women makes a very weak case that eggs have anything at all to do with type 2 diabetes. Few of the people who developed diabetes were big egg eaters. Correction for confounders showed greater risk for this group, but common sense says that this is absurd and that if you have to do so much work to show risk, it is not important. The problem is the mindless use of statistics. If the statistics go against common sense, then the authors should explain why. In detail. In the Abstract.

Sometimes, you can get a sense of how real the statistics are by looking for simple things. How many people were in the study and how many got sick? That is, are we talking about a rare disease or one that had low probability in the experiment? Diabetes is a major health risk, but if you take 1,000 men for ten years, only 1 in 10 may develop diabetes, so you need to be sure there is a big difference between the group that followed the behavior you are looking at and those who didn't.

The cases that we have looked at are from major institutions and well-known researchers. The studies had an impact when first published. We do have a problem in the medical literature. The best and the brightest are doing dumb things. Nothing shows how bad things are in academic medicine better than intention-to-treat. That's next.


Chapter 22

Intention-to-Treat. What it is and why you should care

The medical literature has some strange things but nothing beats intention-to-treat (ITT), the strange and mostly not amusing statistical method that has appeared recently. According to ITT, the data from a subject assigned at random to an experimental group must be included in the reported outcome data for that group even if the subject does not follow the protocol, or even if they drop out of the experiment. In other words, it doesn't matter if you eat what the experimenter told you to eat; your lipid profile has to be included in the final report. At first hearing, the idea is counter-intuitive if not completely idiotic – why would you include people who are not in the experiment in your data? – suggesting that a substantial burden of proof rests with those who want to employ it. No such obligation is felt and, particularly in nutrition studies, such as comparisons of isocaloric weight loss diets, ITT is frequently used with no justification at all. Astoundingly, the practice is sometimes actually demanded by reviewers in the scientific journals. As one might expect, there is a good deal of controversy on this subject. Physiologists or chemists, hearing this description, usually walk away shaking their heads or immediately come up with one or another obvious reductio ad absurdum, e.g. "You mean, if nobody takes the pill, you report whether or not they got better anyway?" That's exactly what it means.

On the naive assumption that some people really didn't understand what was wrong with ITT – I've been known to make a few elementary mistakes in my life – I wrote a paper on the subject. It received negative, actually hostile, reviews from two public health journals – I include an amusing example at the end of this chapter. I even got substantial grief from reviewers at Nutrition & Metabolism, where I was the editor at the time, but where it was finally published. I'll describe a couple of interesting cases from the medical literature and one relatively new instance – Foster's two-year study of low-carbohydrate diets – to demonstrate the abuse of common sense that is the major characteristic of ITT.

The title of my paper was "Intention to Treat. What is the Question?" The point was that there might be nothing inherently wrong with ITT if you are explicit about what you are trying to find out. If you use ITT, you are asking: what is the effect of assigning subjects to an experimental protocol? If you are very circumspect about that question, then there is little problem. But is anybody really interested in what the patient was told to do rather than what they actually did? The practice comes from clinical trials where you can't always tell whether patients have taken the recommended pills, just as in the real situation where you never know what people will do once they leave the doctor's office. In that case, you do an analysis based on your intention, that is, you have no other choice than to call the experimental group those who were assigned to the intervention. That's what we always did without giving it a special name. When you do know, however, there are two separate questions: did they take the pill and is the pill any good? That's the data. You have to know both, and if you want to collapse them into one number, you have to be sure you make clear what you are talking about. You lose information if you collapse efficacy and adherence into one number. It is common for the Abstract of a paper to correctly state that the results are about "assigned to a diet" but by the time the Results are presented, the independent variable has become not "assignment to the diet," but "the diet," which most people would assume meant what people ate rather than what they were told to eat. Caveat lector.

My paper on ITT was a kind of overkill and I made several different arguments. The common sense argument gets to the heart of the problem. I'll describe that first and also give a couple of real examples.

Common sense argument against intention-to-treat

Consider an experimental comparison of two diets in which there is a simple, discrete outcome, e.g. a threshold amount of weight lost or remission of an identifiable symptom. Patients are randomly assigned to two different diets, diet A or diet B, and a target of, say, 5 kg weight loss is considered success. As shown in Table 23-1, half of the subjects in experiment A are "compliers," able to stay on the diet. For whatever reason, half are not. The half of the patients in diet A who were compliers were all able to lose the target 5 kg, while the non-compliers did not. In experiment B, on the other hand, everybody stays on the diet but, somehow, only half are able to lose the required amount of weight. An ITT analysis shows no difference in the two outcomes – half of group A stayed on the diet and all lost weight, while in study B, everybody complied but only half had success.

Table 23-1. Hypothetical results for the thought experiment for analysis of diets A and B.
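The thought experiment can be written out in a few lines of code. This is a sketch with hypothetical counts (100 patients assigned to each arm, matching the half-and-half splits described above):

```python
def success_rates(assigned, completed, succeeded):
    """Return (ITT rate, per-protocol rate) for one diet arm."""
    itt = succeeded / assigned            # everyone assigned counts; dropouts = failures
    per_protocol = succeeded / completed  # only those who actually stayed on the diet
    return itt, per_protocol

# Diet A: half comply, and every complier hits the 5 kg target.
itt_a, pp_a = success_rates(assigned=100, completed=50, succeeded=50)
# Diet B: everybody complies, but only half succeed.
itt_b, pp_b = success_rates(assigned=100, completed=100, succeeded=50)

print(itt_a, itt_b)  # 0.5 0.5 -> ITT: "the diets are the same"
print(pp_a, pp_b)    # 1.0 0.5 -> diet A works for everyone who can follow it
```

The ITT numbers are identical; only the per-protocol numbers reveal that diet A succeeded for every person who stayed on it.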

Now, you are the doctor. With such data in hand, should you advise a patient: "well, the diets are pretty much the same. It's largely up to you which you choose," or, looking at the raw data (both compliance and success), should the recommendation be: "Diet A is much more effective than diet B but people have trouble staying on it. If you can stay on diet A, it will be much better for you so I would encourage you to see if you could find a way to do so." You are the doctor. Which makes more sense?

Diet A is obviously better but it is hard to get people to stay on it. This is one of the characteristics of ITT: it always makes the better diet look worse than it is.

In the manuscript, I made several arguments trying to explain that there are two factors, only one of which (whether it works) is clearly due to the diet. The other (whether you follow the diet) is under the control of other factors (whether WebMD tells you that one diet or the other will kill you, whether the evening news makes you lose your appetite, etc.). I even dragged in a geometric argument because Newton had used one in the Principia: "a 2-dimensional outcome space where the length of a vector tells how every subject did.... ITT represents a projection of the vector onto one axis, in other words collapses a two dimensional vector to a one-dimensional vector, thereby losing part of the information." Pretentious? Moi?

Why you should care. Surgery or Medicine?

Does your doctor actually read these academic studies using ITT? One can only hope not. Consider the analysis by David Newell of the Coronary Artery Bypass Surgery (CABS) trial. This paper is fascinating for the blanket, tendentious insistence, without any logical argument, on something that is obviously fundamentally foolish [106]. Newell considers that the method of

"the CABS research team was impeccable. They refused to do an 'as treated' analysis: 'We have refrained from comparing all patients actually operated on with all not operated on: this does not provide a measure of the value of surgery.'"

You read it right. The results of surgery do not provide a measure of the value of surgery. So, in the CABS trial, patients were assigned to be treated with Medicine or Surgery. The actual method used and the outcomes are shown in Table 23-2 below.

Table 23-2. Results of the CABS trial from Newell [106].

The ITT analysis was described by Newell as having been "used correctly." Looking at the table, you see that a 7.8% mortality was found in those assigned to receive medical treatment (29 deaths out of 373), and 5.7% mortality for assignment to surgery (21 deaths of 371). If you look at the outcomes of each treatment as actually used, it turns out that medical treatment led to 33 deaths or a rate of 9.5% (33/349), while among those who actually underwent surgery, the mortality rate was only 4.1% (17/419). Mortality was less than half in surgery compared to medical treatment. Making such a simple statement, that surgery was better than medicine, Newell says, "would have wildly exaggerated the apparent value of surgery."
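Both analyses come straight from the counts in Newell's table, so the arithmetic is easy to check:

```python
# ITT: deaths divided by the number *assigned* to each arm.
itt_medicine = 29 / 373
itt_surgery = 21 / 371

# As treated: deaths divided by the number who actually *received* each treatment.
as_treated_medicine = 33 / 349
as_treated_surgery = 17 / 419

print(f"ITT:        medicine {itt_medicine:.1%}, surgery {itt_surgery:.1%}")
print(f"As treated: medicine {as_treated_medicine:.1%}, surgery {as_treated_surgery:.1%}")
```

The ITT gap between the arms is modest; the as-treated gap is more than twofold, and that contrast is exactly what is in dispute here.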

The "apparent value of surgery?" "Apparent?" Common sense suggests that appearances are not deceiving. If you were one of the 33 - 17 = 16 people who were still alive, you would think that it was the theoretical report of your death that had been exaggerated. The thing that is under the control of the patient and the physician, and which is not a feature of the particular modality, is getting the surgery actually implemented. Common sense dictates that a patient is interested in surgery, not the effect of being told that surgery is good. The patient has a right to expect that if they comply with the recommendation for surgery, the physician would try to avoid any mistakes from previous studies where the patient did not receive the operation. In another defense of ITT, Hollis [107] has the somewhat cryptic statement: "most types of deviations from protocol would continue to occur in routine practice." This seems to be saying that the same number of people will always forget to take their medication and surgeons will continue to have exactly the same scheduling problems as in the CABS trial. ITT assumes that practical considerations are the same everywhere and that any practitioner is locked into the same ability or lack of ability as the original experimenter in getting the patient into the OR.

One might also ask what happens when two studies give different values from ITT analysis. In the extreme case, one might suggest that if the same operation were recommended at a hospital in Newcastle-upon-Tyne as opposed to a battlefield in Iraq, the two ITT values would be different. Which one is the appropriate one to be attributed to that surgical procedure?

What is the take-home message? One general piece of advice that I would give based on this discussion in the medical literature: don't get sick.

Why you should care. Vitamin E supplementation

A clear-cut case of how off-the-mark ITT can be is a report on the value of antioxidant supplements. The Abstract of the paper concluded that "there were no overall effects of ascorbic acid, vitamin E, or beta carotene on cardiovascular events among women at high risk for CVD." The conclusion that there was no effect of supplements was based on an ITT analysis but, on the fourth page of the paper, is this remarkable effect of not counting subjects who didn't comply:

"noncompliance led to a significant 13% reduction in the combined end point of CVD morbidity and mortality... with a 22% reduction in MI..., a 27% reduction in stroke..., a 23% reduction in the combination of MI, stroke, or CVD death."

The media universally reported the conclusion from the Abstract, namely that there was no effect of vitamin E. This conclusion is correct if you think that you can measure the effect of vitamin E without taking the pill out of the bottle or, as in the old joke about making a really dry martini, you don't remove the cap from the vermouth bottle before pouring. Does this mean that vitamin E is really of value? The data would certainly be accepted as valuable if the statistics were applied to a study of the value of, say, replacing barbecued pork with whole grain cereal. Again, "no effect" was the answer to the question "what happens if you are told to take vitamin E," but it still seems most reasonable that the "effect of a vitamin" means, to most people, what happens when you actually take the vitamin.
