
The previously mentioned longitudinal models preclude the use of less robust approaches, such as fixed imputation methods (for example, last observation carried forward or the analysis of participants with complete data [that is, complete case analyses]). These alternative approaches assume that missing data are unrelated to previously observed outcomes or baseline covariates, including treatment (that is, missing completely at random).

Missing data? Missing completely at random? What's going on here? In a nutshell, this is another implementation of ITT. In the study, the authors used "data" from people who dropped out of the experiment. To do this, all they had to do was "assume that all participants who withdraw would follow first the maximum and then minimum patient trajectory of weight." Whatever this means, if anything, the key words are "withdraw" and "assume." In other words, this is really a step beyond ITT, where you would include, for example, the weight of people who showed up to be weighed but had not actually followed one or another diet. Here, nobody showed up. There are no data. A pattern of behavior is assumed and the data are – let's face it – made up.
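For a concrete picture of how a fixed imputation scheme manufactures data, here is a minimal sketch of last observation carried forward (LOCF), the simplest of the methods named in the quotation above. The weights are invented, not Foster's; the point is only the mechanics.

```python
# Last observation carried forward (LOCF) on invented weights (kg)
# for three subjects at 0, 6, 12 and 24 months; None marks a missed visit.
import pandas as pd

weights = pd.DataFrame(
    {
        "month_0":  [95.0, 102.0, 88.0],
        "month_6":  [90.0,  99.0, None],   # subject 3 has dropped out
        "month_12": [88.0,  None, None],   # subject 2 has dropped out
        "month_24": [None,  None, None],   # nobody finishes
    },
    index=["subj_1", "subj_2", "subj_3"],
)

# Copying each row's last observed value forward manufactures a
# "measurement" for every visit the subject never attended.
imputed = weights.ffill(axis=1)
print(imputed)
```

Everything to the right of a subject's last real visit is fabricated by the procedure, which is exactly the complaint here.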

The world of nutrition puts big demands on irony and tongue-in-cheek, but the process in Foster's paper suggests that the results could, in theory, be fit to a model for a three-year study, or a ten-year study. As people dropped out, you could "impute" the data. In some sense, you could do without any subjects at all. Nutrition experiments are expensive; think of the money that could be saved if you didn't have to put anybody on a diet and you could make up all the data. This is a joke.

The diet or the lack of compliance?

It is odd that ITT is controversial, by which I mean that it is odd that it exists at all. A reasonable way to deal with dropouts that would satisfy everybody, however, is simply to publish both the ITT data and the data that include only the compliers, the so-called "per protocol" group, that is, the group that was actually in the experiment. This is what was done in the Vitamin E study described above. There, it made the authors' point of view look bad, but they did the right thing. Such data are missing from Foster's paper. Given the high attrition rate, one could guess that the decline in performance in both groups was due to including "data" from the large number of people who failed to complete the study. We do know this number, the number of people who dropped out. That's in the paper. So we can do something with Foster's data. To find out whether the decline in performance is due to including the made-up data from the dropouts, we can plot the difference in triglycerides between the two groups (the double-headed arrow in Figure 23-3, above), for each time point, against the number of people who discontinued treatment.
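A minimal sketch of that check, with invented stand-ins for the values you would read off Foster's figures (the real numbers are in the paper):

```python
# Between-group triglyceride difference (mg/dL) at each visit, plotted
# against cumulative dropouts by that visit. All numbers are invented.
import numpy as np
import matplotlib.pyplot as plt

months = [3, 6, 12, 24]
tag_difference = np.array([30.0, 22.0, 10.0, 4.0])  # hypothetical
dropouts = np.array([20, 45, 80, 110])              # hypothetical

# Pearson correlation between dropouts and the group difference.
r = np.corrcoef(dropouts, tag_difference)[0, 1]
print(f"r = {r:.2f}")

plt.scatter(dropouts, tag_difference)
for m, x, y in zip(months, dropouts, tag_difference):
    plt.annotate(f"{m} mo", (x, y))
plt.xlabel("cumulative dropouts")
plt.ylabel("TAG difference (mg/dL)")
plt.show()
```

With numbers like these, the correlation is strongly negative: the more dropouts, the smaller the apparent advantage of the low-carbohydrate arm.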

Figure 23-4 gives you the answer. You can see a direct correlation between the number of dropouts and the group differences. "Decreases in triglyceride levels were greater in the low-carbohydrate than in the low-fat group at 3 and 6 months but not at 12 or 24 months" was almost surely due to the fact that the differences were diluted by people who weren't on the low-carbohydrate diet, or any diet; ITT, or whatever this was, always makes the better diet look worse than it is.

Figure 23-4. The differences in the reduction in TAG (vertical axis) between the low-carbohydrate and low-fat arms as a function of the number of subjects who dropped out of the study. The TAG differences were measured from the double-headed arrow shown in Figure 23-3. Data from Foster et al. [77].

This serves as an example of a case where correlation strongly implies causality. The declining difference between the triglyceride values of the two protocols is unlikely to be due to anything that people ate – if they had stayed with their diet, triglycerides would have been much lower on low-carb – but rather to whether they stayed with the diet. The association is testing that hypothesis. In understanding the impact of this kind of experiment, the take-home message is that ITT and imputing data will reduce the apparent effect of the intervention. In a diet experiment, the nature of the diet and compliance may be related – if the diet is unpalatable, people might not stay on it – but you have to show this. "Might" is not data. Along these lines, however, it is likely that the major reinforcer, the major reason people will stay on a diet, is that it works. In any case, one cannot assume that the two are linked without specifically testing the idea.

Summary

Intention-to-treat comes from the realization that, in some experiments, you don't know who followed the protocol and who didn't. In a clinic, you may write a prescription and not know if it's been filled. In this case, you have no choice but to include everybody's performance. The mechanical application of the idea in situations where you do know who complied and who didn't is another misunderstanding and dogmatic application of statistics.

ITT usually makes the better diet look worse than it actually is. Awareness of whether this method has been used is important for evaluating a scientific publication. In combination with the previous errors in the practice of current nutrition, things look grim for getting much information. The next chapter summarizes the various sources of error and suggests that the medical nutrition literature is deeply flawed and the product of poor methodology and poor scientific thinking. Because of the applications in health and disease, it becomes important to find a dispassionate body which can provide real peer review. Whether such a group can be found in the current social situation remains unknown. Some of the problems that they would have to deal with are summarized in the next chapter.

 

 

 

Chapter 24

The Fiend That Lies Like Truth

I pull in resolution and begin
To doubt th' equivocation of the fiend
That lies like truth.

– William Shakespeare, Macbeth

Errors, inappropriate use of statistics and misleading presentations are everywhere in the medical literature. Specific examples were described in previous chapters. Let me summarize these and offer a few principles for dealing with the deficiencies. I think that you will need them.

The first principle: teach me.

You have a right to demand, and the author of a scientific paper has an obligation to provide, a clear explanation of what the results of their study really mean. A good test of whether or not the authors are holding up their end of the arrangement is the number and clarity of the figures. Visual presentation is almost always stronger than a long tabulation of numbers. This principle is so reasonable and, at the same time, so widely violated that a whole book has been written on how scientific papers need more figures instead of these dense tables [109]. The tables make papers hard to read and ensure that the popular press will have to take the authors' conclusions at face value.

Scientific publication is changing, and increasingly, as more and more journals become open access and available online, the results of scientific studies will be universally available. An advantage of an open-access online journal is that there is no longer a limit on pages or number of figures. Neither is there an extra charge to the authors for color. (In open access, the author generally has to pay for the publication; that's why it is free to you.) Whether authors will take advantage of this opportunity is unknown. Whatever the journal, however, clear presentation of the results is still the Golden Rule of Statistics: let us see the data completely and clearly and in as many figures as it takes.

Science, but not rocket science.

"Eating breakfast reduces obesity" is
not a principle from
quantum electrodynamics. Most of us know whether or not eating
breakfast makes
us eat more or less during the day. And your first reaction that eating
anything is good for losing weight is appropriate and relevant. You
don't need
to have a physician, one who may have never studied nutrition, to tell
you that
your perception is right or wrong. A degree in biochemistry is not
required to
understand the idea that adding sugar to your diet will increase your
blood
sugar. And the burden of proof is on anybody who wants to say that
sugar is
okay for people with diabetes. Anything is possible but we start from
what
makes sense. Of course, there is technically sophisticated science and
there
are principles that require expertise to understand, but you should not
assume
this is the case. How do you deal with publications that are trying to
snow you
or that have a hard sell?

Be suspicious of self-serving descriptions.

If the paper is about a diet that is described as "healthy," your appropriate answer would be "that's for me to decide." That the media can refer to "healthy" is bad enough, but in a scientific paper it has to be considered an intellectual kiss of death. Nothing that you read after that can be taken at face value. If we knew what was healthy, we would not have an obesity epidemic and we would not need another paper to describe it.

Guidelines, data or analyses that describe themselves as "evidence-based medicine" are likely to be deeply flawed. By analogy with a court of law, there must be a judge to decide the admissibility of evidence. You can't pat yourself on the back and expect to be considered impartial. And the courts have ruled that testimony by experts has to make sense. Credentials are not enough. As in the first principle, experts have to be able to explain things to the jury.

Be suspicious if the authors tell you how many other people agree with their position. Science does not run on "majority rules" or consensus.

Leaping tall buildings.

As indicated in Chapter 16, the new grand principle of doing science is: habeas corpus datorum, let's see the body of the data. If the conclusion is non-intuitive and goes against previous work or common sense, then the data must be strong and clearly presented.

As usually described: if you say you can jump over the chair, I can cut you a lot of slack. If you say you can jump over the building, I need to see you do it. And my daughter, at age nine, suggested an additional requirement. In a discussion of superheroes, I pointed out that Superman used to be described as being "able to leap tall buildings in a single bound." She pointed out that if you try to leap tall buildings, you only get a single bound. You can't say your hundred-million-dollar, eight-year-long randomized controlled trial was not a fair test. The fat-cholesterol-heart hypothesis was sold as an absolute fact. None of the big clinical trials should have failed. Not one. In the end, almost all of them failed. One failure should have done it.

"Let's see the body of the data,"
that is, show me what was
done before you start running it through the computer. Statistics may
be
important, but in a diet experiment, where one has to assume that even
a
well-defined population is heterogeneous, you want to see what all the
individuals did.

The compelling work of Nuttall and Gannon, showing that diabetes can be improved even in the absence of weight loss, gains impact from the presentation of individual performance. Figure 24-1 illustrates the benefits of a low-carbohydrate diet. Not only is there a generally good response in the reduction of blood glucose excursions, but all but two of the individual subjects benefitted substantially and all but one got at least somewhat better.

Understand observational studies.

The usual warning offered by bloggers and others is that association does not imply causality, that observational studies can only provide hypotheses for future testing. A more accurate description, as worked out in Chapter 17, is that observational studies do not necessarily imply causality. Sometimes they do. The association between cigarette smoke and lung disease has a causal relation because the associations are very strong and because the underlying reason for making the measurement was based on basic physiology, including the understanding of nicotine as a toxin.

Figure 24-1. 24-hour responses for 8 individual subjects on a diet of 30% bioavailable glucose. Letters: Breakfast, Lunch, Dinner and Snack.

In this sense, observational studies test hypotheses rather than generating them. There are any number of observations of different phenomena, but when you try to make a specific comparison, you usually have an idea in mind, conscious or otherwise. When a study tries to find an association between egg consumption and diabetes, it is testing the hypothesis that eggs are a factor in the generation of diabetes. There is a hypothesis. It is just not a sensible one. It is not based on any sound fundamental scientific principle.

It is true, however, that if you do find a strong association with an unlikely hypothesis (the definition of intuition), then you have a new plausible hypothesis, but that is true of all experiments. And it must be further tested.

It is important to question the hypothesis being tested. If the Introduction section of the published study says eggs have been associated with diabetes, you can at least check whether the reference is to an experimental study rather than to the previous recommendations of some health agency.

Meta-analysis and the end of science.

Doctors prefer a large study that is bad to a small study that is good. – Anon.

While intention-to-treat is the most foolish activity plaguing modern medical literature, the meta-analysis is the most dangerous. In a meta-analysis you pool different studies to see if more information is available from the combination. As such, it is inherently suspicious: if two studies have different conditions, the results cannot sensibly be pooled. If they are very similar, you have to ask why the results from the first study were not accepted. And if the pooled studies give a different result than any of the individual studies, the authors are supposed to point out what the original study did wrong.

Simply adding more subjects is not usually considered a guarantee of reliability, although papers in the medical literature frequently cite the large number of subjects as one of their strengths. If a meta-analysis is good for anything, which is questionable, it is for the originally intended role of evaluating small, under-powered studies, where you hope that putting them together might point you to something that you didn't see. It is a kind of "Hail Mary," last-ditch play. It was not intended for appropriately powered studies with a large number of subjects.

A number of important meta-analyses have examined the effect of saturated fatty acids (SFA) on cardiovascular risk. One such, from Jakobsen, is shown in Figure 24-2. It is usually described as showing a small benefit in replacing SFA with PUFA and an increase in risk if the SFA is replaced with carbohydrate (CHO). Examination of the figure, however, reveals that almost all of the included studies failed to show any significant effect of replacing saturated fat. (The statistical rule is that if the error bar, the horizontal line, crosses 1.0, even odds, then there is no demonstrated effect of the intervention.) And yet the authors came up with an answer. How is this possible? It is the consequence of group-think. If everybody, the editors and the reviewers, assumes that a meta-analysis is an acceptable method, then peer review will be meaningless.

Figure 24-2. Hazard ratios for coronary events and deaths in different studies in the meta-analysis of Jakobsen et al. [6]. Green: lower risk if SFA is substituted by the indicated nutrient. Red: increased risk by substitution for SFA. Figure redrawn from reference [6].
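The arithmetic behind such a result is easy to reproduce. Here is a minimal sketch of standard fixed-effect (inverse-variance) pooling, with five invented studies, each reporting a hazard ratio of 0.90 whose 95% confidence interval crosses 1.0, that is, null on its own:

```python
# Fixed-effect (inverse-variance) pooling of five invented studies.
# Each reports HR = 0.90 with standard error 0.08 on the log scale.
import math

log_hr = [math.log(0.90)] * 5   # log hazard ratios
se = [0.08] * 5                 # standard errors of the log HRs

for y, s in zip(log_hr, se):
    lo, hi = math.exp(y - 1.96 * s), math.exp(y + 1.96 * s)
    print(f"study:  HR 0.90, 95% CI ({lo:.2f}, {hi:.2f})  crosses 1.0")

# Pooled estimate: inverse-variance weighted mean of the log HRs.
weights = [1 / s**2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_hr)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
lo, hi = math.exp(pooled - 1.96 * pooled_se), math.exp(pooled + 1.96 * pooled_se)
print(f"pooled: HR {math.exp(pooled):.2f}, 95% CI ({lo:.2f}, {hi:.2f})  excludes 1.0")
```

Five null results go in; one "significant" pooled result comes out, simply because combining the standard errors narrows the interval. Whether that constitutes evidence is exactly the question.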

One of the benefits, conscious or unconscious, that keeps the practice going is that it is perfect for current medical research. With meta-analysis, no experiment ever fails, no principle is ever disproved – sugar causes heart attacks, cholesterol causes heart attacks, red meat causes heart attacks and statins prevent heart attacks – it doesn't matter how many studies show no effect. One winner and you can do a meta-analysis. Just one more expensive trial and we'll show that saturated fat is bad. And you don't even have to explain what the other guy did wrong, as you might in a real experiment.

The idea that simply adding more subjects will improve reliability is not reasonable. Most of us think that if the phenomenon has big variability, then mixing studies will reduce predictability, although it may sharpen statistical significance. As n gets larger, even a trivial difference between experimental and control groups will become statistically significant, which is recognized as misleading. (The related problem of testing many comparisons at once has its own specific treatment, the Bonferroni correction.) And again, in science, it is expected that if your results contradict previous experiments, you will provide evidence as to the cause of the differences. What did previous investigators do wrong? And do those investigators now agree that you've improved things? Probably not if you don't ask them.
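A minimal simulation of that sharpening, with invented numbers: two diets whose true mean weight losses differ by a clinically trivial 0.2 kg, compared at increasing sample sizes.

```python
# As n grows, a clinically trivial 0.2 kg difference in mean weight
# loss drifts toward "statistical significance." Numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (20, 200, 2000, 20000):
    diet_a = rng.normal(loc=5.0, scale=4.0, size=n)  # kg lost on diet A
    diet_b = rng.normal(loc=5.2, scale=4.0, size=n)  # kg lost on diet B
    t, p = stats.ttest_ind(diet_a, diet_b)
    print(f"n per group = {n:6d}   p = {p:.4f}")
```

The difference never becomes more meaningful, but the p-value tends toward zero as n grows; precision is being mistaken for importance.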
