RAW-FOOD Archives

Raw Food Diet Support List

RAW-FOOD@LISTSERV.ICORS.ORG

Subject:
Comments on "The Great Health Hoax"
From:
"Thomas E. Billings" <[log in to unmask]>
Reply To:
Raw Food Diet Support List <[log in to unmask]>
Date:
Thu, 19 Nov 1998 08:37:21 -0800
Content-Type:
text/plain
Parts/Attachments:
text/plain (163 lines)


This post presents my comments on the article, "The Great Health Hoax",
written by Robert Matthews.  As the article by Matthews deals with statistics,
readers should be aware that I have an M.S. in Mathematical Statistics,
and a B.S. in Mathematics.

As my time is short, my comments will be brief. As an overall view, I
would say that the article is interesting, and much of the information
in it is accurate. However, the author's blaming all of the weaknesses
of clinical research on the use of significance levels (P-values) in
statistical tests is both a major exaggeration and incorrect.

Clinical Studies Don't Work in the Real World?

Matthews begins by pointing out that clinical studies that indicate major
breakthroughs in the control of disease typically don't work when they
are disseminated and used widely. As Kirt pointed out, the only
reason we know this is the self-correcting/self-reviewing nature of
science. This aspect of science - that it changes to reflect new
information - is a strength of science, not a weakness.

This of course is in sharp contrast to the alternative diet promoters,
who advertise their diets as being based on "eternal health truths"
(which usually have virtually no scientific evidence to support them)
or on strictly anecdotal evidence. We have seen on this list how many
alternative diet promoters resort to hostility (or wacko crank science)
to support their sometimes bizarre views. Also, as Kirt reminded us, the
alternative diets frequently don't work when put into practice.

There is another relevant aspect here - communication vs. the need for
security. Consider
that part of the problem is that scientific research, when reported
in the popular press, often bears little resemblance to the papers
as written in the actual journals. Further, there are a lot of sick
people, who are "hungry" for a cure or healing, and who will latch
on to anything that gives them hope. The result of such an environment
may be unrealistic expectations, combined with inaccurate information
on the new therapy. Needless to say, this is a potentially dangerous
situation.

Bayesian vs Frequentist Statistics

Matthews goes on to identify the culprit he believes is responsible
for the sad state/unreliability of clinical trials: the use of
statistical tests that report a significance level, or "P-value".
In particular, he identifies the work of R.A. Fisher, whose work
is referred to as "frequentist", because it uses observed frequencies.

Matthews also praises the statistical methods of Thomas Bayes, founder
of the Bayesian school of statistics. Bayesian methods rely on the
assumption of a prior distribution*, and are regarded as "subjective"
because of this fact.

[* prior distribution = a probability distribution expressing one's
beliefs about the unknown quantity before seeing the data ]
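As a concrete illustration of what a prior distribution does (the example
and its numbers are my own, not from Matthews' article), here is the
textbook Beta-Binomial case: a Beta prior on a coin's heads probability,
updated by observed flips.

```python
# Beta-Binomial model: a Beta(a, b) prior expresses beliefs about a coin's
# heads probability before any flips are seen. After observing some heads
# and tails, conjugacy gives the posterior Beta(a + heads, b + tails).
# The "subjective" part is the choice of a and b.

def posterior_mean(a, b, heads, tails):
    """Posterior mean of the heads probability under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

# A uniform prior Beta(1, 1), then 7 heads in 10 flips:
print(posterior_mean(1, 1, 7, 3))  # 8/12 = 0.666...
```

Two Bayesians with different priors will report different posteriors from
the same data - which is exactly the "subjectivity" objection frequentists
raise.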

The Bayesian vs. frequentist debate is old news in statistics, and Matthews
adds nothing to the debate. However, the big picture here is that the problem
is not theory, but in applications.  This point deserves some clarification.

First, both Bayesian and frequentist statistical procedures are established
in the same way as other math/stat theory: via a set of theorems and
proofs (though the choice of P=0.05 is in fact arbitrary, as Matthews
reports). That is, both the Bayesian and frequentist approaches are
sound, logically correct theories.

Second, the real problem comes when you try to apply the theory in the
real world. Both Bayesian and frequentist approaches depend on
assumptions - assumptions which are often not met in the real world, or
which are hard to verify.

Statistics in the Real World

For example, all Bayesian procedures, and many common frequentist procedures,
make assumptions about the distribution of the data, or the errors in
the data. That assumption, however, may be extremely hard to test.

One problem given to advanced graduate students in statistics is as
follows. You are given a data set, and are asked to determine
exactly what distribution the data set comes from. Sounds easy? It's
not! You take the data set and estimate the "best fit" to the data
for each likely distribution. Then, to find out which of the candidates
fits best, you do a test - you compute the distance between the raw
data and each estimated distribution (e.g., the Kullback-Leibler
distance between CDFs: cumulative distribution functions). What do you
find? That you cannot tell the difference between the various
symmetric, unimodal (= 1 peak) distributions unless you get a whole
LOT more data! The point here is that one usually cannot prove whether
a given data set actually comes from the assumed distribution.
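This is easy to see in a small simulation (my own sketch, not from the
article; it uses the simpler Kolmogorov-Smirnov distance as a stand-in
for the Kullback-Leibler criterion mentioned above): fit both a normal
and a logistic distribution - two symmetric, unimodal candidates - to
genuinely normal data and compare the fits.

```python
import math
import random

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def logistic_cdf(x, mu, sigma):
    """CDF of a logistic distribution matched to the same mean/std dev."""
    s = sigma * math.sqrt(3.0) / math.pi  # scale giving std dev = sigma
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def ks_distance(data, cdf):
    """Kolmogorov-Smirnov distance: max gap between empirical and model CDF."""
    xs = sorted(data)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
        for i, x in enumerate(xs)
    )

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # data truly normal
mu = sum(data) / len(data)
sigma = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5

d_normal = ks_distance(data, lambda x: norm_cdf(x, mu, sigma))
d_logistic = ks_distance(data, lambda x: logistic_cdf(x, mu, sigma))
# Both distances come out small and close together: with only 200 points,
# the "wrong" logistic fit is nearly indistinguishable from the true normal.
print(d_normal, d_logistic)
```

With a few hundred observations - a large sample by clinical-trial
standards - the two fits are effectively tied; only far larger samples
separate them.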

Another point that Matthews discusses is that the actual P-value
may be higher than expected. He does not make clear in his article that
this usually occurs under conditions of multiple tests (or other
unusual conditions). It is well known in statistics that if one makes
multiple tests, each at a P-value of 0.05, the OVERALL P-value of the
procedure is much higher than P=0.05. There are techniques to avoid
this problem, including using much lower values of P for each test (the
Bonferroni correction). However, most clinical trials don't bother with
this, and make numerous separate tests at P=0.05, ignoring the
underlying problem.
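The inflation of the overall error rate is easy to compute directly. A
minimal sketch (the specific numbers, e.g. 20 tests, are my own
illustration, not from the article):

```python
# Family-wise error rate: if each of k independent tests is run at level
# alpha, the chance of at least one false positive across all k tests is
#     P(any false positive) = 1 - (1 - alpha)**k
# The Bonferroni correction runs each test at alpha/k instead, which pulls
# the overall rate back down to roughly alpha.

def familywise_rate(alpha, k):
    """Probability of at least one false positive in k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

alpha, k = 0.05, 20
print(round(familywise_rate(alpha, k), 3))      # ~0.642 -- far above 0.05
print(round(familywise_rate(alpha / k, k), 3))  # ~0.049 -- Bonferroni-corrected
```

So a trial that runs 20 separate tests at P=0.05 has roughly a 64% chance
of reporting at least one spurious "significant" result, even when nothing
real is going on.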

Thus I would argue that the fault for the sad state of clinical trials,
rests in application areas.  Some of the errors one may find in clinical
trials:

* failing to measure/control likely covariates and externalities

* lack of a control group

* using incorrect or inappropriate statistical methods; e.g., failing to
check that the data meet the assumptions of the procedures, censoring or
not censoring the data as appropriate, making multiple tests without
trying to control the overall P-value (or failing to advise readers of
the study that the overall P-value was NOT controlled, hence some results
may be spurious), etc.

With careful design of the experiment, and proper statistical analyses,
one can minimize the above problems. However, it is not possible to
totally eliminate them - there is too much variability in the real
world and in people, too many uncontrolled or unmeasured
covariates/externalities, etc. The results of statistical procedures
are no better than the input data, a point ignored by many.

Are All Clinical Trials Worthless?

A short answer: no. Some are reasonable. However, due to the limits on
them, one should rely on multiple trials (review articles are nice) and
on meta-analysis studies (which have their own set of theoretical
problems). Further, studies
should be interpreted with common sense and logic; ideological filters
are often (but should not be) used in conjunction with the interpretation
of clinical trials. The use of ideological filters is why most alternative
diet advocates have clinical trials to support their views, even
though other advocates who preach the opposite can cite clinical trials
as well.

So, don't abandon clinical trials, but don't look at them as "certain
proof" of your conjectures, either. Recognize their limits, and use
them accordingly, in an intelligent manner.

The Need for Security

Once again, one wonders if the innate need for security is an underlying
factor in the misinterpretation of clinical trials (or of the China
project, a separate subject). People really, truly want to believe in
their diet, and they use ideological filters rather than caution
and skepticism. Of course, it is that innate need for security and
simplicity that the alternative diet promoters exploit in peddling their
simplistic dietary dogma. It's VERY comforting to think the miracle drug
or miracle diet will be the answer to your problems. It is so very easy
to "hide" in simplistic dietary dogma, and much harder to face reality and
admit that reality is complex, and we don't fully understand it. That is
why people may be so eager to embrace the exaggerated claims of "miracle
drugs" or "miracle diets".

I hope the above was of interest to you. My time is limited, so I
cannot engage in long discussions on the above. Others are of
course free to discuss the above if they wish.

Tom Billings
http://www.beyondveg.com
