Standards of evidence for disease causation are not new. In 1965, Sir Austin Bradford Hill, a distinguished British biostatistician and epidemiologist, published nine criteria for establishing causation, including:
• Consistency – findings must be observed multiple times with different study participants under different circumstances and with different measurement instruments.
• Biological plausibility – for example, the biological theory of smoking causing tissue damage, which over time results in cancer in the cells, was highly plausible.
• Dose-response – as the dose of a suspected substance increases, the disease risk should increase as well. Unfortunately, studies with roller-coaster-like dose-response curves are often reported as if they clearly showed cause and effect.
• Strength of association – for example, a variety of studies show that the lung cancer
rate for smokers is about 10–30 times higher than for non-smokers.
The strength, or magnitude, of an association is commonly referred to as a point
estimate, which usually represents the rate of disease among persons with the exposure
of interest compared with the rate of disease among persons without the exposure of
interest. This is labeled the rate ratio or relative risk (RR) of disease. A study with an
RR of 1 indicates that the outcome was neutral, or null—a factor neither increased nor
decreased risk. An RR of 1.2, for example, indicates there is a 20% increase in risk.
Generally speaking, associations of less than 1.5 may be viewed as weak, those between 1.5 and 2.0 as moderate, and those above 2.0 as strong in magnitude. It is also widely accepted that results from a
single study are never enough to be considered causal, no matter how large the RR.
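The relative-risk arithmetic described above can be sketched in a few lines of code. This is a hypothetical illustration: the function names and the disease rates are invented for the example, and only the magnitude cutoffs (1.5 and 2.0) come from the text.

```python
# Relative risk (RR): the rate of disease among persons with the exposure
# divided by the rate among persons without the exposure.
# All disease rates below are made up for the sake of illustration.

def relative_risk(rate_exposed: float, rate_unexposed: float) -> float:
    """RR = disease rate with exposure / disease rate without exposure."""
    return rate_exposed / rate_unexposed

def magnitude(rr: float) -> str:
    """Rough magnitude labels described in the text."""
    if rr < 1.5:
        return "weak"
    if rr <= 2.0:
        return "moderate"
    return "strong"

# An RR of 1.2 corresponds to a 20% increase in risk:
rr = relative_risk(0.012, 0.010)    # e.g., 1.2% vs. 1.0% disease rates
print(round(rr, 2), magnitude(rr))  # a weak association

# Smoking and lung cancer, with RRs on the order of 10-30, is clearly strong:
print(magnitude(20.0))
```

An RR of 1 falls out naturally here: identical rates in both groups divide to 1.0, the null result in which the exposure neither increased nor decreased risk.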
While serving as editor of the New England Journal of Medicine, Marcia Angell,
M.D., told science writer Gary Taubes, “As a general rule of thumb, we are looking for
a relative risk of 3.0 or more before accepting a paper for publication, particularly if it
is biologically implausible or if it’s a brand-new finding.” Weak and modest associations may contribute to the total picture, but often these associations can be explained
by a number of confounders or by issues like inaccurate recall of consumption or behavior (called “recall bias”) and should be evaluated critically.
Cancer epidemiology papers with very small relative risks (well below 2) are often behind many of the most frightening health headlines. In the case of the
WCRF/AICR report, of the five studies cited as showing a relationship between
processed meat and colon cancer, only two were statistically significant. The highest
RR was only 1.69 and the summary estimate of relative risk was a mere 1.21.
Klurfeld pointed out that the companion colorectal cancer SLR includes a chart
showing a statistically significant 26% protective effect against rectal cancer for the
highest meat consumption. This finding, however, was mentioned neither in the summary report
nor in the press release, and it is unclear why it did not deserve to be highlighted.
Unfortunately, relative risk and absolute risk are often confused. According to Dr.
Klurfeld, WCRF/AICR’s dire warnings about meat failed to put the absolute risk of
colorectal cancer in perspective. For instance, a person’s absolute risk of developing
colorectal cancer during their lifetime is 5.3%, according to the National Cancer Institute. So, even if you were to accept the WCRF estimate of a 21% increase in relative risk and applied that factor, your lifetime risk of developing colorectal cancer would rise by a mere 1.1 percentage points, from 5.3% to 6.4%, a negligible increase.
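The conversion from relative to absolute risk is simple arithmetic, sketched below using the two figures quoted in the text (the 5.3% baseline from the National Cancer Institute and the WCRF/AICR summary RR of 1.21); the variable names are illustrative.

```python
# Converting a relative risk into an absolute lifetime risk.
baseline_risk = 0.053   # 5.3% lifetime absolute risk (National Cancer Institute)
rr = 1.21               # WCRF/AICR summary relative risk (a 21% relative increase)

risk_with_exposure = baseline_risk * rr
absolute_increase = risk_with_exposure - baseline_risk

print(f"{risk_with_exposure:.1%}")  # about 6.4%
print(f"{absolute_increase:.1%}")   # about 1.1 percentage points
```

The same 21% relative increase sounds far more alarming than the 1.1-percentage-point change in absolute risk it actually represents, which is the confusion the passage describes.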
Null Findings and Publication
Unfortunately, all too often, “null
findings” (those not showing an increased or decreased risk) go either unpublished or unpublicized. A recent
article in the Journal of the National Cancer Institute affirmed the importance of
publishing null findings: “Caution
should be applied in the communication of results to the media and the general public, because ‘positive’ findings
tend to attract the media and public attention, whereas findings that do not
confirm a previously reported association or do not indicate a new association receive no attention.”
The authors concluded that epidemiology is particularly prone to generating
false-positive results and noted that epidemiology has been increasingly criticized for producing findings that are
often sensationalized in the media and
fail to be upheld in subsequent studies.
They urged their epidemiology colleagues to have increased humility regarding their findings, concluding this
“…would go a long way to diminishing
the detrimental effects of false-positive
results on the allocation of limited research resources, on the advancement of
knowledge of the causes and prevention
of cancer, and on the scientific reputation of epidemiology and would help to
prevent oversimplified interpretations of
results by the media and the public.”
The Media Factor
Today, embargoed press releases from
journals announcing “landmark” findings are provided in advance to news
media. Because these stories are often
written during an embargo, they rely
heavily—and sometimes exclusively—on
press releases and comments from the
researchers themselves. Certainly, both
scientists and journals benefit from publicity, especially if findings are cast as
dramatic or landmark.
Even Walter Willett, M.D. of Harvard's School of Public Health, who has