Statistical Medicine, pt1

March 6, 2007

I have been meaning to write a piece on statistical medicine for a while now. Since I just got a comment on my post “Should we all be worried by the HIV-AIDS hypothesis?”, and since much of the marketing and, more disturbingly, the research into AIDS is done via statistical medicine, I thought now might be the time.

I have split this into two parts. The first is not my own writing but two texts I copied for my own interest some time ago; I do not know exactly who to attribute them to.

One is a real medical study showing that “Leos” are 15% more likely to be admitted to hospital with gastric bleeding and “Sagittarians” are 38% more likely than others to land up there because of a broken arm. The second is a flippant piece about the dangers of bread. It is thought-provoking nonetheless.

WHY STATISTICAL MEDICINE IS DANGEROUS, TEXT1 (comedy):

!!! BREAD IS DANGEROUS !!!

Research on bread indicates that:

1. More than 98 percent of convicted felons are bread users.
2. Fully HALF of all children who grow up in bread-consuming households score below average on standardized tests.
3. In the 18th century, when virtually all bread was baked in the home, the average life expectancy was less than 50 years; infant mortality rates were unacceptably high; many women died in childbirth; and diseases such as typhoid, yellow fever, and influenza ravaged whole nations.
4. More than 90 percent of violent crimes are committed within 24 hours of eating bread.
5. Bread is made from a substance called “dough.” It has been proven that as little as one pound of dough can be used to suffocate a mouse. The average American eats more bread than that in one month!
6. Primitive tribal societies that have no bread exhibit a low incidence of cancer, Alzheimer’s, Parkinson’s disease, and osteoporosis.
7. Bread has been proven to be addictive. Subjects deprived of bread and given only water to eat begged for bread after as little as two days.
8. Bread is often a “gateway” food item, leading the user to “harder” items such as butter, jelly, peanut butter, and even cold cuts.
9. Bread has been proven to absorb water. Since the human body is more than 90 percent water, it follows that eating bread could lead to your body being taken over by this absorptive food product, turning you into a soggy, gooey bread-pudding person.
10. Newborn babies can choke on bread.
11. Bread is baked at temperatures as high as 400 degrees Fahrenheit! That kind of heat can kill an adult in less than one minute.
12. Most American bread eaters are utterly unable to distinguish between significant scientific fact and meaningless statistical babbling.

In light of these frightening statistics, it has been proposed that the following bread restrictions be made:

1. No sale of bread to minors.
2. A nationwide “Just Say No To Toast” campaign, complete with celebrity TV spots and bumper stickers.
3. A 300 percent federal tax on all bread to pay for all the societal ills we might associate with bread.
4. No animal or human images, nor any primary colours (which may appeal to children) may be used to promote bread usage.
5. The establishment of “Bread-free” zones around schools.

WHY STATISTICAL MEDICINE IS DANGEROUS, TEXT2 (actual study):

PEOPLE born under the astrological sign of Leo are 15% more likely to be admitted to hospital with gastric bleeding than those born under the other 11 signs. Sagittarians are 38% more likely than others to land up there because of a broken arm. Those are the conclusions that many medical researchers would be forced to make from a set of data presented to the American Association for the Advancement of Science by Peter Austin of the Institute for Clinical Evaluative Sciences in Toronto. At least, they would be forced to draw them if they applied the lax statistical methods of their own work to the records of hospital admissions in Ontario, Canada, used by Dr Austin.

Dr Austin, of course, does not draw those conclusions. His point was to shock medical researchers into using better statistics, because the ones they routinely employ today run the risk of identifying relationships when, in fact, there are none. He also wanted to explain why so many health claims that look important when they are first made are not substantiated in later studies.

The confusion arises because each result is tested separately to see how likely, in statistical terms, it was to have happened by chance. If that likelihood is below a certain threshold, typically 5%, then the convention is that an effect is “real”. And that is fine if only one hypothesis is being tested. But if, say, 20 are being tested at the same time, then on average one of them will be accepted as provisionally true, even though it is not.
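The arithmetic behind that paragraph is easy to check for yourself. If each test, applied to data where no real effect exists, has a 5% chance of a false alarm, then the chance that *at least one* of several independent tests raises a false alarm grows quickly. A minimal sketch (the function name is mine, not from any statistics library):

```python
# Probability of at least one false positive when testing m independent
# hypotheses that are all actually false, each at significance level alpha.
def familywise_error_rate(m, alpha=0.05):
    # Each test individually stays quiet with probability (1 - alpha);
    # all m stay quiet with probability (1 - alpha)**m.
    return 1 - (1 - alpha) ** m

print(familywise_error_rate(1))   # 0.05 -- a single test behaves as advertised
print(familywise_error_rate(20))  # ~0.64 -- more likely than not to "find" something
print(familywise_error_rate(24))  # ~0.71 -- Dr Austin's 24 zodiac hypotheses
```

So with 20 simultaneous tests, the odds are roughly two to one that at least one spurious “real” effect turns up, even if nothing is going on at all.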

In his own study, Dr Austin tested 24 hypotheses, two for each astrological sign. He was looking for instances in which a certain sign “caused” an increased risk of a particular ailment. The hypotheses about Leos’ intestines and Sagittarians’ arms were less than 5% likely to have come about by chance, satisfying the usual standards of proof of a relationship. However, when he modified his statistical methods to take into account the fact that he was testing 24 hypotheses, not one, the boundary of significance dropped dramatically. At that point, none of the astrological associations remained.
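One standard way to make the modification the article describes is the Bonferroni correction: divide the significance threshold by the number of hypotheses tested. The sketch below uses made-up p-values (not Dr Austin’s actual figures) in which two of 24 results sneak under the naive 5% cutoff, mimicking the Leo and Sagittarius findings:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each p-value, whether it survives a Bonferroni correction."""
    threshold = alpha / len(p_values)  # e.g. 0.05 / 24 is roughly 0.0021
    return [p <= threshold for p in p_values]

# Hypothetical p-values for 24 sign/ailment pairs; the first two fall
# below the naive 5% cutoff, the rest do not.
p_values = [0.04, 0.03] + [0.2 + 0.03 * i for i in range(22)]

naive = [p <= 0.05 for p in p_values]
corrected = bonferroni_significant(p_values)

print(sum(naive))      # 2 "findings" at the naive threshold
print(sum(corrected))  # 0 survive the corrected threshold
```

Exactly as in Austin’s study, the apparent associations evaporate once the threshold accounts for how many questions were asked.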

Unfortunately, many researchers looking for risk factors for diseases are not aware that they need to modify their statistics when they test multiple hypotheses. The consequence of that mistake, as John Ioannidis of the University of Ioannina School of Medicine, in Greece, explained to the meeting, is that a lot of observational health studies—those that go trawling through databases, rather than relying on controlled experiments—cannot be reproduced by other researchers. Previous work by Dr Ioannidis, on six highly cited observational studies, showed that conclusions from five of them were later refuted. In the new work he presented to the meeting, he looked systematically at the causes of bias in such research and confirmed that the results of observational studies are likely to be completely correct only 20% of the time. If such a study tests many hypotheses, the likelihood its conclusions are correct may drop as low as one in 1,000—and studies that appear to find larger effects are likely, in fact, simply to have more bias.

So, the next time a newspaper headline declares that something is bad for you, read the small print. If the scientists used the wrong statistical method, you may do just as well believing your horoscope.

Part two to follow…


One Response to “Statistical Medicine, pt1”

  1. Solnushka Says:

    We’re into friend of a friend territory here but…

    A friend of a friend works in medical research and he says (apparently) that you can always tell who needs funding by the latest health scares doing the rounds. In point of fact, he claims that pretty much _all_ health scares are a result of this.

    His example was MRSA, which is some kind of superbug you get from hospitals in the UK. In fact, I only know about it because it was all over the news for about a year.

    He says it’s just one of many specialised hospital bugs, and not even the most dangerous, but that it happened to be what department X had picked on for its next research project…

