Flower

Abortion and breast cancer: Facts, lies, and statistics

Of all the pro-abortion myths, the assertion that the abortion-breast cancer (ABC) link has been “disproven” is among the easiest to debunk.  But you have to have your references with you.  And to really close the sale, you have to understand some basic statistics.

Four factors make this discussion more complex:

  1. Statistical analysis doesn’t “prove” anything; it only manages uncertainty.  An analysis that “shows a statistically significant relationship” between abortion and breast cancer doesn’t definitively prove a relationship exists, and a study that “fails to show a statistically significant relationship” certainly doesn’t prove that no relationship exists.  There is a big difference between failing to find something and proving that it’s not there.
  2. Even if you find an independent statistical relationship in the data, that statistical link doesn’t prove that a causality link exists.  So even if the statistical link were undisputable, it would be wrong for us to say that abortion definitely causes breast cancer until the biological causal mechanism is established.  However, plausible causal mechanisms have been proposed.
  3. Delaying childbirth is itself a risk factor.  Because abortion, by its very nature, delays childbirth, it is easy to see how some might believe that the delayed-childbirth effect is the real culprit, and that abortion is no more a risk factor than simply failing to get pregnant.  However, you need to know that the abortion effect has been measured independently of the delayed-childbearing effect.
  4. The most self-assured antagonists in your audience are sometimes the ones who don’t have a clue about statistical analysis.  All they know is the party line, but they are quick to tell you how smart they are and how stupid you are. However, others in your audience are listening, and they are the ones you are patiently trying to reach.  Reach them with reason, not anger.

To paraphrase Alexander Pope, a little knowledge of statistics is a dangerous thing.  I had a boss once who had taken one class in statistics, and his statistical conclusions were downright horrific.  I got a PhD minor in experimental statistics, and the more I learned about it, the more I learned to be careful.  Therefore, I actually had one of my long-time-ago statistics professors review this post.

Here’s how to respond to the assertion that the ABC link has been “disproven”:

Step 1.  Show your audience a recent study that shows the statistical link; it helps if the paper is co-authored by a person who has previously denied the link.  Here is a paper that is important for two reasons: (a) it is recent, and (b) it was co-authored by Dr. Louise Brinton, the chairperson of the 2003 NCI workshop that declared abortion not to be a risk factor for breast cancer.  This paper, which she co-authored in 2009, reported that abortion was indeed associated with a 40% increase in cancer risk.  (See the odds ratio of 1.4 reported for abortion at the bottom of page 1158.)

The increase in cancer risk measured in this study was statistically significant at the 95% level, which means that if abortion had no real effect, there would be less than a 5% probability of seeing an association this strong by chance alone, i.e., a “false positive.”  (A false positive, in this case, would mean that you detected a difference in cancer risk due to abortion that doesn’t actually exist, a difference that is based solely on random sampling error.)

Keep in mind that if there is no difference in cancer risk due to abortion, and you test at the 95% level of confidence, you will detect the non-existent difference (i.e., get a false positive) 5% of the time, or 1 time in 20, based solely on random sampling error.  Because of the possibility of false positives, we would want to see more studies.
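To see what that 1-in-20 false-positive rate looks like in practice, here is a small simulation (a hypothetical Python sketch; the group size of 500 and the 10% baseline risk are illustrative numbers, not figures from any study):

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def false_positive(n, base_risk):
    """Compare cancer rates in two groups drawn from the SAME risk
    (i.e., the null hypothesis is true), using a two-proportion z-test.
    Returns True when the test wrongly reports a significant difference."""
    cases_a = sum(random.random() < base_risk for _ in range(n))
    cases_b = sum(random.random() < base_risk for _ in range(n))
    pooled = (cases_a + cases_b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return False
    z = abs(cases_a / n - cases_b / n) / se
    return z > 1.96  # |z| > 1.96 corresponds to the 95% level

# Run 1,000 simulated studies in which NO real difference exists.
false_positives = sum(false_positive(500, 0.10) for _ in range(1000))
print(f"False positives: {false_positives} of 1000 studies")
```

Even though no real difference exists in any of these simulated studies, roughly 1 in 20 of them still reports a “significant” link, purely from random sampling error.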

Step 2.  Show your audience a compilation of studies.  The Breast Cancer Prevention Institute (BCPI) has assembled a list of 68 studies that tested for the link.  Nearly half of the studies cited (31 of 68) found a statistically significant increase in cancer risk associated with abortion.  In other words, in 31 studies, the data show that the abortion group has a higher risk than the non-abortion group.  That’s still not enough to prove causality, but we can be confident that the statistical link is real.

When you produce this list for your audience, be sure to disclose that the BCPI has a pro-life agenda, and that your audience should read the studies and decide for themselves.  I say to students, “Don’t let people with opinions, including me, tell you what to believe; you have to do the research yourself.”

The other 37 studies “fail to show that the cancer risk is elevated due to abortion.”  But “failing to show” an elevated risk is not equivalent to “proving” that there is no elevated risk.  A statistical analysis can’t prove that the risk is exactly the same; it can only “fail to show” that the risk is elevated.  Until you understand this point, do not attempt to explain Steps 3 and 4; just go directly to the Conclusion (below).

Step 3 (optional).  Explain the concepts of statistical significance, false positives, and false negatives.  The point is tricky to explain: data can “show that the cancer risk due to abortion is elevated at a statistically significant level,” or they can “fail to show that the cancer risk is elevated at a statistically significant level,” but the data can never show that the cancer risk is not elevated at all.

Before a researcher performs the statistical test, he must first set the “level of significance” for that test, usually 1%, 5%, or 10%.  This is the rate of “false positives” he is willing to accept.  In other words, if he sets the level of significance at 5% (alpha = 0.05) and the difference he finds between the “control” group and the “test” group (e.g., between the non-abortion group and the abortion group) is statistically significant, then there is only a 5% chance that a difference this large would arise from random sampling error alone when no real difference exists.  There is only a 5% chance that he will measure a positive difference that isn’t really there (i.e., a false positive).

But the lower he sets the likelihood of false positives, the greater the opportunity for false negatives (i.e., stating “no difference” when one exists).  If his data show an increase in risk within the abortion group, but the test reaches only an 89% confidence level, he still has to report that the elevated risk from abortion is “not statistically significant at the 95% level.”  The evidence clears the 89% bar but not the 95% bar, so he has to report “no difference.”  Consequently, a failure to find a statistically significant risk elevation is not proof that the risk isn’t elevated.  It might only mean that he does not have enough data to confirm that the measured risk elevation is “statistically significant.”
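That trade-off can be demonstrated with another small simulation (again a hypothetical Python sketch; the 10% vs. 14% risks and the group size of 500 are illustrative numbers):

```python
import random

random.seed(2)  # fixed seed so the sketch is repeatable

def detects_difference(n, risk_a, risk_b, z_cutoff):
    """Two-proportion z-test: does one simulated study find a
    statistically significant difference between the two groups?"""
    cases_a = sum(random.random() < risk_a for _ in range(n))
    cases_b = sum(random.random() < risk_b for _ in range(n))
    pooled = (cases_a + cases_b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return False
    return abs(cases_a / n - cases_b / n) / se > z_cutoff

# Here a real difference DOES exist (10% vs. 14%), yet many studies
# miss it, and the stricter the cutoff, the more studies miss it.
missed = {}
for label, z_cutoff in [("90%", 1.64), ("95%", 1.96), ("99%", 2.58)]:
    missed[label] = sum(
        not detects_difference(500, 0.10, 0.14, z_cutoff) for _ in range(1000)
    )
    print(f"Testing at the {label} level: missed the real difference "
          f"in {missed[label]} of 1000 studies")
```

Every one of those misses is a false negative: a real elevation in risk that the study “fails to show.”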

Step 4 (optional).  Explain that the difference between the groups is difficult to show at a statistically significant level because the difference is not all that big.  The ambient cancer risk among women is about 10%.  Abortion appears to raise that risk to about 13% or 14%, which is a 30% or 40% relative increase in cancer risk.  An absolute difference of only 3 or 4 percentage points is difficult to measure statistically (you need a large dataset to do it), but it’s an important difference to the estimated 300,000 or 400,000 women who got cancer because of their abortions.
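The arithmetic behind those percentages can be spelled out in a few lines (a minimal Python sketch; the 10% baseline and the 40% relative increase come from the discussion above, while the cohort size of one million women is purely illustrative):

```python
baseline_risk = 0.10      # ambient lifetime breast cancer risk, about 10%
relative_increase = 0.40  # a 40% relative increase (odds ratio of about 1.4)

elevated_risk = baseline_risk * (1 + relative_increase)
absolute_increase = elevated_risk - baseline_risk

print(f"Elevated risk: {elevated_risk:.0%}")        # 14%
print(f"Absolute increase: {absolute_increase:.0%}")  # 4 percentage points

# A 4-percentage-point absolute difference is hard to detect in a
# single study, but applied to a large population it represents
# a large number of additional cases.
cohort = 1_000_000  # illustrative number of women
extra_cases = round(cohort * absolute_increase)
print(f"Extra cases per {cohort:,} women: {extra_cases:,}")  # 40,000
```

This is why the relative increase (30% or 40%) sounds large while the absolute increase (3 or 4 points) is statistically hard to pin down in any one study.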

An abortion-related increase in cancer risk from 10% to only 13% is enough to cause more than 300,000 deaths since Roe (source), which works out to about 8,000 women per year.  But because measuring an increase of this magnitude in an individual study is difficult, the opportunity for false negatives is high, which could explain why some of the studies in the BCPI compilation fail to find the increase.

Conclusions.

  1. We can never be fairly criticized for saying that abortion is a possible risk factor for breast cancer.  According to a recent compilation, 31 of 68 studies have shown a statistical relationship, even if the causal mechanisms have not been established.
  2. The accusation that the ABC link has been “proven” false is made by people who don’t understand how science and statistics work.  The lower a researcher establishes the likelihood of a false positive (normally 1%, 5%, or 10%), the greater the opportunity for a false negative (i.e., stating “no difference” when one exists).
  3. There are many in the medical community who believe there is more evidence for the link than against it.

More Information: