When better surveillance confounds public health statistics

In public health and epidemiology settings, ‘surveillance’ refers not to sinister and intrusive monitoring of the population, but to keeping records on trends in health. Some infectious diseases, for example, are subject to mandatory reporting, so public health agencies can follow their rise and fall over time to determine whether education, treatment, and outreach campaigns are working. Surveillance is improving all the time, and it can have a profound effect on public health statistics, something journalists often don’t account for, particularly when they are looking to work an angle for a story.

Public health agencies use a variety of surveillance tools, and putting a new one in place can alter statistics, at least for the several years it takes reporting to stabilise. One such change is making reporting mandatory for a condition that wasn’t reportable before, or adding a specific field to existing forms. A classic example was an addition to the forms used for death certificates: doctors were asked to indicate whether patients were pregnant at the time of death, or had been pregnant or recently delivered around that time, which meant that deaths directly caused by labour and delivery appeared alongside, say, deaths of new mothers killed in car accidents. Historically, deaths associated with pregnancy were harder to track, because it meant poring through anonymised medical records or scanning death certificates in the hope that physicians had jotted down notes.

Suddenly, the pregnancy-associated death rate seemed to go up, a subject of considerable distress in late 2015. Was the United States failing pregnant patients? With an increase of several points since 1990, what was the US doing wrong? Perhaps nothing: perhaps deaths that had been hidden before were finally coming to light. Or perhaps the death rate really was increasing, in which case the trend will become apparent over several years of mandatory reporting on death certificates. Likewise, public health departments may ask providers in their area to begin reporting given conditions as part of health initiatives or for research. The inevitable immediate result is that rates go up, or perhaps I should say ‘go up.’
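To see how a change in reporting alone can move the headline number, here is a minimal sketch in Python. Every figure in it is invented for illustration: the true death rate is held flat, while the fraction of deaths actually captured on certificates rises after a checkbox is added to the form.

```python
# Hypothetical illustration: a flat true rate appears to rise as
# surveillance improves. All numbers are made up.
true_rate = 17.0  # true pregnancy-associated deaths per 100,000

# Estimated fraction of such deaths captured on death certificates,
# improving after a pregnancy checkbox is added to the form.
capture_rate = {2002: 0.55, 2006: 0.70, 2010: 0.85, 2014: 0.95}

for year, captured in sorted(capture_rate.items()):
    observed = true_rate * captured
    print(f"{year}: observed rate {observed:.1f} per 100,000")

# The observed rate climbs by more than 70 per cent,
# with no change at all in the underlying rate.
```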

Public health agencies may also reclassify various categories, which causes similar statistical changes. One of the best-known examples is the CDC’s reframing of ‘obesity,’ which shifted millions of people in the United States into the obese category overnight. These people went from ‘normal weight’ to ‘obese’ without any actual weight change, but the net effect made it seem as though ‘obesity’ rates were rising in the United States. Every time agencies redefine something, it can cause a similar effect.
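A toy sketch of the same effect, again with invented numbers: the BMI distribution and cutoffs below are illustrative only, not the actual definitions any agency used. Nobody gains a gram, but lowering the threshold reclassifies a large slice of the population overnight.

```python
import random

random.seed(0)
# Hypothetical population of BMI values (illustrative distribution only).
bmis = [random.gauss(26.5, 4.5) for _ in range(100_000)]

def rate_at_or_above(cutoff):
    """Fraction of the population at or above a given BMI cutoff."""
    return sum(b >= cutoff for b in bmis) / len(bmis)

# Illustrative cutoffs: the threshold moves, the people do not.
old_cutoff, new_cutoff = 27.8, 25.0
print(f"rate at cutoff {old_cutoff}: {rate_at_or_above(old_cutoff):.1%}")
print(f"rate at cutoff {new_cutoff}: {rate_at_or_above(new_cutoff):.1%}")
# Same people, same weights; only the definition changed.
```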

Those casually skimming health trends and outcomes might jump on things like this. More people are obese than ever before! Well, yes, technically, according to the raw statistics. But if you examine them more closely and control for factors like the redefinition, you suddenly see a very different picture. If your goal is to scaremonger about fat in the United States, of course, you don’t do that; you actively shy away from a closer look, because you don’t want to see something that might disprove your pet theory that people are getting fatter. If your goal is to report with more nuance, maybe you look more closely, but if you don’t know what you’re doing, you can still miss important signs.

Similarly, public health agencies typically don’t make huge public announcements about changes in surveillance policy, since they’re tweaking it all the time. That means that if you see, say, syphilis rates going up, you might be under the impression that more people are being infected with and treated for syphilis, without realising that a public health agency made the infection reportable. (Incidentally, most state departments of public health already list syphilis among their reportable diseases, along with a number of other STIs.) You may also not realise that the background rate of infection quietly ebbs and flows. For example, you might see that the rate of new infections is decreasing or staying stable, even as more people are living with an illness. How does that work? Treatment advances are letting people survive longer, which means that more people overall are infected at any given time, but that doesn’t necessarily translate to a bad public health outcome. In fact, it’s a good one.
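Epidemiologists call the rate of new cases ‘incidence’ and the number of people currently living with a condition ‘prevalence,’ and the two can move in opposite directions. A minimal sketch, with invented figures:

```python
# Minimal sketch of incidence vs. prevalence, with invented numbers.
# Prevalence can grow when people survive longer, even as new
# infections fall.
new_cases_per_year = [5000, 4800, 4600, 4400, 4200]  # incidence: declining
years_lived_with_illness = [8, 10, 12, 15, 18]       # survival: improving

for cases, duration in zip(new_cases_per_year, years_lived_with_illness):
    # Steady-state approximation: prevalence ~ incidence * average duration.
    prevalence = cases * duration
    print(f"{cases} new cases/yr, {duration} yrs survival "
          f"-> ~{prevalence:,} people living with the illness")
```

Here new infections fall by 16 per cent while the number of people living with the illness nearly doubles: a success story that a careless reading of the raw count would present as an epidemic.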

Public health agencies often throw you a bone by indicating that they’re using ‘new’ reporting and analysis methods for a given disease, which is a tipoff that the statistics are likely skewed and will remain so for some time. That skew is almost always upwards, because better tracking brings more cases to light. (Think about how you put on a black shirt and suddenly all of your white cat’s hair shows up. It was always there, but your shirt made it more obvious.) With a little sifting, you can see when these ‘new’ techniques were put in place and how much they’ve changed over the decades. Statistics from 1990 might not be fairly comparable with those from 2002, which in turn might not be comparable with those from 2011. You have to be able to account or adjust for these things for commentary on health statistics to be useful.
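One hedged way to make such comparisons, sketched below with entirely invented counts and completeness estimates, is to divide what was reported in each era by an estimate of how complete reporting was at the time.

```python
# Invented example: adjusting observed counts for estimated reporting
# completeness before comparing across surveillance regimes.
reported = {1990: 3_100, 2002: 4_500, 2011: 5_200}   # reported cases
completeness = {1990: 0.50, 2002: 0.75, 2011: 0.90}  # est. fraction captured

for year in sorted(reported):
    adjusted = reported[year] / completeness[year]
    print(f"{year}: reported {reported[year]:,}, adjusted ~{adjusted:,.0f}")

# Reported counts rise roughly 68% from 1990 to 2011, but the adjusted
# estimates (6,200 -> 5,778) suggest little real change at all.
```

The completeness estimates are doing all the work here, of course, which is exactly why honest commentary has to say where they came from.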

Casual readers don’t realise this, something unscrupulous people shamelessly take advantage of, because it’s easier to promote an argument when you think your audience is ignorant. This is insulting to readers, and it doesn’t benefit them in the long term; instead it traps them in a landscape where they never come to understand public health issues. For well-informed readers, meanwhile, the abuse of statistics makes such pieces less effective, and can even undermine otherwise excellent points about a given issue, making it a failure for journalism.

Image: Public Health Dentistry, Trinity Care Foundation, Flickr