A surge in coronavirus cases in September pushed Wisconsin near the top of a dubious national ranking — most new COVID-19 cases per capita.

Predictably, that spurred a related jump in the percentage of COVID-19 tests yielding positive results in the state. The rise has prompted renewed scrutiny of the state’s COVID-19 response. Some claim this is proof Gov. Tony Evers’ mask mandate isn’t working, while others say it means we aren’t taking the threat — or the mask mandate — seriously enough.

But the conservative MacIver Institute took another tack, zeroing in on the formula used to establish the positivity rate to claim that errors by the state Department of Health Services are what is actually causing the increase in that key measure.

“Bad Math Driving Wisconsin’s Exploding Positive Test Rate,” declared a Sept. 23, 2020, headline that was shared widely on social media.

The MacIver story went on to make a series of claims building on that thesis: “A fundamental flaw in how the Evers’ Administration calculates Wisconsin’s daily COVID-19 positive test rate has excluded hundreds of thousands of test results and led to a wildly distorted picture of the state’s progress in confronting the virus,” said the story’s first paragraph.

It later quoted Ryan Westergaard, DHS chief medical officer, as explaining the positivity rate is calculated by dividing the number of positive cases by the number of people tested — not the total number of tests. The MacIver piece calls this a “shocking admission.”

But it’s only shocking if you haven’t been paying close attention. This is how DHS has consistently calculated test positivity, and since mid-August the department has even explicitly laid this out in the COVID-19 dashboard. What’s more, it’s an acceptable way — even the preferred way, by some — to do the calculation, since it weeds out multiple tests of the same person.

Here’s why this “bad math” claim is ridiculous.
Rate calculation hasn’t changed amid spike

The most obvious error in this claim is blaming math for the “exploding” positive test rate. The seven-day average of tests with positive results — a more reliable marker than daily totals, since it smooths out daily jumps caused by changes in testing volume — rose slowly through August, from about 6% to 8%. It then spiked starting in September, pushing the seven-day average close to 20% by the end of the month.

The MacIver piece posits this is due to errant calculations. But this rate has been calculated the same way throughout this entire stretch. So whatever one’s objections to the methodology, if the same formula is used at the beginning and the end of a given time period, and the rate increases dramatically in that span, the increase is real.

The MacIver piece notes in its critique that test positivity is important because it’s one of six so-called gating criteria Evers is using to shape state policy on the pandemic. If test positivity is being used incorrectly, that would make it harder to achieve the thresholds set by Evers, and for the state to reopen fully.

Test positivity is, indeed, one of the criteria. But the MacIver implication that it lacks legitimacy falls flat. The criterion doesn’t call for a specific percentage level to be met. Rather, it calls for a downward trajectory for 14 days. And, as we noted, the same measuring stick — the formula — has been used throughout.

State approach to test positivity is widely used, even preferred

That brings us to the heart of this claim, that DHS is using bad math in calculating the percentage of positive tests. The MacIver piece itself elaborates on this, asserting: “If the goal is to calculate the daily positive test rate, then DHS is using the wrong numerator and denominator.”

DHS calculates percent positivity by dividing the number of people with positive test results by the number of people tested in a given span.
(Their dashboard includes daily counts as well as seven-day and 14-day averages.)

The MacIver piece asserts this is incorrect, saying the state should instead be basing the calculation on the raw number of tests, which would deliver a lower percentage.

But the state’s approach is actually both widely used and preferred, because it prevents people who are tested often from skewing the data. These aren’t people who feel sick or exhibit symptoms and seek a test — the typical sort of person tested. Rather, these are people who are tested regularly based on their position (such as a front-line health care worker) or situation (such as living in a nursing home where an outbreak has occurred). Counting each of those negative test results would give an unrealistic picture of how frequently positive tests are occurring in the population — the core question all testing is trying to answer.

“Our data report individuals tested,” Westergaard said in a June 11, 2020, media briefing. “So, if an individual was tested more than once because they were being followed to see if they cleared the infection, or if they were tested a couple times weeks apart, they would be considered a single case and not multiple cases in our data.”

Johns Hopkins University, which operates a COVID dashboard that has become a go-to national resource in the pandemic, endorses this approach. Officials there said some places aren’t using it only because their data doesn’t allow them to break it down this way.

“We feel that the ideal way to calculate positivity would be number of people who test positive divided by number of people who are tested,” says an explainer posted on the Johns Hopkins dashboard. “We feel this is currently the best way to track positivity because some states include in their testing totals duplicative tests obtained in succession on the same individual, as well as unrelated antibody tests. However, many states are unable to track number of people tested, so they only track number of tests.”
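The dilution effect described above can be sketched with a few lines of Python. The person IDs and results below are invented for illustration, not DHS data:

```python
# Sketch of the two calculations: per-test vs. per-person positivity.
# Person "C" stands in for a front-line worker tested repeatedly;
# all IDs and results here are hypothetical, not DHS data.
tests = [
    ("A", True),    # one positive result
    ("B", False),
    ("C", False), ("C", False), ("C", False),
    ("C", False), ("C", False),
]

# Raw-test formula (what MacIver argued for): positive tests / all tests
per_test = sum(result for _, result in tests) / len(tests)

# Per-person formula (the DHS approach): positive people / people tested
positive_people = {pid for pid, result in tests if result}
people_tested = {pid for pid, _ in tests}
per_person = len(positive_people) / len(people_tested)

print(f"per test:   {per_test:.1%}")    # C's repeat negatives dilute this rate
print(f"per person: {per_person:.1%}")
```

One positive person out of three people tested yields a much higher rate than one positive result out of seven raw tests, which is exactly why repeat testing of healthy workers pushes the per-test number down.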
The CDC also notes this is a standard way of approaching test positivity calculations. Its website lists it as one of three formulas used by various agencies.

The CDC calculates positivity by dividing the number of positive tests by the total number of tests taken. It notes it only uses this approach because it doesn’t have access to the data state and local health departments have to identify — and separate out — repeat tests from the same individual. Some states divide the number of people with positive tests by the total number of tests taken. Some states use Wisconsin’s approach, dividing the number of people with positive tests by the number of people tested.

The New York Times — which also runs a COVID dashboard — notes at least 18 states report tests like Wisconsin does, based on the number of people tested rather than the number of tests.

None of this national context was included in the MacIver piece.

DHS spokeswoman Elizabeth Goodsitt said Sept. 29, 2020, the agency was in the process of launching an updated dashboard that shows test positivity calculated using both approaches — number of people and number of tests. She said both measures are “informative to the response effort in Wisconsin.”

One more quick note for some general context on this metric. The COVID Tracking Project, another dashboard operator, said in a Sept. 22, 2020, blog post that test positivity is useful but also “one of the most commonly misunderstood metrics for monitoring the COVID-19 pandemic.” It notes percent positivity rates can vary greatly based on who officials decide to test.

“The choice of who gets tested is based on state- or county-specific criteria, but is often made based on how sick people appear to be, which in turn influences test positivity,” the blog post said. “If a state only tests people who have clear symptoms of the virus, it will likely have a higher test positivity than one that is also testing asymptomatic people.”
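To make the three formulas concrete, here is a minimal sketch applying each one to the same set of invented records. Person "A" tests positive twice, "B" once negative, and "C" (someone tested often, such as a health care worker) four times negative; none of these numbers are real data:

```python
# Three positivity formulas compared on one hypothetical data set.
# All person IDs and results are invented for illustration.
tests = [("A", True), ("A", True), ("B", False),
         ("C", False), ("C", False), ("C", False), ("C", False)]

positive_tests = sum(result for _, result in tests)              # 2
total_tests = len(tests)                                         # 7
positive_people = len({pid for pid, result in tests if result})  # 1
people_tested = len({pid for pid, _ in tests})                   # 3

cdc_style = positive_tests / total_tests     # positive tests / all tests
hybrid = positive_people / total_tests       # positive people / all tests
dhs_style = positive_people / people_tested  # positive people / people tested

for name, rate in [("per test", cdc_style),
                   ("people per test", hybrid),
                   ("per person", dhs_style)]:
    print(f"{name}: {rate:.1%}")
```

With the same records, the per-person rate comes out highest, because repeat tests of the same individuals — whether positive or negative — no longer inflate either the numerator or the denominator.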
Our ruling

A MacIver article widely shared on Facebook says “bad math” is to blame for Wisconsin’s “exploding” positive test rate.

This is wrong on multiple fronts. The article asserts manipulated math is behind the increase in test positivity, but the increase is actually driven by the rising number of people testing positive.

There is no bad math. The state DHS uses a methodology (tallying people tested rather than raw tests) that is widely used by health agencies around the country. It is considered a more accurate approach because it ensures people who are tested on a regular, even daily, basis — such as health care workers — don’t skew the data.

And whatever the quibbles with the methodology, it wouldn’t cause the increase in the test positivity rate, because the formula DHS used to calculate the rate was the same throughout the time period.

That makes this claim both false and ridiculous, or as we call it, Pants on Fire.

Note: DHS updated its COVID-19 dashboard Sept. 30, the day after this story published, to include test positivity in terms of both tests and people. That does not affect the rating for this item, since the per-person method remains on the DHS website and remains an acceptable, even preferred, approach.