
If overpayments are found, then the extrapolation recoupment number will go up; if underpayments are found, the extrapolation will go down.

Precision matters – in everything. Especially in Medicare statistical sampling and extrapolations.

But that’s not how the world works.

In the fantasy world of statistical sampling and extrapolation inhabited by program integrity auditors, the statisticians long have grown accustomed to cutting corners and doing substandard and sloppy work. And the Administrative Law Judges (ALJs) have been hoodwinked into accepting this inaccurate work based on a dangerous myth.

Yes, there is a dangerous myth in the world of Medicare auditing, statistical sampling, and extrapolations. What is it?

Well, let’s start with the Program Integrity Manual (PIM), which as we all know contains a number of dos and don’ts that are routinely ignored by auditors. In the infamous Chapter 8 on statistical sampling and extrapolation, the PIM all but states that the auditor can pick and choose which rules they wish to comply with. It doesn’t say that in so many words, but here is the tortured language of Section 8.4.1.1:

Failure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.

So here is what that means. This is Washington, D.C. gobbledygook for an insinuation that the auditor can cut corners, take shortcuts, skip steps in the statistical methodology, make up data (or use the wrong data in crucial formulas), and even use the wrong formulas, yet still the extrapolation for the most part will be accepted.

So, on the one hand, the healthcare provider is audited using the strictest possible interpretation of the Local Coverage Determinations (LCDs). A small typo, a missing abbreviation, the smallest detail can disqualify a claim, and then its value will get multiplied hundreds of times in the extrapolation.

On the other hand, it is clear that the auditor is not held to the same standard. Far from it. There is no need to ask if this is fair, because we all know it is not.

Let’s look at this in more detail.

“Plus or Minus”

Small things add up to large things. We know this because it is possible to actually measure statistical work and assess how bad it is. One measure is precision. This is the “plus-or-minus” number.

Polls are statistical extrapolations. “The politician has 45-percent support in the polls, plus or minus 5 percent.” This means the support of the politician is somewhere between 40 and 50 percent, or 45 minus 5 percent and 45 plus 5 percent. This is known as a precision of 5 percent. Using the arcane language of statistics, the 45 percent is the so-called “point estimate.”

But there is another important number. It tells how sure you can be that the precision is plus or minus 5 percent. The standard for this is 95 percent. This means that one can be 95-percent sure that the true support is somewhere between 40 and 50 percent. The term for this is “confidence.” It is fairly standard to use a 95-percent confidence. But the PIM gives much more leeway to the auditors. They can use 90 percent. They need to be only 90-percent sure.
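The arithmetic behind these two numbers can be sketched in a few lines. This is a minimal illustration using the poll figures from the text; the standard error here is a hypothetical number chosen so that the 95-percent margin comes out to the article’s plus-or-minus 5 points, and the critical values are the usual normal-approximation constants:

```python
# Minimal sketch (hypothetical numbers) of how the "plus-or-minus"
# precision comes from a standard error and a confidence level.
# Normal-approximation critical values: 1.96 for 95%, 1.645 for 90%.

def margin_of_error(std_error: float, z: float) -> float:
    """Half-width of a two-sided confidence interval."""
    return z * std_error

point_estimate = 45.0   # the poll's 45-percent support
std_error = 2.55        # hypothetical standard error, in percentage points

for label, z in (("95% confidence", 1.96), ("90% confidence", 1.645)):
    m = margin_of_error(std_error, z)
    print(f"{label}: {point_estimate - m:.1f} to {point_estimate + m:.1f} "
          f"(plus or minus {m:.1f} points)")
```

Note that for the same data, dropping from 95-percent to 90-percent confidence shrinks the stated margin, which is exactly the extra leeway the PIM hands the auditors: the interval looks tighter only because less certainty is demanded.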

But that is not what happens. Far from it.

We routinely find Medicare auditors handing in extrapolations with precision far worse than 5 percent. They rarely are that precise. In Medicare, precision can be plus or minus 10, 15, 20, 30 percent – or even worse.

With a precision of 30 percent, a point estimate of 55 percent means the true figure is somewhere between 25 and 85 percent. But that seems to be good enough for government work.

Effect on Medicare Extrapolations

OK, enough statistics. What does this really mean? In Medicare audits, the extrapolated overpayment amount, the recoupment being demanded, often is a number that is not very accurate. And in some cases, this can mean a very great amount of money, “plus or minus.”

Dangerous Myth

So, what do the auditors and the PIM do about this? They rationalize the imprecision by saying that instead of taking the “point estimate,” they will take the number at the lower end of the confidence interval. So, they would take the 40 percent in the first example, for the politician, and they would take the 25 percent in the last example.

And this is how the ALJs have been told to rationalize sloppy statistical work. They will say something like “well, we can be 90-percent sure that the number is at least this.” And they will tell the healthcare provider that poor precision actually works in their favor.
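The rationalization reduces to very simple arithmetic: the demand is the point estimate minus the margin of error, so a worse margin mechanically produces a smaller demand. A sketch, using purely illustrative dollar figures (not from any actual audit):

```python
# Sketch (illustrative numbers only) of the lower-bound rationalization:
# the recoupment demand is the point estimate minus the margin of error,
# so the demand drops as the precision gets worse.

def lower_bound_demand(point_estimate: float, precision_pct: float) -> float:
    """Demand at the lower end of the confidence interval."""
    return point_estimate * (1 - precision_pct / 100)

extrapolated = 1_000_000.0  # hypothetical extrapolated overpayment, in dollars

for precision in (5, 15, 30):
    demand = lower_bound_demand(extrapolated, precision)
    print(f"precision plus or minus {precision}%: demand ${demand:,.0f}")
```

This is the whole of the “poor precision helps the provider” argument: the sloppier the sampling, the wider the interval, and the lower the number at its bottom edge.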

After all, the worse the precision, the farther the number on the lower side will drop. So everyone should be happy. In a recent case, a statistician hired by the ALJ, responding to complaints that the precision was very poor, somewhere around 36 percent, wrote:

Providers should be happy with poor precision. The more imprecise, the better it is for the provider, because we always ask for the lower bound of the confidence interval.

And the ALJs go along with this. Do you see what has happened? The Medicare auditing system in the United States has degenerated into an official policy of poor standards for statistical sampling and extrapolations, and although there are guidelines in the PIM for the auditors, they are not required to follow them.

The PIM is Wrong

The problem with all of this is that it is wrong, wrong, wrong. How do we know this? Recently, we developed the mathematical proof that about a third of the time, the true number actually will be lower than the original recoupment demand. So, the PIM is wrong. It is horribly wrong; it is mathematically wrong. And this means that the auditor’s arguments, and the ALJ decisions that accept lousy precision based on this spurious logic, also are wrong. That poor precision works in favor of the provider is a myth, and it can be proved to be a myth.
