Since 1992, payments to physicians under Medicare Part B have been based on the Resource-Based Relative Value Scale (RBRVS). The RBRVS contains three components: the Work Relative Value Unit (RVU), the Practice Expense RVU, and the Malpractice Expense RVU. Each of these components measures, in relative terms, the resources consumed when a medical provider performs a given service or procedure for a beneficiary.

While there is a complex process to create the individual RVU values, for the most part they rely upon a time calculation that is maintained by the Specialty Society Relative Value Scale Update Committee, or RUC for short. The RUC, which is managed by the American Medical Association (AMA), conducts research on the amount of time it takes to perform a procedure and reports its findings to the Centers for Medicare & Medicaid Services (CMS), which then uses the results to create the RVU components. While the Practice Expense RVU is influenced by this data, it is the Work RVU that is directly based on the RUC time estimates.

These time estimates are created based on a survey system devised by the RUC, which represents only a couple dozen specialties through national medical specialty societies. Every five years, the RUC conducts a major review of the time estimates, and CMS makes adjustments to the RBRVS based upon ongoing recommendations. The reviews are conducted via a survey completed by a sample of physicians from each of the member specialty societies.

For 2016, the RBRVS database contains some 16,289 procedures, including those with -26 and TC modifiers. Of these, 7,987 have an associated work RVU, and of these, 7,405 have some number of minutes associated with the work RVU. Time is estimated in minutes and is broken down into four parts:

  1. Pre-service time, which includes physician services provided from the day before the operative service until the time of the operative service, which may include hospital admission workup, pre-op evaluations, scrubbing, waiting, positioning the patient, etc. 
  2. Intra-service time, which is defined as face-to-face time for E&M visits and skin-to-skin time for procedures. This includes all services required to complete the service or procedure. 
  3. Post-service time includes all services provided on the day of the procedure that are related to the procedure, including post-op care, patient stabilization, recovery room care, etc. 
  4. Follow-up care, which includes all follow-up visits and care related to the procedure during the post-op follow-up period. 

Add these up and you get the total time in minutes. For example, for procedure code 49000 (exploratory laparotomy, exploratory celiotomy with or without biopsy), the pre-service time is 60 minutes, the intra-service time is 90 minutes, the post-service time is 30 minutes, and the follow-up time is 124 minutes. Therefore, the total amount of time estimated for this procedure is 304 minutes, or 5.07 hours. For procedure code 99213, the pre-service time is three minutes, the intra-service time is 15 minutes, the post-service time is five minutes, and the follow-up time is zero minutes. For this E&M visit, then, the total time is 23 minutes, or 0.38 hours. 
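The arithmetic above is simple enough to sketch in a few lines of Python. This is purely illustrative; the function name and the minute values for codes 49000 and 99213 are taken from the article's own examples, not from an authoritative RVU file.

```python
def total_time_hours(pre, intra, post, follow_up):
    """Sum the four time components (in minutes) and return (total minutes, hours)."""
    minutes = pre + intra + post + follow_up
    return minutes, round(minutes / 60, 2)

# CPT 49000 (exploratory laparotomy): 60 + 90 + 30 + 124 minutes
print(total_time_hours(60, 90, 30, 124))  # (304, 5.07)

# CPT 99213 (established patient E&M visit): 3 + 15 + 5 + 0 minutes
print(total_time_hours(3, 15, 5, 0))      # (23, 0.38)
```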

Of concern is that these time estimates are used to determine whether the hours that physicians are reporting exceed reasonable expectations. Many of us have heard of the case of Dr. Angel S. Martin of Newton, Iowa. On Jan. 15, 2010, the U.S. Attorney's Office in that state published a press release on Dr. Martin's conviction on 31 counts of healthcare fraud. According to the press release, based on the minutes associated with Dr. Martin's reported procedure codes, he would have exceeded 24 hours of care in a day, creating an unbelievable scenario. How did they calculate this? Take the number of minutes associated with each procedure reported by the provider, multiply it by the billed frequency, sum the products, and divide by 60, and you will have the number of assessed hours for that provider. I use the word assessed because there is no way to tell what the actual number of worked hours is unless someone follows the physician around with a stopwatch, which typically doesn't happen.
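The assessed-hours calculation described above can be sketched as follows. The minutes-per-code figures below are made-up placeholders for illustration, not actual RUC values, and the function name is my own.

```python
def assessed_hours(claims, minutes_per_code):
    """Sum (RUC minutes x billed frequency) over all reported codes, divided by 60.

    claims: {cpt_code: frequency billed}
    minutes_per_code: {cpt_code: total estimated minutes for that code}
    """
    total_minutes = sum(freq * minutes_per_code[code] for code, freq in claims.items())
    return total_minutes / 60

# One hypothetical day of billing (illustrative minute values)
ruc_minutes = {"99213": 23, "99205": 67}
day_of_claims = {"99213": 25, "99205": 10}
print(assessed_hours(day_of_claims, ruc_minutes))  # 20.75 assessed hours in one day
```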

How does a physician end up with such a high number? Let's look at a couple of examples. The estimated time for a 99203 is 29 minutes. The estimated time for a 99205 is 67 minutes, a difference of 38 minutes, or just over half an hour. Let's say that a provider sees 10 new patients a day coded with a 99203. That would equate to 1,160 hours per year based on a five-day workweek and 48 weeks a year. If, however, the visits were upcoded to a 99205, the assessed time would be 2,680 hours per year. So, upcoding is one way that physicians can report a higher-than-expected assessed time. The other has to do with the use of non-physician practitioners (NPPs). Typically, NPPs will handle much of the follow-up load reported by a physician, and in surgical cases, the NPP may be involved in prep, opening, and closing, which means a significant portion of the total assessed time was never actually worked by the physician. But perhaps the most prevalent contributor to high time readings is the fatally flawed methodology relied upon by the RUC. This isn't me talking, but rather dozens and dozens of studies conducted over the past 10 years, all critical of the RUC, its lack of transparency, CMS's tendency to take nearly all of its recommendations without question, and the fact that there is no oversight of the process or the methods.
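The upcoding arithmetic above works out as follows; this is a minimal sketch using the article's own assumptions (10 visits a day, five days a week, 48 weeks a year), with a function name of my own choosing.

```python
def annual_assessed_hours(minutes_per_visit, visits_per_day=10,
                          days_per_week=5, weeks_per_year=48):
    """Annual assessed hours for a single visit code billed at a steady rate."""
    annual_visits = visits_per_day * days_per_week * weeks_per_year
    return minutes_per_visit * annual_visits / 60

print(annual_assessed_hours(29))  # 99203 at 29 minutes: 1160.0 hours/year
print(annual_assessed_hours(67))  # 99205 at 67 minutes: 2680.0 hours/year
```

The same ten-patient day thus yields more than double the assessed hours when coded at the higher level, which is exactly why assessed time is used as a screen for upcoding.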

In May of last year, the U.S. Government Accountability Office (GAO) issued a report in which it indicated that the recommendations from the AMA “may not be accurate due to process and data-related weaknesses.” The GAO further noted that the U.S. Department of Health and Human Services (HHS) “does not plan to inform the public of services identified by the RUC as potentially misvalued.” 

But wait, there's more. In a study conducted by Cromwell et al., the authors concluded that "national estimates of visit duration overestimate the combination of face-to-face time and time spent on visit-specific work outside the examination room by 41 percent." In the 2010 National Ambulatory Medical Care Survey (NAMCS), the preponderance of physician office visits lasted between 11 and 15 minutes, nearly half the duration reported by the AMA and about 40 percent of that reported by the RUC. A landmark study conducted by Health Economics Research, Inc. concluded that "all three analyses suggest that current time estimates that have been used to develop resource-based practice expenses, and physician work RVUs are overstated. Medical and surgical specialties are currently spending less time with patients than indicated by the guideline time associated with the 15 major office-based E&M CPT® codes." On July 20, 2013, the Washington Post published an article based on a meta-analysis of the RUC and physician time, concluding that physician time estimates, as used to develop the work RVUs, are overstated by as much as 100 percent.

So what does this mean for the typical physician? Well, a lot, actually. If we accept that most of the research points to the RUC time estimates being significantly overstated (from 40 to 127 percent, depending on the study you read), it puts physicians at a greater risk for an audit. A general rule is that if the assessed time exceeds 5,000 hours per year, which is 2.5 times the fair market value time of 2,000 hours, the physician is at a higher risk of an HHS Office of Inspector General (OIG) audit. I have participated as a statistical expert in a number of criminal and civil fraud cases brought against physicians for which the primary complaint was that the physician worked more hours than what was considered reasonable. In many cases, I personally interviewed the physicians and reviewed their work logs, and they were working significantly fewer hours than were being reported. After an audit that supported that their coding was appropriate, the only conclusion one could come to was that the time estimates were significantly overstated.
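The rule of thumb above reduces to a one-line comparison. This is an illustrative sketch of that screening logic only, with names I have chosen; it is not an actual OIG audit criterion or tool.

```python
# Rule of thumb from the text: assessed time above 2.5x a 2,000-hour
# benchmark year (i.e., 5,000 assessed hours) suggests elevated audit risk.
BENCHMARK_HOURS = 2000
AUDIT_THRESHOLD_HOURS = 2.5 * BENCHMARK_HOURS  # 5,000 hours

def elevated_audit_risk(assessed_annual_hours):
    """Flag providers whose assessed annual hours exceed the threshold."""
    return assessed_annual_hours > AUDIT_THRESHOLD_HOURS

print(elevated_audit_risk(2680))  # False: under the threshold
print(elevated_audit_risk(5200))  # True: over 5,000 assessed hours
```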

In the end, the entire issue of assessed time presents a confounding paradox for medical providers. On one hand, calculating assessed time is a great way to get a handle on potential aberrant coding and billing practices, and to estimate audit risk. On the other hand, the use of time, at least based on the current black-box methods employed by the RUC, is not reliable enough to prove any wrongdoing on the part of the provider. But that does not stop the government from trying. In the case of United States v. Dilip Kachare, the physician defendant was indicted on counts of criminal fraud based almost exclusively on the amount of assessed time that was calculated using the RUC study. I was engaged as the statistical expert to help defend Dr. Kachare, and in the end, he was acquitted of all charges.

In this case, we had a physician who, for the most part, reported about 10 codes, mostly visit codes, associated with a handful of diagnostic codes. In essence, here was a physician who had been doing the same thing for the same type of patient with the same problems for 30 years, and as a result had achieved a level of superb efficiency. His outcomes were excellent, and the only one complaining about his work was the government. It's a fine line to walk, promoting the use of time on one hand and then discounting its value on the other. But the facts are the facts, and it seems ridiculous to penalize someone for being efficient or for performing procedures for which the time assessments are clearly inaccurate.

The bottom line? Based on what I have read and seen, irrespective of how flawed and inaccurate the RUC time study is reported to be, it looks like it is here to stay. My advice to physicians is to take it seriously. I would include conducting time assessments as part of my overall compliance risk strategy. Remember, just because you are paranoid doesn’t mean they aren’t out to get you.

And that’s the world according to Frank.

About the Author

Frank Cohen is the director of analytics and business intelligence for DoctorsManagement, a Knoxville, Tenn.-based consulting firm. Mr. Cohen specializes in data mining, applied statistics, practice analytics, decision support, and process improvement.
