Why the Scoring Mechanisms of Medical Decision-Making Are Flawed

The scoring mechanisms of the MDM are suggested tools, not rules or laws.

In our last article, we explored how time, when used in conjunction with medical decision-making (MDM), must support the same level of service, and why that rule makes sense. However, we were left with a puzzling consideration regarding MDM, and here is why: the scoring mechanisms of MDM were created through the Marshfield Clinic Guidelines (MCG). So let’s consider what relevance MCG has to our current audit process.

We should start at the beginning. Marshfield Clinic was a large multi-specialty group in Wisconsin with many locations. During the creation of the 1995 Documentation Guidelines, the group’s Medicare Administrative Contractor (MAC) approached it and asked if it would beta-test the guidelines. The clinic quickly identified that while the history and exam components included a quantification system, the MDM had no such scoring algorithm. Therefore, the clinic created its own internal MDM guidance.

This evolved into the MDM scoring process found in nearly every scoring or audit grid in the industry, and it has become our industry’s standard approach to reviewing MDM. Ironically, while most MACs include the MCG approach to scoring MDM in their interactive or static audit worksheets, the Centers for Medicare & Medicaid Services (CMS) has never acknowledged or validated the MCG scoring approach in its national guidance.

Whether you review documentation for time-based services or level the service on the documentation components, how does this newfound knowledge of MDM scoring impact your audits? Remember, these are guidelines, not rules, not laws, and they are not even “official” guidelines. Do a quick Internet search for CMS Evaluation and Management (E&M) Documentation Guidelines and you will find a tool that is essentially the 1995 and 1997 Documentation Guidelines wrapped in CMS logos, with no mention of MCG.

I’m not telling you to burn the guidelines and never use them again, but I am saying you need to understand that this scoring is just a suggested tool. Maybe I can better explain by reviewing some of the controversies that exist in MCG scoring, and how knowing this information may sway you on future audits and reviews.

Our first scoring step in the MDM is reviewing the diagnosis. The MCG offer an option for a self-limited or minor problem, while the other two options are a “new problem” or an “established problem to the provider today.” The golden rule of auditing says, “I am not allowed to assume or interpret.” So how could I reasonably use this option, when, as a non-clinician, I would be assuming or interpreting the documentation to assess that the problem is minor? This is subjective and completely open to interpretation. For example, an insect bite is a common example of this sort of problem, but with Lyme disease, Zika, West Nile virus, and the like, an insect bite is suddenly no longer merely a minor problem. Therefore, it may be best to adopt an audit policy instructing your coders and auditors not to use this option on the scoring grid. I have followed this technique in audits I have personally performed and do not feel that I have ever given more or fewer “points” than were due.
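
To make such a policy concrete, here is a minimal sketch of how an audit tool might score the diagnosis portion. The point values mirror the commonly circulated Marshfield grid, and the count_minor_problems flag (a name I am inventing for illustration) encodes the policy decision described above; none of this is official guidance.

```python
# A minimal sketch of diagnosis-point scoring under an internal audit policy.
# Point values reflect the commonly circulated Marshfield grid; the policy
# flag and all names here are illustrative, not official guidance.

POINTS = {
    "self_limited_or_minor": 1,       # most grids cap this option at 2 problems
    "established_stable": 1,
    "established_worsening": 2,
    "new_no_additional_workup": 3,
    "new_with_additional_workup": 4,
}

def score_diagnoses(problems, count_minor_problems=False):
    """Sum diagnosis points, optionally skipping the subjective
    'self-limited or minor' option per internal audit policy."""
    total = 0
    minor_count = 0
    for p in problems:
        if p == "self_limited_or_minor":
            if not count_minor_problems:
                continue  # policy: auditors do not assume a problem is minor
            minor_count += 1
            if minor_count > 2:
                continue  # cap the minor-problem option at two problems
        total += POINTS[p]
    return total

# Example: a new problem with a CT ordered, plus a stable chronic condition.
print(score_diagnoses(["new_with_additional_workup", "established_stable"]))  # 5
```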

Also within the diagnosis we have another issue that is truly a Pandora’s box: the new problem with additional workup. Most MACs that have published guidance state that additional workup is any work done beyond today’s encounter. This would mean that if my provider performed a chest x-ray in the office today, that would not be considered additional workup, but if the patient now needs a CT scan, that would represent additional workup. The rationale is that at the end of the encounter the provider still does not have a definitive answer to the patient’s problem, so additional workup was needed. However, one MAC (Palmetto) has published guidance stating that work done during today’s encounter is also considered additional workup. This rather blatant contradiction between MACs demonstrates the continuing complexities within the MDM.

Therefore, there is a quandary about what really constitutes additional workup. Before we can answer that, we have to take a sharp look at two letters: U and P. MCG specifically included the word “up” in their scoring statement for a new problem to the provider today. Here is the quandary this creates: if a provider orders a UA in the office and then decides to send the urine off for culture and sensitivity, this is considered additional workup. But if that same provider sees a new patient and decides that surgical intervention is indicated, that is additional work, yet many say you cannot count it because it is not additional workup. Before you jump on the side of those who say that is not additional workup, I want you to be the one to sit across the table from a physician who’s just trying to get an honest day’s pay for an honest day’s work and tell him that sending urine to the lab earns him more “points” than deciding surgical intervention was needed. I believe the fix here is removing those two letters, “up,” and merely stating “additional work.”

Consider this definition: the patient’s condition necessitates work by the provider beyond today’s encounter to accurately treat the problem (lab/x-ray), diagnose it (biopsy/diagnostic testing), identify it (consult to another provider), or resolve it (surgical intervention). This seems a more inclusive and accurate definition of “additional work,” and it clears up a lot of the gray area.
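
As a sketch of how that broader definition could be operationalized in an audit tool, the predicate below treats any order the provider cannot complete within today’s encounter, including surgical intervention, as additional work. The categories and names are my own illustration of the definition above, not an established rule.

```python
# Illustrative only: encodes the broader "additional work" definition
# proposed above, under which surgical intervention counts alongside
# labs, imaging, biopsies, and consults ordered beyond today's encounter.

FOLLOW_UP_CATEGORIES = {
    "lab",              # treat: lab/x-ray
    "imaging",
    "diagnostic_test",  # diagnose: biopsy/diagnostic testing
    "consult",          # identify: referral to another provider
    "surgery",          # resolve: surgical intervention
}

def is_additional_work(order_category, completed_today):
    """True when the ordered work extends beyond today's encounter."""
    return order_category in FOLLOW_UP_CATEGORIES and not completed_today

# In-office chest x-ray performed today: not additional work.
print(is_additional_work("imaging", completed_today=True))   # False
# Surgery scheduled for next week: additional work under this definition.
print(is_additional_work("surgery", completed_today=False))  # True
```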

The MCG seem fairly straightforward in distinguishing a new from an established problem to the provider: if the problem is new to the examiner, we score it at a higher point value, and if it is established, we score it at a lower point value. In healthcare today this is no longer so straightforward, as we have large groups that include extenders, in which a patient may see the physician at the first encounter but a non-physician provider (NPP) the next time. This style of clinic organization has led to an unstated “abuse” of the new-problem versus established-problem scoring.

Let’s find the actual rules (well, guidelines) to truly interpret the intent of this area of the encounter. Ironically, here the Documentation Guidelines (DG) offer a very direct approach. The DG do not define these as a new or established problem to the examiner today, but rather identify them as a presenting problem with an established diagnosis or without an established diagnosis. The DG never include the phrase “new to the provider” that MCG do. The wording of the DG makes the guidance clearer. For example, if you are seeing my patient for me today because I’m lying on a beach somewhere in the Caribbean, and we are the same specialty, under the same TIN, and you therefore have access to my full medical documentation, including an active treatment plan for the problem, then this is NOT a new problem to you. It is an established problem, and it should not be given the credit of a new problem for that provider. As a result of misunderstanding the MCG scoring of new versus established problems, NGS Medicare came out last year with a “clarification” on this point, and there was a lot of buzz that NGS was going rogue and changing its view of a new problem under the DG. No, it wasn’t. It was creating awareness of the misleading interpretation of MCG and referring you back to the DG, which clearly define how to consider these conditions.

That leads us to the data & complexity portion of the MDM. If you do any auditing or coding on a regular basis, you know this is the area of the MDM where the score tends to “drop,” usually because of a lack of documentation of work done during the encounter. Our focus here, though, is on the contradictions and concerns raised by MCG. In my opinion, MCG did provide a great scoring tool in this area, but confusion arises because some categories redundantly include the same element, sometimes even at a higher point value. For example, look at the MCG scoring grid for data & complexity. The fifth option down provides 1 point and states that we may count it when the history is obtained from someone other than the patient. But the sixth option, worth 2 points, also allows credit for obtaining history from someone other than the patient. How do I know which one to use? I have heard the same “opinions” and “interpretations” you have, given by consultants, educators, and conference speakers through the years. But again, those are not facts or rules; at the end of the day, they are interpretations of how best to apply a contradiction. Many say to use the 1-point option when someone is merely supplementing the history, and the 2-point option when the history is obtained fully from someone else. That sounds like a good idea, but then again, I don’t have a rule, a law, or a guideline to differentiate these definitively.
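
If your organization adopts the common interpretation described above, an internal policy can at least make the choice consistent. The sketch below treats the two options as mutually exclusive: 2 points when the history is obtained entirely from someone other than the patient, 1 point when it merely supplements the patient’s own history. This encodes one interpretation, not a rule, and the function and parameter names are mine.

```python
# One possible internal-policy resolution of the overlapping MCG options:
# the two choices are treated as mutually exclusive, never stacked.
# This encodes a common interpretation, not an official rule.

def history_source_points(history_from_other, patient_contributed):
    """Return data points for history obtained from someone other
    than the patient, per internal audit policy."""
    if not history_from_other:
        return 0
    if patient_contributed:
        return 1  # other source merely supplements the patient's history
    return 2      # history obtained entirely from another source

print(history_source_points(True, patient_contributed=True))   # 1
print(history_source_points(True, patient_contributed=False))  # 2
```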

The last point we will discuss today is the last option on the data & complexity grid, and to discuss it I am going to provide a side-by-side comparison of MCG guidance and DG for consideration:

MCG: Independent visualization of image, tracing or specimen itself (not simply review of report)


The way MCG word this element leads coders and auditors to give “points” in this area that are unwarranted. Many read it as saying that if my provider read the chest x-ray he performed in his clinic that day, then he gets this credit as well, but that couldn’t be further from the intent. Remember that the provider is also receiving professional-component reimbursement for the interpretation, so counting it here too could be construed as “double-dipping.”

Now, let’s review DG guidance on this same element:

DG: The direct visualization and independent interpretation of an image, tracing or specimen previously or subsequently interpreted by another physician should be documented.


That is not nearly as vague or ambiguous. It specifically indicates that if my provider lays eyes on the image/tracing/specimen and creates his own interpretation, even though it was already read by the radiologist, cardiologist, or pathologist, then my provider is given additional credit for that work. So referring back to the actual guidance certainly helps to better define this element.
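
Read through the DG lens, the credit turns on documented facts: the provider personally visualized the image/tracing/specimen and rendered an independent interpretation of a study read (before or after) by another physician, while not also billing the professional component. Here is a sketch of that rule as an audit check; the field names are invented for illustration.

```python
# Illustrative audit check for the "independent visualization" element,
# following the DG wording above. Field names are invented for the sketch.

def independent_visualization_credit(viewed_personally,
                                     documented_interpretation,
                                     interpreted_by_other_physician,
                                     bills_professional_component):
    """True only when the provider personally viewed and independently
    interpreted a study read by another physician, and is not already
    paid for the interpretation via the professional component."""
    if bills_professional_component:
        return False  # counting it here too would be double-dipping
    return (viewed_personally
            and documented_interpretation
            and interpreted_by_other_physician)

# Provider reviews the actual CT images already read by radiology: credit.
print(independent_visualization_credit(True, True, True, False))  # True
# Provider reads his own in-office chest x-ray and bills the
# professional component: no credit.
print(independent_visualization_credit(True, True, False, True))  # False
```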

Are you curious as to what prompted this article? On our own internal auditing team at the National Alliance of Medical Auditing Specialists (NAMAS), we have had, to put it nicely, a difference of opinion when it comes to considering surgical intervention as additional workup.

So I set out to research what the guidance truly is and how we should accurately score these encounters, and that is when all of the dominoes fell into place for me. I realized MCG is not “official” guidance, and there is room for interpretation of “up.” Therefore, there really isn’t a “rule” on this point. If you think surgery is additional work and should be counted, I could agree with you; but if you stand strong that MCG says “workup” and surgery is NOT workup, I cannot say you are right or wrong, either. What I do now know, however, is that the scoring of MDM is a reference guide, if you will, one which (just like the rest of the DG) includes gray areas. Our team, along with yours, will continue to use the MCG approach to scoring the MDM, as it is the industry standard, but for consistency’s sake you must have a policy to guide your team. You must lay out what your organization considers additional workup, and whether you are even using the minimal-problem scoring element. The key here is that the MCG are not guidelines that CMS formally recognizes, but the DG are. Use the DG to help guide your selections in MCG scoring of the MDM, and when you reach those areas with blatant potholes, fill them with internal policy and guidance for interpretation.
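
Since the fix this article lands on is internal policy, it helps to write that policy down in a form every auditor applies identically. Below is a minimal sketch of such a policy as a config object; every field name and default is an assumption for illustration, standing in for the decisions your own organization would document.

```python
# A minimal sketch of an internal MDM audit policy as a config object.
# Every field and default here is illustrative; your organization's
# actual policy decisions belong in a document applied uniformly.

from dataclasses import dataclass

@dataclass(frozen=True)
class MdmAuditPolicy:
    # May auditors score the subjective "self-limited or minor" option?
    count_minor_problems: bool = False
    # Does surgical intervention count as additional work(up)?
    surgery_is_additional_workup: bool = True
    # Does in-office work performed today count (the Palmetto reading)?
    same_day_work_counts: bool = False
    # Points when history from another source merely supplements the patient's.
    supplemental_history_points: int = 1

POLICY = MdmAuditPolicy()
print(POLICY)
```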

One more thought: there are many coders, auditors, and organizations who feel MDM should define the overall level of service. Really? I would be curious, after you have read this article, whether you still feel that assigning the level based on guidelines that are not CMS-approved is really the best approach, especially in light of what we do know from CMS: that medical necessity, not MDM, is the overarching criterion. In fact, Noridian even says medical necessity cannot be quantified by a points system alone, and that is exactly what MDM scoring is: a points system, and a flawed one at that. Just another thought to ponder in your compliance and auditing policy creation.

Shannon DeConda CPC, CPC-I, CEMC, CMSCS, CPMA®

Shannon DeConda is the founder and president of the National Alliance of Medical Auditing Specialists (NAMAS) as well as the president of coding and billing services and a partner at DoctorsManagement, LLC. Ms. DeConda has more than 16 years of experience as a multi-specialty auditor and coder. She has helped coders, medical chart auditors, and medical practices optimize business processes and maximize reimbursement by identifying lost revenue. Since founding NAMAS in 2007, Ms. DeConda has developed the NAMAS CPMA® Certification Training, written the NAMAS CPMA® Study Guide, and launched a wide variety of educational products and web-based educational tools to help coders, auditors, and medical providers improve their efficiencies. Shannon is a member of the RACmonitor editorial board and is a popular guest on Monitor Mondays.
