Can Algorithms Really Be Biased?

One should avoid passing judgment on those who create the algorithms.

I recently read a study in the journal Science that focused on racial biases found in popular algorithms used in population health. I would encourage you to read the entire article for a detailed understanding of the data, methods, analytics, findings, and conclusions, as this is not the forum for such a discussion.

In general, the study authors found that these population health algorithms, which are used to dole out healthcare services to individuals within certain populations, are biased against black patients. Their thesis is that these biases arise from health disparities that are conditional on a risk score calculated as part of the algorithm. The authors also opined that much of this occurs because the algorithm uses cost as its basis for predicting health needs. For example, the study found that across every level of algorithm-predicted risk, “blacks and whites have (roughly) the same costs the following year.” The issue, then, is that the data indicate that black patients were, in essence, sicker than the corresponding white patients at the same level of predicted risk.

Imagine, then, that you had two populations (population a and population b). Let’s say that population a contains patients who are significantly sicker than the patients in population b. One would expect that the costs for the patients in population a would also be significantly higher than those in population b, but in this study, such was not the case. One can argue whether the algorithm predicts outcomes based on the correct variable, but in my experience, besides avoiding catastrophic health crises (which, by the way, can be very costly), looking to lower the cost of healthcare is not an unreasonable goal. In fact, it is likely pursued by every payer, including Medicare and Medicaid, and hence the growing interest in population health programs.

In this study, however, the problem was an inequity between the two populations (blacks and whites), wherein the costs were the same, but the severity of illness was not. The authors suggest that a more suitable approach would be to predict some direct measure of health: for example, the number of active chronic health conditions, or a comorbidity score of the kind commonly used in medical research. In this way, the algorithm would predict the need for healthcare services based on severity of illness rather than cost, since accurately predicting cost, according to the authors, “necessarily means being racially biased on health.” Why is that? Again, according to the researchers, poor patients face more barriers to healthcare access, and socioeconomic status and race are closely correlated. In other words, the authors opine that black populations tend to be poorer, and therefore fall into the group with greater access challenges. These challenges include geography, transportation, competing demands from jobs and childcare, and even just basic knowledge that a certain health condition may require medical care.
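To make the authors’ point concrete, here is a minimal sketch of how the choice of prediction target changes what a risk model learns. The file, column names, and model are hypothetical stand-ins of my own; the actual algorithm in the study is proprietary.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical claims extract; the file and columns are illustrative only.
df = pd.read_csv("claims.csv")
X = df[["age", "num_prior_visits", "num_prescriptions"]]

# Target 1: next-year cost, the proxy the studied algorithm relied on.
# If access barriers suppress spending for equally sick patients, this
# model learns to assign those patients lower risk scores.
cost_model = LinearRegression().fit(X, df["next_year_cost"])

# Target 2: a direct health measure, such as a count of active chronic
# conditions, along the lines the authors suggest.
health_model = LinearRegression().fit(X, df["active_chronic_conditions"])
```

Same features, same model class; only the target differs, and with it, whom the algorithm flags for extra care. The bias enters through the target variable, not through anything the developer wrote in the modeling code.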

The authors also opined that some determinants depend on the racial concordance between patient and physician. They referenced another study that found black primary care providers were more aggressive about preventive care for black patients than white providers were. In essence, the authors are laying a foundation for bias among people, and then associating that bias with the human intelligence used to create these algorithms.

This type of algorithmic bias, as the authors duly point out, is not isolated to healthcare. Several years ago, I worked with a law enforcement agency to develop an algorithm that would predict when and where crimes (violent crimes, in particular) might occur. The results almost always identified neighborhoods with lower socioeconomic status, and because socioeconomic status is correlated with race, the algorithm would predict these events in neighborhoods with higher proportions of black and Hispanic residents. Now, there are many reasons that this type of algorithm might be biased, but they are not due to the algorithm, per se, but rather to the programming and the data that go into it.

For example, if, historically, the rate of violent crime was significantly higher in specific neighborhoods, then it would make sense that the algorithm would predict future events in those same neighborhoods. One reason suggested was that a weaker police presence in those neighborhoods meant less of a disincentive to commit violent crimes. This may very well be true, but it is an indictment of society and the data, not necessarily of those who create and develop the algorithms (or of the algorithms themselves).
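To illustrate that feedback dynamic, here is a toy simulation of my own; it is not the model we built for the agency, and the numbers are invented. Two neighborhoods have identical true rates of violent crime, but one starts with more patrols, and crimes enter the dataset mainly where patrols are present to record them.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

true_rate = np.array([10.0, 10.0])   # identical underlying crime rates
patrol_share = np.array([0.7, 0.3])  # but unequal historical policing

for year in range(10):
    # Observed counts scale with patrol presence, not just with crime.
    reports = rng.poisson(true_rate * patrol_share)
    # Next year's patrols go where this year's reports were highest,
    # so the recorded disparity feeds on itself.
    patrol_share = reports / max(reports.sum(), 1)
    print(f"year {year}: reports={reports}, patrol_share={patrol_share.round(2)}")
```

Even though the true rates never differ, the recorded data never reveal that: the allocation rule keeps sampling where it already sampled, so the historical disparity persists and can even run away. That is the sense in which the data, rather than the algorithm, carries the bias.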

The authors cite credit-scoring, hiring, and retail algorithms as further examples, all of which they claim are influenced by racial and gender biases. I can very well see how these findings should prompt discussion of how such biases could be checked and even eliminated, but I would caution against passing judgment on those who create the algorithms. In this case, for example, a reasonable approach to controlling costs may have produced a biased outcome, not because the developers are biased, nor because the data are biased, but because the system that produces the data driving these algorithms is biased.

And that’s the world according to Frank.


Frank Cohen

Frank Cohen is the director of analytics and business intelligence for DoctorsManagement, a Knoxville, Tenn.-based consulting firm. He specializes in data mining, applied statistics, practice analytics, decision support, and process improvement. He is a member of the RACmonitor editorial board and a popular contributor on Monitor Monday.
