
New research from Independence Blue Cross, Massachusetts Institute of Technology, and the University of California, Berkeley shows a growing need to address algorithmic bias in the healthcare industry

Article in February issue of Health Affairs offers a guide to identify and deal with bias through data analytics

As more and more health insurers use machine learning for business and clinical decision making, there are growing concerns about equity, fairness, and bias. New research published this month in Health Affairs from Independence Blue Cross (Independence), Massachusetts Institute of Technology (MIT), and the University of California, Berkeley presents a guide that highlights where algorithmic bias can arise and how to increase fairness through analytics.

The article outlines how health insurers use predictive modeling to identify members with complex health needs for interventions and outreach. It examines three common models used to prioritize care management: disease onset, likelihood of hospitalization, and medication adherence. The authors also lay out several ways to check predictive models and business practices for bias, along with protocols for addressing it.
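One common form of bias check along the lines the article describes is comparing a model's error rates across demographic groups. The sketch below is a minimal illustration of that idea, not the authors' method; the groups, predictions, and metric choice (false negative rate, i.e., members the model failed to flag) are all hypothetical.

```python
# Hypothetical sketch: compare a risk model's false negative rate by group.
# A gap between groups would suggest the model under-identifies one group
# for care management outreach.

def false_negative_rate(y_true, y_pred):
    """Share of actual positives the model missed."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

def audit_by_group(records):
    """records: iterable of (group, y_true, y_pred). Returns FNR per group."""
    by_group = {}
    for group, t, p in records:
        truths, preds = by_group.setdefault(group, ([], []))
        truths.append(t)
        preds.append(p)
    return {g: false_negative_rate(t, p) for g, (t, p) in by_group.items()}

# Toy hospitalization-risk predictions for two illustrative groups
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = audit_by_group(records)
# Group A misses 1 of 3 at-risk members; group B misses 2 of 3.
```

In practice an audit like this would run on held-out data with real demographic attributes, and a persistent gap would prompt review of the model, its training data, and the business process it feeds.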

The authors assert that as an industry, health insurers share responsibility for assessing and reducing bias. A tactic proposed in the article is increased industry vigilance, also known as “algorithmovigilance.” This refers to scientific methods and activities relating to evaluating, monitoring, understanding, and preventing adverse effects of algorithms in health care.

“Independence takes our responsibility to build equitable machine learning models and to reduce bias in our operations very seriously. As an industry we have to be on alert that the solutions we’re putting into action to improve health outcomes don’t increase racial disparities. We have to use technology to help solve those challenges, not perpetuate them or worse, increase them,” said Mike Vennera, senior vice president and chief information officer at Independence and a co-author.

“Auditing and addressing algorithmic bias is a technical problem as much as it is a societal problem. If we want to make a real-world impact with our work, it is crucial for academics and health insurers to work closely together,” said Irene Chen, PhD student at MIT and one of the paper’s co-authors. The other co-authors are:

  • Ravi Chawla, chief analytics officer, Independence Blue Cross
  • Stephanie Gervasi, manager, Health Informatics & Advanced Analytics, Independence Blue Cross
  • Ziad Obermeyer, Blue Cross of California Distinguished Associate Professor, School of Public Health, University of California, Berkeley
  • Aaron Smith-McLallen, director, Health Informatics & Advanced Analytics, Independence Blue Cross
  • David Sontag, Associate Professor, Massachusetts Institute of Technology

Media contact:
Ruth Stoolman
215-667-9537
Ruth.Stoolman@ibx.com