News Item

Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms | WIRED

Mental and physical health are crucial contributors to living happy and fulfilled lives. How we feel impacts the work we perform, the social relationships we forge, and the care we provide for our loved ones. Because the stakes are so high, we often turn to technology to help keep our communities safe. Artificial intelligence is one of the big hopes, and many companies are investing heavily in tech to serve growing health needs across the world. Many promising examples exist: AI can be used to detect cancer, triage patients, and make treatment recommendations. One goal is to use AI to increase access to high-quality health care, especially in places and for people who have historically been shut out.

Yet racially biased medical devices, for example, caused delayed treatment for darker-skinned patients during the Covid-19 pandemic because pulse oximeters overestimated their blood oxygen levels. Similarly, lung and skin cancer detection technologies are known to be less accurate for darker-skinned people, meaning they more frequently fail to flag cancers, delaying access to life-saving care. Patient triage systems regularly underestimate the need for care in ethnic-minority patients. One such system was shown to underestimate the severity of illness in Black patients because it used health care costs as a proxy for illness while failing to account for unequal access to care, and thus unequal costs, across the population. The same bias can be observed along gender lines: female patients with heart disease are disproportionately misdiagnosed and receive insufficient or incorrect treatment.

Source: Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms | WIRED