News Item

AI’s coronavirus test

Everybody celebrates the hope and promise of AI in the pandemic; google “Covid and AI” and you get something like 2.7 million hits.

But it’s easy to gloss over the hype and the peril. If AI is built on data scaffolding that is tainted, flawed or distorted by systematic bias — or just the wrong starting points — those inequities will be baked in and perpetuated.

These topics are on our minds ahead of POLITICO’s annual AI summit next week. The timing is apt: we so desperately need to make sense of things, we so desperately need scientific breakthroughs, and we so desperately need tools that will address inequities and foster trust.

“Good intentions can go wrong,” UC Berkeley’s Ziad Obermeyer, who researches the intersection of machine learning, medicine and health policy, told a recent ABIM Foundation conference on health care and trust. Obermeyer, who will be a panelist at our summit, cited an algorithm that conflated health care costs with health care needs — making it appear as if larger sums spent on or by white patients meant they needed more care or were in worse health.
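
To make the proxy problem concrete, here is a minimal, hypothetical sketch (plain NumPy; the group labels, the spending gap, and the 10 percent referral cutoff are invented for illustration and are not taken from the algorithm Obermeyer describes). It shows how ranking equally sick patients by predicted cost under-refers whichever group has less money spent on its care.

```python
# Toy simulation of the "cost as a proxy for need" problem.
# Illustrative only: groups, spending gap, and cutoff are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical distributions of true health need --
# by construction, they are equally sick.
group = rng.choice(["A", "B"], size=n)   # A: better-resourced, B: under-resourced
true_need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observed spending tracks need, but less is spent per unit of need on group B.
spend_per_need = np.where(group == "A", 1.0, 0.7)
cost = true_need * spend_per_need + rng.normal(0.0, 0.2, size=n)

# A score trained to predict cost effectively ranks patients by cost;
# refer the top 10% of that ranking to an extra-care program.
referred = cost >= np.quantile(cost, 0.90)

for g in ["A", "B"]:
    m = group == g
    print(f"group {g}: mean true need {true_need[m].mean():.2f}, "
          f"referral rate {referred[m].mean():.1%}, "
          f"mean need among referred {true_need[m & referred].mean():.2f}")

# Typical output pattern: group B is referred far less often, and a group-B
# patient must be sicker than a group-A patient to clear the same cutoff --
# the inequity in spending gets baked into the "risk" score.
```

The same pattern appears if you actually fit a regression on cost: the label, not the model, carries the bias.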

[…]

Source: AI’s coronavirus test – POLITICO