
Suchi Saria, CEO of Bayesian Health and Associate Professor of Medicine, Johns Hopkins University
It’s hard to get away from the topic of large language models, ChatGPT and, more broadly, artificial intelligence in healthcare. It’s all over the news, on social media, in the conferences we attend (including MedCity’s own INVEST conference that concluded earlier this week in Chicago) and even in the pitches I get from our healthcare content contributors.
Yet the fear about AI is real. And I don’t mean Ex Machina-style doomsday scenarios where AI becomes sentient and takes over the human world. The more rational fear is its authoritative tone, its ability to present even false information as if it were true (think deep fakes), not to mention concerns about algorithms being leveraged to deny care.
In response to the awesome power this new technology wields, which some believe will prove as pivotal as the industrial revolution, there is a growing recognition that standards need to be developed. Not surprisingly, global agencies and companies, along with the White House, have taken up the charge of setting forth guidelines for responsible AI. On this episode of the Pivot podcast, I spoke with Suchi Saria, associate professor of medicine at Johns Hopkins University and director of its Machine Learning and Healthcare Lab. She is also CEO of Bayesian Health. Saria has spent a lot of time researching this topic of responsible AI and how to develop a framework for its adoption in healthcare.