Medical AI testing is unsafe, but addressing hidden stratification may be a way to prevent harm without upending the current regulatory environment.
I discuss a piece of medical AI research that has not received much attention, but actually did a proper clinical trial!
For the first time ever, AI systems can directly harm patients. Are we doing enough to prevent a medical AI tragedy, the equivalent of a thalidomide event?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
I just wanted to do a quick follow-up to my recent blog post, which discussed the performance metrics I think might be appropriate for use in medical AI studies. One thing I didn't cover was why we might want to use multiple metrics, or the philosophy behind choosing the ones I did. So today, … Continue reading The philosophical argument for using ROC curves
2017 was cool. Medical AI progressed apace, the AI community grew up some and got a bit creative, and I made some predictions that mostly held up to vague scrutiny.
Today I want to look at two papers which tell us something very useful about medical AI, particularly if we are trying to predict the future of medicine.
In a recent blog post I explored how to critically read medical artificial intelligence research, focusing on the relevance of these experiments to clinical practice. It has since struck me that we don't have a simple, clear way to discuss the idea that some studies are still a long way off use in the clinic, while others … Continue reading The three phases of medical AI trials