Medical AI has a safety problem: we know our testing isn't reliable. We've seen how this plays out before.
For the first time, AI systems can directly harm patients. Are we doing enough to prevent a medical AI tragedy, the equivalent of a thalidomide event?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
Medical data is horrible to work with, but deep learning can quickly and efficiently solve many of its problems.
Our team has post-doc and PhD positions available, so come to unexpectedly great Adelaide!
The first doctors to feel the effects of automation might not be radiologists at all. They might be surgeons.
Since the CheXNet paper came out in November 2017, I have been communicating with the author team. I'm finally ready to review the paper. Some of the things I found out surprised me.
I just wanted to do a quick follow-up to my recent blog post, which discussed the performance metrics I think might be appropriate for use in medical AI studies. One thing I didn't cover was the reason we might want to use multiple metrics, or the philosophy behind choosing the ones I did. So today, …
2017 was cool. Medical AI progressed apace, the AI community grew up some and got a bit creative, and I made some predictions that mostly held up to vague scrutiny.
A couple of weeks ago, I mentioned I had some concerns about the ChestXray14 dataset. I said I would come back when I had more information, and since then I have been digging into the data. I've also talked with Dr Summers via email a few times. Unfortunately, this exploration has only increased my concerns about the dataset.