Forget about interpretability, don't share your code or data, and remember: AI is magic.
My first impressions of these datasets. How do they measure up, and how useful might they be?
Medical AI has a safety problem; we know for a fact our testing isn't reliable. We've seen how this plays out before.
For the first time ever, AI systems can directly harm patients. Are we doing enough to prevent a medical AI tragedy, the equivalent of a thalidomide event?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
Medical data is horrible to work with, but deep learning can quickly and efficiently solve many of these problems.
Our team has post-doc and PhD positions available, so come to unexpectedly great Adelaide!
The first doctors to feel the effects of automation might not be radiologists at all. They might be surgeons.
Since the CheXNet paper came out in November 2017, I have been communicating with the author team. I'm finally ready to review the paper. Some of the things I found out surprised me.
I just wanted to do a quick follow-up to my recent blog post, which discussed the performance metrics I think might be appropriate for use in medical AI studies. One thing I didn't cover was the reason we might want to use multiple metrics, or the philosophy behind choosing the ones I did. So today, … Continue reading: The philosophical argument for using ROC curves