I discuss a piece of medical AI research that has not received much attention but actually ran a proper clinical trial!
Forget about interpretability, don't share your code or data, and remember: AI is magic.
My first impressions of these datasets. How do they measure up, and how useful might they be?
Medical AI has a safety problem; we know for a fact our testing isn't reliable. We've seen how this plays out before.
For the first time ever, AI systems can directly harm patients. Are we doing enough to prevent a medical AI tragedy, the equivalent of a thalidomide event?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
Medical data is horrible to work with, but deep learning can quickly and efficiently solve many of these problems.
Our team has post-doc and PhD positions available, so come to unexpectedly great Adelaide!
The first doctors to feel the effects of automation might not be radiologists at all. It might be surgeons.
Since the CheXNet paper came out in November 2017, I have been communicating with the author team. I'm finally ready to review the paper. Some of the things I found out surprised me.