I discuss a piece of medical AI research that has not received much attention, but which actually ran a proper clinical trial!
My first impressions of these datasets. How do they measure up, and how useful might they be?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
Medical data is horrible to work with, but deep learning can quickly and efficiently solve many of its problems.
Our team has post-doc and PhD positions available, so come to unexpectedly great Adelaide!
Since the CheXNet paper came out in November 2017, I have been communicating with the author team. I'm finally ready to review the paper. Some of the things I found out surprised me.
A couple of weeks ago, I mentioned that I had some concerns about the ChestXray14 dataset. I said I would come back when I had more information, and since then I have been digging into the data. I've also talked with Dr Summers via email a few times. Unfortunately, this exploration has only increased my concerns about the dataset.
Deep learning research in medicine is a bit like the Wild West at the moment: sometimes you find gold, sometimes a giant steampunk spider-bot causes a ruckus. This has derailed my series on whether AI will be replacing doctors soon, as I have felt the need to focus a bit more on how to assess … Continue reading Do machines actually beat doctors? ROC curves and performance metrics
Today I want to look at two papers which tell us something very useful about medical AI, particularly if we are trying to predict the future of medicine.
Just a quick note: if you are in South Australia and interested in radiology or research, or even radiology research, feel free to contact me. I can answer any questions you have, or maybe even connect you with researchers who need help. And if anyone is willing to give me feedback on my teaching materials, … Continue reading Interested in radiology or research?