Medical AI can detect the racial identity of patients from x-rays. This is extremely concerning, and raises urgent questions about how we test medical AI systems.
For those who don't know, I was on parental leave for almost all of 2020. Not only did this mean almost no research time, but it meant no blogging time as well. Of course, there were plenty of benefits! 😍👶😍👶😍👶😍👶😍👶😍👶😍👶😍👶😍 I still got 5 blog posts done this year, which I'm pretty amazed about given … Continue reading "2020 and 2021 on the blog, for some reason with more emojis than strictly necessary"
The way we currently report human performance systematically underestimates it, making AI look better than it is.
CMS will reimburse an AI stroke detection model through Medicare/Medicaid. It is so darn complicated that it deserves a much deeper look.
AI is finally getting paid, apparently at a rate of $1000 per patient. What?
Reports that CT scanning may be better than PCR testing for covid-19 are flawed and almost certainly wrong.
We need AI-trained radiologists in every practice. Become one by applying for the first Clinical AI Fellowship in Australia!
Super-resolution promises to be one of the most impactful medical imaging AI technologies, but only if it is safe.
This week we saw the FDA approve the first MRI super-resolution product, from the same company that received approval for a similar PET product last year. This news seems as good a reason as any to talk about the safety concerns that I and many other people have with these systems.
Medical AI testing is unsafe, but addressing hidden stratification may be a way to prevent harm without upending the current regulatory environment.
AI competitions are fun: they build community, scout talent, promote brands, and grab attention. But competitions are not intended to develop useful models.