Medical AI can detect the racial identity of patients from x-rays. This is extremely concerning, and raises urgent questions about how we test medical AI systems.
Reports that CT scanning may be better than PCR testing for covid-19 are flawed and almost certainly wrong.
Super-resolution promises to be one of the most impactful medical imaging AI technologies, but only if it is safe.
This week we saw the FDA approve the first MRI super-resolution product, from the same company that received approval for a similar PET product last year. This news seems as good a reason as any to talk about the safety concerns that I and many others have with these systems.
AI competitions are fun. They build communities, scout talent, promote brands, and grab attention. But competitions are not intended to develop useful models.
Medical AI has a safety problem; we know for a fact that our testing isn't reliable. We've seen how this plays out before.
2017 was cool. Medical AI progressed apace, the AI community grew up some and got a bit creative, and I made some predictions that mostly held up to vague scrutiny.