If you are working in artificial intelligence or medicine, I'd like to plead my case to you. I'd just like to pass along a note.
The current "responsible" safety stance is that we should not have AI agents dispense healthcare advice as if they had the knowledge of a doctor. I think this is safetyism, and it robs sick people of their own agency.
I have very complicated healthcare needs and have experienced the range of ways human doctors fail. The failure case is almost always the presumption that you will fall within the median.
Now for most people this is obviously true: they are most likely to be the average case. And we should all be concerned that people without basic numeracy may misinterpret a risk. Whether it's our collective responsibility to set limits to protect regular people is not a solved problem.
But what about the complex, informed patient who knows they are not average? The real outliers. Giving them access to more granular data lets them accelerate their own care.
Paternalism is a persistent issue in medicine: the assumption that the doctor knows best, and the presumption that the patient is either stupid, lying, or hysterical, is the norm. It's also somewhat gendered, in my experience.
I now regularly work with my doctors using an LLM precisely so we can avoid these failure cases where I am treated as an average statistic in a guessing game. I'm a patient, not a customer, after all. I decide my best interest.
A strict regulatory framework restricts access without solving any of the wider problems of access to care for those outside the norms. Artificial intelligence has the capacity to save lives and improve quality of life for countless difficult patients. It's a social good, and probably a financial one too.