AI ‘can’t fly solo,’ needs humanising context for use in healthcare, data expert emphasises
Dina Katabi, principal investigator for AI and health at the MIT Jameel Clinic, is at the forefront of radio wave and AI-assisted medical device research and development. Dina’s research involves generating health data from passive monitoring devices to track indicators such as respiratory rate and gait for individuals with Parkinson’s disease.
As AI becomes more prevalent in healthcare and questions grow around the accuracy and ethics of the data used to train AI models, Dina responds to concerns about the future of AI, saying, "AI left unmanaged by humans is less of a threat to take over the world and more akin to a freezer for which the door has been left open so that everything inside melts." Dina urges increased adoption of such technology, as population-level data will prove invaluable for future research and AI capabilities.
A researcher who builds passive devices that use radio waves and artificial intelligence believes that if more people adopted such technology, it could provide invaluable population-level data on diseases such as Alzheimer’s and Parkinson’s.
The potential for AI to help in early screening for those debilitating conditions has spurred much recent research, but it has also raised just as many questions and concerns about whether the underlying data are accurate.
AI can indeed lead to issues such as denying care to patients or generating false positives. But part of the issue is a misunderstanding of AI as operating separately from the context of humans — both those who design and those who depend on the tech, researcher Dina Katabi, PhD, argued in a recent story on AI’s limitations.
Katabi’s research has involved collecting data from passive monitoring devices, using AI to compare a user’s current measurements, such as gait or respiratory rate, against a learned personal baseline.
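The baseline-comparison idea can be illustrated with a minimal sketch. This is not Katabi's actual method, and the readings and threshold below are hypothetical: the sketch simply learns a per-user norm from historical readings and flags new readings that deviate strongly from it, a basic z-score test.

```python
import statistics

def learn_baseline(readings):
    """Learn a per-user norm (mean and spread) from historical readings."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading that deviates from the learned norm by more than
    `threshold` standard deviations (a simple z-score test)."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Hypothetical nightly respiratory-rate readings (breaths per minute).
history = [14.2, 13.8, 14.5, 14.0, 13.9, 14.3, 14.1]
baseline = learn_baseline(history)

print(is_anomalous(14.2, baseline))  # within the learned norm -> False
print(is_anomalous(22.0, baseline))  # sharp deviation -> True
```

A real system would be far more sophisticated (learned models over many signals rather than a single threshold), but the core contrast between a personal norm and a current measurement is the same.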