In this video, Stanford University’s Professor of Biomedical Informatics, Dr. Nigam Shah, discusses the use of machine learning and medical ontologies in healthcare, focusing on the necessity of thinking beyond just the models themselves. He emphasizes the importance of considering policy, capacity, and potential harms associated with interventions in order to ensure fair, reliable, and useful application of AI in healthcare.
Shah explores real-world examples of AI from Stanford Healthcare, demonstrating the necessity of assessing risk, determining the threshold for action, and discussing treatment options based on data. He illustrates the journey from raw data to models that can inform treatment decisions, and touches on obstacles posed by missing data or partial patient histories.
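The idea of a threshold for action can be made concrete with a minimal sketch. The risk scores, threshold value, and patient identifiers below are illustrative assumptions, not figures from the talk:

```python
# Hypothetical example: converting a model's predicted risk into a decision.
# The 0.2 action threshold and the patient risk scores are assumptions.

def flag_for_intervention(risk: float, threshold: float = 0.2) -> bool:
    """Recommend an intervention when predicted risk meets or exceeds the threshold."""
    return risk >= threshold

# Predicted risks for three hypothetical patients.
patients = {"A": 0.35, "B": 0.10, "C": 0.22}
flagged = [pid for pid, risk in patients.items() if flag_for_intervention(risk)]
# flagged -> ["A", "C"]
```

In practice, choosing the threshold is itself a policy decision: it trades off missed cases against unnecessary interventions, which is part of why Shah argues for thinking beyond the model itself.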
On the subject of fairness, Shah stresses the need to ensure that predictive models behave the same way for all patients, as well as the need for alignment of values among all stakeholders in order to deliver equitable care.
Shah also discusses reliability, arguing that it is not enough to be able to explain a model's inner mechanics (engineering interpretability). A model must also capture the causal relationships behind its outputs (causal interpretability) and inspire trust in its end-users.
Furthermore, Shah covers the concept of usefulness under capacity constraints: because resources are limited, not all patients flagged by a model can receive the intended intervention. He suggests assessing models on their ability to improve care under these constraints, and outlines the necessity of handling missing data correctly.
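One common way to respect a capacity constraint is to rank patients by predicted risk and treat only the top k, rather than everyone above a fixed threshold. This is a minimal sketch of that idea; the risk scores and capacity of two interventions are assumptions for illustration:

```python
# Hypothetical sketch: with capacity for only `capacity` interventions,
# select the highest-risk patients instead of all patients above a threshold.

def select_under_capacity(risks: dict[str, float], capacity: int) -> list[str]:
    """Return the IDs of the `capacity` highest-risk patients."""
    ranked = sorted(risks, key=risks.get, reverse=True)  # descending by risk
    return ranked[:capacity]

risks = {"A": 0.35, "B": 0.10, "C": 0.22, "D": 0.41}
chosen = select_under_capacity(risks, capacity=2)
# chosen -> ["D", "A"]: the two highest-risk patients receive the intervention
```

Under this scheme, a model's usefulness depends on how well it ranks patients near the capacity cutoff, not just on its overall accuracy, which aligns with Shah's point about evaluating models by their ability to improve care.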
Finally, Shah highlights that the relevance and generalizability of a model may differ across healthcare settings and populations, since a model's behavior can be context-specific.
You can watch the video here.