Unveiling Causal Drivers of Non-Communicable Diseases with Interpretable Models
If you have a question about this talk, please contact Pietro Lio.

In healthcare, where accurate and reliable decision-making is paramount, interpretability is essential. Traditional Machine Learning (ML) models have provided valuable insights but often lack transparency in their reasoning, limiting their effectiveness. The recent surge in ML techniques across medical fields such as radiology, cardiology, mental health, and pathology holds great promise. These techniques can improve diagnostic accuracy, enhance workflow efficiency, minimise medical errors, and ultimately improve public health outcomes. However, the “black-box” nature of many ML algorithms raises significant concerns about interpretability. The lack of transparency in these models’ decision-making processes often prevents clear explanations for their predictions, which undermines trust and hinders their integration into clinical practice. This issue has driven a growing movement towards interpretable models in healthcare, and away from traditional opaque approaches.

Probabilistic graphical models (PGMs), particularly Causal Bayesian Networks (CBNs), are emerging as front-runners among interpretable models for healthcare. CBNs offer a framework for representing causal relationships between variables, fostering a deeper understanding of the mechanisms influencing healthcare outcomes. Integrating domain knowledge and expert clinical insight enables CBNs to capture more accurate causal relationships between risk factors and health outcomes. The enriched model provides a more realistic understanding of healthcare phenomena: it goes beyond identifying correlations and unveils the underlying causal drivers. By prioritising interpretable models like CBNs, we empower healthcare professionals to make informed decisions and develop improved preventative strategies, ultimately leading to better patient outcomes.
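A CBN factorises each variable's distribution over its causal parents, so interventional quantities can be computed by marginalising over the remaining parents. A minimal sketch with a hypothetical two-parent network (Smoking → CVD ← Exercise); the structure and probabilities are illustrative only and are not taken from the talk:

```python
# Hypothetical toy CBN: smoking -> cvd <- exercise (illustrative numbers only).
# Each table is a conditional probability distribution for a binary variable.
p_exercise = {1: 0.5, 0: 0.5}                 # P(Exercise)
p_cvd = {                                      # P(CVD=1 | Smoking, Exercise)
    (1, 1): 0.25, (1, 0): 0.50,
    (0, 1): 0.05, (0, 0): 0.15,
}

def p_cvd_given_do_smoking(s):
    """P(CVD=1 | do(Smoking=s)): marginalise CVD's table over Exercise."""
    return sum(p_exercise[e] * p_cvd[(s, e)] for e in (0, 1))

risk_smoker = p_cvd_given_do_smoking(1)       # 0.5*0.25 + 0.5*0.50 = 0.375
risk_nonsmoker = p_cvd_given_do_smoking(0)    # 0.5*0.05 + 0.5*0.15 = 0.10
```

Because the edges are causal, the same tables answer the interventional question "what happens if we *set* smoking?" — exactly the kind of query a purely correlational model cannot support.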
Our research prioritises Non-Communicable Diseases (NCDs) such as diabetes and cardiovascular disease (CVD) because of their significant public health burden. These chronic illnesses are often preventable through lifestyle modification, which makes identifying the key modifiable risk factors essential. To achieve this, we conducted an extensive analysis utilising a range of structure learning algorithms, identifying causal pathways among potential risk factors affecting the progression of these diseases. Based on these pathways, we developed novel CBNs that represent the identified causal relationships. These CBNs offer valuable insights into the progression and prevention of NCDs, giving healthcare professionals a powerful tool to combat these diseases at their root cause.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
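The score-based flavour of structure learning mentioned above can be sketched in a few lines: count-fit each candidate edge set and keep the structure with the better BIC-penalised log-likelihood. The variable names, effect sizes, and synthetic data below are hypothetical, chosen only to make the idea concrete:

```python
import math
import random

random.seed(0)
N = 2000

# Hypothetical synthetic data: smoking raises CVD risk (illustrative only).
data = []
for _ in range(N):
    smoke = 1 if random.random() < 0.3 else 0
    cvd = 1 if random.random() < (0.5 if smoke else 0.1) else 0
    data.append({"smoking": smoke, "cvd": cvd})

def bic(child, parents, rows):
    """BIC score of one binary node given its parent set, from counts."""
    counts = {}
    for row in rows:
        key = tuple(row[p] for p in parents)
        c = counts.setdefault(key, [0, 0])
        c[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        n = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / n)
    n_params = 2 ** len(parents)   # one free probability per parent configuration
    return ll - 0.5 * n_params * math.log(len(rows))

# Compare two candidate structures: no edge vs. smoking -> cvd.
score_no_edge = bic("cvd", [], data) + bic("smoking", [], data)
score_edge = bic("cvd", ["smoking"], data) + bic("smoking", [], data)
print(score_edge > score_no_edge)   # the data should favour the edge
```

Real structure learning searches over many such candidate graphs (and must handle Markov-equivalent structures and expert constraints), but the score-and-compare step is the same.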