Unveiling Causal Drivers of Non-Communicable Diseases with Interpretable Models
- 👤 Speaker: Sheresh Zahoor
- 📅 Date & Time: Friday 24 May 2024, 17:15 - 18:00
- 📍 Venue: Lecture Theatre 2, Computer Laboratory, William Gates Building
Abstract
In healthcare, where accurate and reliable decision-making is paramount, interpretability is essential. Traditional Machine Learning (ML) models have provided valuable insights but often lack transparency in their reasoning, limiting their effectiveness. The recent surge in ML techniques across medical fields such as radiology, cardiology, mental health, and pathology holds great promise. These techniques can improve diagnostic accuracy, enhance workflow efficiency, minimise medical errors, and ultimately improve public health outcomes. However, the “black-box” nature of many ML algorithms raises significant concerns about interpretability. The lack of transparency in these models’ decision-making processes often prevents clear explanations for their predictions, which undermines trust and hinders their integration into clinical practice. This issue has led to a growing movement towards interpretable models in healthcare, shifting away from traditional approaches.

Probabilistic graphical models (PGMs), particularly Causal Bayesian Networks (CBNs), are emerging as front-runners among interpretable models for healthcare. CBNs offer a framework for representing causal relationships between variables, fostering a deeper understanding of the mechanisms influencing healthcare outcomes. Integrating domain knowledge and expert clinical insights empowers CBNs to capture more accurate causal relationships between risk factors and health outcomes. This enriched model provides a more realistic understanding of healthcare phenomena, as it goes beyond simply identifying correlations and unveils the underlying causal drivers. By prioritising interpretable models like CBNs, we empower healthcare professionals to make informed decisions and develop improved preventative strategies, ultimately leading to superior patient outcomes.
Our research prioritises Non-Communicable Diseases (NCDs) like diabetes and cardiovascular diseases (CVD) due to their significant public health burden. These chronic illnesses are often preventable through lifestyle modifications, highlighting the importance of identifying key modifiable risk factors. To achieve this, we conducted an extensive analysis utilising various structural learning algorithms. This analysis helped us identify causal pathways among potential risk factors affecting the progression of these diseases. Based on these pathways, we developed novel CBNs that represent the identified causal relationships. These CBNs offer valuable insights into the progression and prevention of NCDs, empowering healthcare professionals with a powerful tool to combat these diseases at their root cause.
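To make the idea concrete for readers unfamiliar with CBNs, here is a minimal sketch of how such a network factorises a joint distribution along its causal edges. The three-node chain (Lifestyle → Diabetes → CVD) and all probabilities below are invented for illustration and are not results from this talk.

```python
# Toy Causal Bayesian Network: Lifestyle -> Diabetes -> CVD.
# All probability values are hypothetical, chosen only to illustrate
# how the network's factorisation is used for inference.

# P(lifestyle = "healthy"); lifestyle is a root node (no parents).
P_HEALTHY = 0.6

# Conditional probability table: P(diabetes = True | lifestyle)
P_DIABETES = {"healthy": 0.08, "unhealthy": 0.25}

# Conditional probability table: P(cvd = True | diabetes)
P_CVD = {True: 0.40, False: 0.10}


def marginal_cvd() -> float:
    """P(CVD) by summing the joint, factorised along the graph:
    P(L) * P(D | L) * P(CVD | D)."""
    total = 0.0
    for lifestyle, p_l in (("healthy", P_HEALTHY), ("unhealthy", 1 - P_HEALTHY)):
        for diabetic in (True, False):
            p_d = P_DIABETES[lifestyle] if diabetic else 1 - P_DIABETES[lifestyle]
            total += p_l * p_d * P_CVD[diabetic]
    return total


def cvd_after_intervention(lifestyle: str) -> float:
    """P(CVD | do(lifestyle)): an intervention fixes the node's value.
    Since lifestyle is a root node here, do() coincides with ordinary
    conditioning; for nodes with parents, do() would also cut incoming edges."""
    p_d = P_DIABETES[lifestyle]
    return p_d * P_CVD[True] + (1 - p_d) * P_CVD[False]


print(round(marginal_cvd(), 4))                      # prints 0.1444
print(round(cvd_after_intervention("healthy"), 4))   # prints 0.124
```

In this toy model, enforcing a healthy lifestyle lowers the CVD marginal from 0.1444 to 0.124; the structural learning algorithms mentioned in the abstract are concerned with discovering the graph itself from data, after which the conditional probability tables are estimated.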
Series
This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
Included in Lists
- All Talks (aka the CURE list)
- Artificial Intelligence Research Group Talks (Computer Laboratory)
- bld31
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge talks
- Chris Davis' list
- Department of Computer Science and Technology talks and seminars
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- Lecture Theatre 2, Computer Laboratory, William Gates Building
- Martin's interesting talks
- ndk22's list
- ob366-ai4er
- PhD related
- rp587
- School of Technology
- Speech Seminars
- Trust & Technology Initiative - interesting events
- yk373's list
- yk449
Note: Ex-directory lists are not shown.