Multimodal AI for Radiology Applications

If you have a question about this talk, please contact women-at-CL-admin.

Radiology reporting is a complex task requiring detailed medical image understanding and precise language generation, for which generative multimodal models offer a promising solution. However, to impact clinical practice, models must achieve a high level of both verifiable performance and utility. We augment the utility of automated report generation by incorporating localisation of individual findings on the image – a task we call grounded report generation – and enhance performance by incorporating realistic reporting context as inputs. We design a novel evaluation framework (RadFact) leveraging the logical inference capabilities of large language models (LLMs) to quantify report correctness and completeness at the level of individual sentences, while supporting the new task of grounded reporting. We develop MAIRA-2, a large radiology-specific multimodal model designed to generate chest X-ray reports with and without grounding. MAIRA-2 achieves state-of-the-art performance on existing report generation benchmarks and establishes the novel task of grounded report generation.

This talk is part of the Women@CL Events series.
