Keywords: Explainability, Fairness, Multi-modal
Need: This work follows on from a previous internship project, the final report from which is due in September.
Multimodal AI (MMAI) provides opportunities to improve performance and gain insights by modelling correlations and representations across data of different types. These approaches are powerful for the analysis of healthcare data, where the integration of data sources is key for gaining a holistic view of individual patients (personalised medicine) or for evaluating models across different patient profiles to ensure safe and ethical use (population health). However, MMAI presents unique challenges: deciding how best to incorporate and fuse information, maintaining an understanding of how data is processed (explainability), and ensuring bias is not amplified as a result.
The project would build on this previous work, continuing to explore how bias can be mitigated, or unintentionally amplified, through multimodal fusion models.
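The kind of comparison the project envisages can be sketched as follows. This is a minimal, illustrative example, not part of the brief: the synthetic data, the two "modalities", and the choice of a demographic-parity gap as the fairness metric are all assumptions. It contrasts a single-modality model with an early-fusion model, where the second modality partially encodes a sensitive attribute, so fusion can shift subgroup disparities.

```python
# Hypothetical sketch: does early fusion of a second modality change a
# subgroup disparity metric, relative to a single-modality baseline?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # sensitive attribute (illustrative)
x_tab = rng.normal(size=(n, 3))               # "tabular" modality, group-independent
x_txt = rng.normal(size=(n, 3)) + 0.8 * group[:, None]  # second modality leaks group
y = (x_tab[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # outcome independent of group

def selection_rate_gap(model, X):
    """Absolute difference in positive-prediction rates between the two
    groups (a demographic-parity gap)."""
    pred = model.predict(X)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Single-modality baseline vs. early-fusion (feature concatenation) model.
uni = LogisticRegression().fit(x_tab, y)
fused = LogisticRegression().fit(np.hstack([x_tab, x_txt]), y)

print("unimodal parity gap:", selection_rate_gap(uni, x_tab))
print("fused parity gap:   ", selection_rate_gap(fused, np.hstack([x_tab, x_txt])))
```

In a real study the same comparison would be run with clinical data (e.g. the MIMIC-IV pipeline mentioned below), richer fusion strategies, and a broader set of fairness metrics across patient subgroups.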
Current Knowledge/Examples & Possible Techniques/Approaches:
Related Previous Internship Projects: https://nhsx.github.io/nhsx-internship-projects/advances-modalities-explainability/
Enables Future Work: Feeds into policy around AI safety as a demonstration of best practice, and enables our future work to take a unified approach to explainability.
Outcome/Learning Objectives:
Datasets: Previous work built a pipeline from MIMIC-IV, but INSPECT would also be of interest.
Desired skill set: When applying, please highlight any experience around explainability techniques, fairness, clinical data, machine learning, coding (including any coding in the open), and any other data science experience you feel is relevant.