Keywords: Explainability, Fairness, Multi-modal
Need: For the NHS to incorporate many of the current AI offerings into clinical and operational decision making, we need to have confidence in the model outputs. This requires a clear understanding of fairness in these models, covering both inherent bias in the data and induced bias created by the model architecture or data capture.
This project would seek to explore the available techniques that could be built into the deployment of different AI solutions in healthcare, using language and context specific to the NHS. It would also investigate robustness, failure modes, and the handling of edge cases, and how to discuss these in a clinical context.
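As one illustration of the kind of technique that could be built into deployment, the sketch below implements permutation feature importance, a simple model-agnostic explainability method: shuffling one feature and measuring the drop in accuracy indicates how much the model relies on it. The model and data here are hypothetical toy examples, not an NHS use case.

```python
# Minimal sketch of permutation feature importance (hypothetical model/data).
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    baseline = (model(X) == y).mean()  # accuracy with intact features
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Break the link between feature j and the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            scores.append((model(X_perm) == y).mean())
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy "model" that thresholds only on feature 0; feature 1 is ignored,
# so its importance should come out near zero.
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
print(permutation_importance(model, X, y))  # feature 0 high, feature 1 ~0
```

A per-feature importance like this is one way to open a conversation with clinicians about what a model is actually using, though it says nothing by itself about whether that reliance is clinically appropriate.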
The project would also look to demonstrate the learning by applying current tooling and frameworks to a small range of healthcare tasks, highlighting the range of considerations required and showing how to discuss fairness and explainability across data modalities and use cases.
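As a hedged sketch of the fairness side of such a demonstration, the snippet below computes two common group-fairness metrics for a binary classifier with plain NumPy. The data, group labels, and function names are illustrative assumptions, not part of this project's specification.

```python
# Illustrative group-fairness metrics for a binary classifier (toy data).
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tpr.append(y_pred[positives].mean())
    return abs(tpr[0] - tpr[1])

# Toy predictions for two demographic groups (0 and 1).
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))          # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.5
```

The toy numbers show why more than one metric matters: the two groups receive positive predictions at the same rate (demographic parity holds), yet patients who should be flagged are missed more often in one group (equal opportunity is violated). Libraries such as Fairlearn and AIF360 package metrics like these for production use.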
Current Knowledge/Examples & Possible Techniques/Approaches:
Related Previous Internship Projects: N/A as first year of the project
Enables Future Work: Feeds into policy around AI safety as a demonstration of best practice, and gives our future work a unified approach to explainability.
Outcome/Learning Objectives: The main outcome is a report on the current state of the art, ideally paired with a starting point for a framework or tool suite that could be applied across different projects.
A focus on robustness, as well as on the limitations of available techniques (including edge cases and failure modes), would be of significant interest.
Datasets: Open-source datasets with appropriate modality for the techniques under study
Desired skill set: When applying, please highlight any experience with explainability techniques, fairness, clinical work, machine learning, coding (including any coding in the open), and any other data science experience you feel is relevant.