Keywords: Explainability, AI Safety, Multimodal
Need: Artificial intelligence (AI) systems hold great promise for transforming healthcare by improving diagnosis, treatment, and patient outcomes. To ensure the safe and effective implementation, at scale, of mature and approved AI systems used by clinicians in clinical pathways for direct care, rigorous system testing and service evaluation in the local technical environment are essential. A review of testing methodologies and considerations is underway.
This work would seek to complement that review by investigating different testing techniques, such as shadow mode, research versus pre-deployment development, system validation test suites, and A/B (canary) testing, in different scenarios, highlighting the considerations these approaches need to address, such as non-deterministic behaviour, unseen data, bias, and monitoring over time at scale.
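To illustrate one of the techniques named above, the sketch below shows a minimal shadow-mode setup: a candidate model runs alongside the production model on the same inputs, its predictions are logged but never acted upon, and agreement between the two is measured offline. The model functions and threshold values here are hypothetical stand-ins, not part of any NHS system.

```python
def production_model(x):
    """Stand-in for the approved model currently used in direct care."""
    return x >= 0.5

def candidate_model(x):
    """Stand-in for the new model being evaluated in shadow mode."""
    return x >= 0.45

def shadow_mode_run(inputs):
    """Serve production predictions; log candidate predictions for offline review."""
    log, served = [], []
    for x in inputs:
        prod = production_model(x)    # this result is acted upon
        shadow = candidate_model(x)   # this result is only recorded
        log.append({"input": x, "production": prod, "shadow": shadow})
        served.append(prod)           # clinicians only ever see the production output
    return served, log

def agreement_rate(log):
    """Fraction of cases where the shadow model agreed with production."""
    return sum(r["production"] == r["shadow"] for r in log) / len(log)
```

In a real deployment the logged disagreements, rather than raw accuracy, are often the most informative output, since they identify the cases where switching models would change clinical behaviour.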
The project will also place these techniques in the wider context of MLOps (Machine Learning Operations) and the essential practices it builds on, including established best practices from other software and technology disciplines, to ensure the safe deployment and monitoring of mature AI systems.
Being part of this work means you will work closely with the AI Deployment platform team, helping to inform future national policy and guidance on the system testing of AI technologies at scale in the NHS. This will involve engaging and working with the national AI policy and regulation team in DHSC and with regulators.
Current Knowledge/Examples & Possible Techniques/Approaches:
Related Previous Internship Projects: n/a (first year of this topic)
Enables Future Work: Supports wide deployment of AI applications in healthcare workflows
Outcome/Learning Objectives: A final report on AI research and deployment, to be presented to the HRA.
Datasets: Use of open datasets to enable the work to be widely shared and discussed.
Desired skill set: When applying, please highlight any experience with AI deployment, bias, ethics, testing, Python coding and software development (including any coding in the open), and any other data science experience you feel is relevant.