Keywords: MachineLearning, Decision Support, Tabular
Need: In several emerging use cases, the performance of AI (Artificial Intelligence) applications on narrow tasks can rival that of human domain experts on many examples, yet these applications still produce unexpected errors in areas where humans perform very well. Further, many of these models do not inherently produce outputs that map clearly to uncertainty as humans understand it, nor can they identify when they are presented with out-of-domain examples that should be deferred to a different process.
This project will explore areas and techniques where human-algorithm interaction and collaboration can improve performance and robustness, as well as settings where a model is used to improve efficiency or augment decision-making from the point of view of an arbitrator.
The main aim is to understand how to identify and explain the impact the arbitrator has on model tuning and performance, including building a shared understanding of uncertainty and the ability to fully audit the outputs produced.
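As a minimal illustration of the kind of human-algorithm hand-off described above, the sketch below defers low-confidence predictions to an arbitrator. The threshold value and the use of top softmax probability as an uncertainty proxy are illustrative assumptions only (softmax confidence is often poorly calibrated, which is part of what a project like this would examine), not methods specified by the project brief.

```python
import numpy as np

def predict_or_defer(probs, threshold=0.8):
    """For each per-class probability vector, return the model's
    prediction when its top probability clears the threshold;
    otherwise defer the example to a human arbitrator.

    Note: the 0.8 threshold and raw softmax confidence are
    hypothetical choices for illustration.
    """
    decisions = []
    for p in probs:
        if p.max() >= threshold:
            decisions.append(("model", int(p.argmax())))
        else:
            decisions.append(("defer", None))
    return decisions

# Three probability vectors from a hypothetical 3-class classifier.
probs = np.array([
    [0.95, 0.03, 0.02],  # confident -> model decides
    [0.40, 0.35, 0.25],  # ambiguous -> defer to arbitrator
    [0.10, 0.85, 0.05],  # confident -> model decides
])
print(predict_or_defer(probs))
```

In a real deployment the deferral rule would be learned or calibrated rather than fixed, and every deferred and automated decision would be logged to support the auditability goal above.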
Current Knowledge/Examples & Possible Techniques/Approaches:
Related Previous Internship Projects: n/a (first year of this topic)
Enables Future Work: Supports wide deployment of AI applications in healthcare workflows
Outcome/Learning Objectives: Build a better understanding of the frameworks and techniques available to build collaborative and transparent systems
Datasets: Open healthcare datasets that support the exploration of human-algorithm interactions
Desired skill set: When applying, please highlight any experience with human-computer interaction, agent-based settings, AI ethics, model explainability, Python coding and software development (including any coding in the open), and any other data science experience you feel is relevant.