Description | This project explores the factors that influence trust and reliance in AI-human collaborative decision-making. The study will involve developing a digital platform for collaborative decision-making, using group travel planning as a case study. The platform will incorporate multiple interaction modalities, including Large Language Model (LLM)-based interaction, together with explainable AI (XAI) techniques, to identify strategies that enhance trust, appropriate reliance, and decision quality in AI-assisted collaboration. By comparing different XAI approaches and interaction paradigms, particularly LLM-based interaction, the research seeks to develop guidelines for designing trustworthy AI systems for collaborative decision-making across a range of domains.
Methodology
1. Literature Review: Examine current research on trust in AI, explainable AI, LLMs, and computer-supported cooperative work.
2. Platform Development: Create a collaborative decision-making platform, using group travel planning as a context, incorporating LLM-based interactions.
3. User Studies: Design and conduct experiments to evaluate trust, reliance, and decision quality across different interaction paradigms.
4. Data Analysis: Employ mixed-methods analysis to assess the effectiveness of different XAI and interaction approaches, including LLM-based interactions.
5. Guideline Development: Synthesize findings to create design guidelines for trustworthy AI collaborators. |
Preparation | You are not expected to know anything specific before you start your research; however, if you are interested, these are helpful skills and areas to investigate.
• AI Modelling: Develop and implement AI models with various XAI techniques, including LLM-based explanations.
• Data Science: Collect and analyze user study data to derive insights on trust and collaboration patterns.
• Human-Computer Interaction: Design intuitive interfaces and multiple interaction modalities, with a focus on LLM-based interactions.
• Computer-Supported Cooperative Work: Investigate different collaboration models between users and AI, including LLM-mediated collaboration.
• Evaluation Methods: Develop metrics and methods for measuring trust, reliance, and decision quality in AI-human collaboration. |
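As a concrete starting point for the evaluation-methods skill above, one measure used in the trust-in-AI literature is "appropriate reliance": following the AI's advice when it is correct and overriding it when it is wrong. The sketch below is purely illustrative and not part of the project; the function name, the per-trial record format, and the specific metric definitions are assumptions, shown only to make the idea of reliance metrics tangible.

```python
# Hypothetical sketch of reliance metrics for an AI-advice user study.
# Each trial is recorded as (ai_correct, followed): whether the AI's
# recommendation was correct, and whether the participant followed it.

def reliance_metrics(trials):
    """Compute simple reliance measures from a list of (ai_correct, followed) trials."""
    correct = [t for t in trials if t[0]]      # trials where the AI was right
    incorrect = [t for t in trials if not t[0]]  # trials where the AI was wrong

    # Appropriate reliance: followed correct advice OR rejected incorrect advice.
    appropriate = sum(1 for ai_ok, followed in trials if followed == ai_ok)

    return {
        "appropriate_reliance": appropriate / len(trials),
        # Over-reliance: followed the AI when it was wrong.
        "over_reliance": (sum(1 for _, f in incorrect if f) / len(incorrect))
                         if incorrect else 0.0,
        # Under-reliance: ignored the AI when it was right.
        "under_reliance": (sum(1 for _, f in correct if not f) / len(correct))
                          if correct else 0.0,
    }

# Example: four decisions covering each combination of AI correctness
# and participant behaviour.
m = reliance_metrics([(True, True), (True, False), (False, True), (False, False)])
```

In a real study these behavioural measures would typically be paired with self-reported trust scales and qualitative data, in line with the mixed-methods analysis described in the methodology.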