Paper 1: Enhancing Trust in Human-AI Collaboration: A Conceptual Review of Operator-AI Teamwork
Abstract: Trust is vital to collaborative work between operators and AI. Yet, important elements of its nature remain to be investigated, including the dynamic process of trust formation, growth, decline, and even collapse between an operator and an AI. This review analyzes how the dynamic development of trust is shaped by team performance and its complex interaction with AI system characteristics, operator competencies, and contextual factors. It summarizes current concepts, theories, and models to propose a unified framework for enhancing trust. It analyzes the current understanding of trust in human-AI collaborations, highlighting key gaps and limitations, such as limited robustness, poor explainability, and the absence of effective collaboration design. The findings emphasize the importance of key components in this collaborative environment, including operator capabilities and AI technology characteristics, underscoring their impact on trust. This study advances understanding of the nature of operator-AI collaboration and the dynamics of trust calibration. Through a multidisciplinary approach, it also emphasizes the impact of explainability, transparency, and trust repair mechanisms. It highlights how operator-AI systems can be improved through design principles and the development of human competencies to enhance collaboration.
Keywords: operator-AI collaboration; trust calibration; trust dynamics; explainability; transparency; trust repair mechanisms; cross-cultural trust; Clinical Decision Support Systems (CDSS); AI autonomy and influence; ethical considerations in AI; team performance; AI system characteristics; operator competencies; contextual factors; framework; limitations; robustness; human-AI teaming; design principles; human competencies; predictability; reliability; understandability; over-reliance; automation bias; under-trust; trust measurement; trust erosion