03/24/2026
By Danielle Fretwell
The Francis College of Engineering, Department of Electrical and Computer Engineering, invites you to attend a doctoral dissertation defense by Zahra Rezaei Khavas on: "Trust Beyond Performance: Understanding Moral and Performance Trust Violations in Human–Robot Interaction."
Candidate Name: Zahra Rezaei Khavas
Degree: Doctoral
Defense Date: Friday, April 3, 2026
Time: Noon - 2 p.m.
Location: Perry 115
Committee:
- Advisor: Paul Robinette, Associate Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
- Co-Advisor: Reza Azadeh, Associate Professor, Computer Science, University of Massachusetts Lowell
- Jean-Francois Millithaler, Associate Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
- Justin W. Hart, Assistant Professor, Computer Science, The University of Texas at Austin
- Katherine Tsui, Leader, University Research Partnership (URP) Program, Toyota Research Institute
Brief Abstract:
Trust has a well-documented impact on human-robot interaction (HRI): appropriate trust in robotic collaborators is one of the leading factors influencing HRI performance. The factors that shape trust, and the effects of different types of trust violations by robots, therefore need to be investigated.
Problem/Gap 1: Factors affecting trust in human-drone interaction
In the initial phase of my studies, I focused on factors affecting human-drone interaction in an online setting. I developed a test bed to assess the effects of various factors on human trust, including drone-related, task-related, and environment-related elements. I also examined the impact of different drone failures on human trust, comparing results from real-world and simulated videos. The findings showed that drone-related features, particularly performance, had the greatest influence on human trust, and that more severe failures led to greater trust loss. Accurately designed simulated videos also yielded results similar to real-world footage.
Problem/Gap 2: Effects of robots violating various aspects of human trust
Researchers have widely acknowledged the multidimensional nature of trust in HRI, leading to trust scales that reflect its various dimensions. One such scale incorporates both a performance aspect and a moral aspect. In the second phase of my research, I focused on four main goals:
1. Designing a game that distinguishes between robots’ performance and moral trust violations.
2. Investigating whether individuals acknowledge the potential for robots to possess morality. Our results revealed that some people only consider the possibility of robots possessing morality after witnessing them violate moral trust.
3. Assessing the effects of robots’ performance and moral trust violations on humans. Our results showed that moral trust violations by robots lead to greater trust loss than performance trust violations. Additionally, people tend to retaliate against robots that violate moral trust.
4. Assessing how teammate identity moderates the effects of violations of the two trust aspects. Our results indicated that violations of either performance or moral trust by robots cause greater trust loss than similar violations by human teammates, with the difference being more pronounced for moral trust violations.
Problem/Gap 3: Feasibility of assessing the effects of violations of different trust aspects using physiological measures
Building on the findings of the second phase, in the third phase I investigated whether pupil dilation can serve as a physiological marker for detecting and differentiating human responses to breaches of performance-based versus morality-based trust. My findings indicate that pupil dilation is sensitive to the type of trust violation, and that the duration and frequency of pupil responses depend on which trust dimension is violated.