01/05/2026
By Danielle Fretwell
The Francis College of Engineering, Department of Electrical and Computer Engineering, invites you to attend a Doctoral Dissertation Proposal defense by Russell Perkins entitled: "Deciding When Robots Should Take Over (and When They Shouldn’t), An Interdependence Theory of Trust in Human–Robot Decision Making."
Candidate Name: Russell Perkins
Defense Date: Friday, Jan. 9, 2026
Time: 10 a.m. - noon
Location: Ball 314
Committee:
- Advisor: Paul Robinette, Ph.D., Professor, EECE, University of Massachusetts Lowell
- Henry Admoni, Ph.D., Professor, Robotics Institute, Carnegie Mellon University
- Reza Azadeh, Ph.D., Professor, Computer Science, University of Massachusetts Lowell
- Kavitha Chandra, Ph.D., Professor, EECE, University of Massachusetts Lowell
- Alan Wagner, Ph.D., Professor, Aerospace Engineering, Penn State University
Brief Abstract:
Robots are increasingly integrated into everyday situations where they are expected to function as effective teammates. A requirement for effective human-robot teaming is trust. Trust is understood as an attitude toward a system that involves a subjective evaluation of that system’s trustworthiness, which is an objective quality of the system. Trust calibration is the process of aligning the subjective evaluation of trust with the system’s actual trustworthiness. Calibrated trust requires an understanding of how humans interpret robot behavior and how different failure types affect trust.
This work frames trust calibration as a problem of understanding the interaction structure rather than as a purely signal-based model. It builds on Interdependence Theory, which models social interaction in terms of the control that agents have over the outcomes of an interaction. By representing human-robot interaction through control relationships, trust is treated as a property of structural interdependence. In this framework, trust calibration aligns the human’s trust with the structural conditions that determine the system’s trustworthiness.
We adopt a multidimensional view of trust and distinguish between performance-based trust and moral-based trust. Measures of performance trust relate to a robot’s capability and reliability, whereas measures of moral trust relate to transparency, integrity, and benevolence. Trust violations divide along these same dimensions. Performance violations occur when a robot fails to execute a task or performs poorly; moral violations occur when a robot acts to benefit itself at the expense of the team.
In a large-scale search-and-rescue game, participants interacted with a robot partner under conditions that explicitly separated performance violations from moral violations. Performance errors harmed the team outcome, whereas moral violations benefited the individual at the expense of the team. Two verbal calibration cues were tested: an apology that admitted fault and promised improvement, and a denial that externalized responsibility. Results show that cue effectiveness depends strongly on violation type.
Building on our previous experiments, we analyze how the structure of trust calibration cues (TCCs) affects trust by comparing static and adaptive strategies in a collaborative CAPTCHA task using a QTrobot. Static cues repeated the same apology after each error, while adaptive cues incorporated user explanations of errors. Adaptive cues produced larger and more consistent increases in competence-related trust dimensions, including reliability, capability, and transparency. Static cues reduced trust along these same dimensions. Moral trust dimensions showed limited change over short interaction horizons, suggesting that participants do not attribute robot errors to moral causes.
This thesis proposal outlines a future direction that operationalizes interdependence variables in a multi-armed bandit setting. Interdependence variables are cast as expected value differences under human and robot policies, enabling an online trust index that updates with experience. By monitoring agreement between predicted and observed behavior, the robot can trigger adaptive calibration cues when trust is misaligned with evidence. This framework links exploration, exploitation, and trust calibration in a unified decision-making model.
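The online trust index described above can be illustrated with a minimal sketch. This is not the proposal's actual formulation: the class name, update rules, smoothing parameter, and cue threshold are all assumptions chosen for clarity. The sketch tracks per-arm value estimates, predicts which arm a human teammate would choose, smooths the agreement between predicted and observed choices into a trust index, and flags when that index falls low enough that an adaptive calibration cue might be warranted.

```python
class TrustCalibrationBandit:
    """Illustrative sketch of an online trust index in a bandit setting.

    Assumptions (not from the proposal): sample-average value estimates,
    an exponential moving average over prediction agreement, and a fixed
    threshold for triggering a calibration cue.
    """

    def __init__(self, n_arms=2, alpha=0.2, cue_threshold=0.5):
        self.q = [0.0] * n_arms          # running value estimate per arm
        self.counts = [0] * n_arms       # pulls per arm
        self.alpha = alpha               # smoothing rate for the trust index
        self.cue_threshold = cue_threshold
        self.trust_index = 1.0           # start fully calibrated

    def predict_choice(self):
        # Assume the teammate picks the arm with the highest current estimate.
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def observe(self, observed_choice, reward):
        # Agreement between predicted and observed behavior drives the index.
        agree = 1.0 if observed_choice == self.predict_choice() else 0.0
        # Update the chosen arm's value estimate (incremental sample average).
        self.counts[observed_choice] += 1
        self.q[observed_choice] += (
            (reward - self.q[observed_choice]) / self.counts[observed_choice]
        )
        # Smooth agreement into the trust index.
        self.trust_index = (
            (1 - self.alpha) * self.trust_index + self.alpha * agree
        )
        # Signal that a calibration cue should fire when trust misaligns.
        return self.trust_index < self.cue_threshold
```

Under these assumptions, repeated disagreement between the predicted and observed choices decays the index geometrically until the cue fires, which mirrors the idea of triggering adaptive calibration cues only when trust is misaligned with accumulated evidence.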