Ph.D. Research

Unfair! Perceptions of Fairness in Human-Robot Teams

How team members are treated influences their performance in the team and their desire to be part of the team in the future. Prior research in human-robot teamwork proposes fairness definitions based on the work completed by each team member. However, metrics that properly capture people's perception of fairness in human-robot teaming remain a research gap. We assess how well objective metrics capture people's perception of fairness. First, we extend prior fairness metrics based on team members' capabilities and workload to a larger team. We also develop a new metric that quantifies the amount of time the robot spends working on the same task as each person. We conduct an online user study (n=95) and show that these metrics align with perceived fairness. Importantly, we discover bleed-over effects in people's assessments of fairness: when asked to rate fairness based on the amount of time the robot spends working with each person, participants drew on two factors (fairness based on the robot's time and fairness based on teammates' capabilities). This bleed-over effect is stronger when people are asked to assess fairness based on capability. From these insights, we propose design guidelines for algorithms that enable robotic teammates to consider fairness in their decision-making, maintaining positive team social dynamics and team task performance.

Team scenario used to assess fairness metrics.
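To make the time-based notion concrete, below is a minimal sketch of how evenness of the robot's co-work time could be scored. The normalization and function name are illustrative assumptions, not the exact metric from the study.

    import numpy as np

    def time_based_fairness(co_work_times):
        """Illustrative score of how evenly the robot splits its working
        time across human teammates (1.0 = perfectly even split).

        co_work_times: seconds the robot spent working on the same task
        as each person, e.g. [120, 95, 130] for a three-person team.
        """
        t = np.asarray(co_work_times, dtype=float)
        if t.sum() == 0:
            return 1.0  # robot worked alongside no one; trivially even
        shares = t / t.sum()          # fraction of robot time per person
        ideal = 1.0 / len(t)          # share under a perfectly even split
        # 1 minus the normalized total deviation from the even split
        return 1.0 - 0.5 * np.abs(shares - ideal).sum() / (1.0 - ideal)

    print(time_based_fairness([120, 95, 130]))  # ~0.91: fairly even
    print(time_based_fairness([300, 10, 10]))   # ~0.09: one person favored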

-------------------------------------

Valuable Robotic Teammates: Algorithms That Reason About the Multiple Dimensions of Human-Robot Teamwork

As robots enter our homes and workplaces, one of the roles they will have to fulfill is being a teammate. Prior approaches in human-robot teamwork enabled robots to reason about intent, decide when and how to help, and allocate tasks efficiently. However, these algorithms mostly focused on understanding intent and providing help, and assumed that teamwork is always present. Effective robotic teammates must be able to reason about the multi-dimensional aspects of teamwork. Working toward this challenge, we present empirical findings and an algorithm that enables robots to understand the human's intent, communicate their own intent, display effortful behavior, and provide help to optimize the team's task performance. Beyond task performance, people also care about being treated fairly. As future work, we propose an algorithm that reasons about both task performance and fairness to achieve lasting human-robot partnerships.

Evaluations of TASC Algorithm
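As a rough illustration of reasoning over multiple dimensions, the sketch below scores candidate actions by a weighted sum of task-performance and fairness values. The weighting scheme, names, and scores are assumptions for illustration, not the proposed algorithm itself.

    def select_action(actions, task_value, fairness_value, alpha=0.7):
        """Pick the action maximizing a weighted sum of expected task
        performance and fairness (both normalized to [0, 1]).

        task_value, fairness_value: callables mapping action -> score.
        alpha: weight on task performance; (1 - alpha) goes to fairness.
        """
        return max(actions, key=lambda a: alpha * task_value(a)
                                          + (1 - alpha) * fairness_value(a))

    actions = ["fetch_part", "assist_person_2"]
    best = select_action(
        actions,
        task_value={"fetch_part": 0.9, "assist_person_2": 0.6}.get,
        fairness_value={"fetch_part": 0.2, "assist_person_2": 0.8}.get,
    )
    print(best)  # "fetch_part"; a smaller alpha shifts the choice
                 # toward the fairness-preserving action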

-------------------------------------

Teammate Algorithm for Shared Cooperation (TASC)

For robots to be perceived as full-fledged team members, they must display intelligent behavior along multiple dimensions. One challenge is that even when the robot and human are on the same team, the interaction may not feel like teamwork to the human. We present a novel algorithm, the Teammate Algorithm for Shared Cooperation (TASC), motivated by Bratman's concept of shared cooperative activity (SCA) in human-human teamwork. We focus on enabling the robot to prioritize certain SCA facets in its action selection depending on the task. We evaluated TASC in three experiments with human users on Amazon Mechanical Turk, each using a different task. Our results show that TASC enabled participants to predict the robot's goal one robot move earlier and with greater confidence. The robot also helped reduce participants' energy usage in a simulated block-moving task. Altogether, these results show that considering the SCA facets in the robot's action selection improves teamwork.
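The sketch below illustrates the flavor of facet-prioritized action selection, using Bratman's three SCA facets (mutual responsiveness, commitment to the joint activity, and commitment to mutual support) as weighted scoring terms. The weights, scores, and function are hypothetical, not TASC's actual formulation.

    FACETS = ("responsiveness", "joint_commitment", "mutual_support")

    def facet_weighted_choice(actions, facet_scores, priorities):
        """Pick the action with the highest priority-weighted facet score.

        facet_scores: dict action -> dict facet -> score in [0, 1]
        priorities:   dict facet -> task-dependent weight (sums to 1)
        """
        def score(action):
            return sum(priorities[f] * facet_scores[action][f] for f in FACETS)
        return max(actions, key=score)

    # In a block-moving task we might weight mutual support most heavily.
    priorities = {"responsiveness": 0.2, "joint_commitment": 0.3,
                  "mutual_support": 0.5}
    facet_scores = {
        "move_own_block":    {"responsiveness": 0.5, "joint_commitment": 0.9,
                              "mutual_support": 0.1},
        "carry_heavy_block": {"responsiveness": 0.6, "joint_commitment": 0.6,
                              "mutual_support": 0.9},
    }
    print(facet_weighted_choice(facet_scores, facet_scores, priorities))
    # -> "carry_heavy_block"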

-------------------------------------

Defining Fairness in Human-Robot Teams

We seek to understand the human teammate's perception of fairness during a physical human-robot collaborative task in which certain subtasks leverage the robot's strengths and others leverage the human's. We conduct a user study (n=30) to investigate the effects of fluency (absent vs. present) and effort (absent vs. present) on participants' perception of fairness. Fluency controls whether the robot minimizes the idle time between the human's action and the robot's action. Effort controls whether the robot performs the tasks it is least skilled at, i.e., the most time-consuming tasks, as quickly as possible. We evaluated four human-robot teaming algorithms that consider different levels of fluency and effort. Our results show that effort and fluency improve fairness without trading off efficiency; in particular, displaying effort significantly increased participants' perceived fairness. Participants' perception of fairness was also influenced by team members' skill levels and the task type. To that end, we propose three notions of fairness for effective human-robot teamwork: equality of workload, equality of capability, and equality of task type.

Study exploring factors that influence people's perception of fairness in human-robot teams.
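A minimal sketch of how a disparity measure could operationalize these equality notions; the measure and the numbers are illustrative assumptions rather than definitions from the study.

    def disparity(per_member):
        """Generic disparity in a per-member quantity; 0.0 means the
        quantity is perfectly equal across team members.
        """
        lo, hi = min(per_member), max(per_member)
        return (hi - lo) / hi if hi else 0.0

    # Equality of workload: tasks completed by (human, robot).
    print(disparity([5, 4]))   # 0.2: mildly unequal workloads
    # Equality of task type: tedious tasks handled by each member.
    print(disparity([3, 3]))   # 0.0: tedious work is evenly shared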

-------------------------------------

Effects of Integrated Intent Recognition and Communication on Human-Robot Collaboration

Human-robot interaction research to date has investigated intent recognition and communication separately. In this paper, we explore the effects of integrating the robot's ability to generate intentional motion with its ability to predict the human's motion in a collaborative physical task. We implemented an intent recognition system to recognize the human partner's hand motion intent and a motion planning system to enable the robot to communicate its intent using legible and predictable motion. We tested this bi-directional intent system in a two-way within-subjects user study. Results suggest that an integrated intent recognition and communication system may facilitate more collaborative behavior among team members.

Study exploring the effects of integrated intent recognition and communication on human-robot collaboration.
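For intent recognition, one common formulation is Bayesian goal inference over partial motion, sketched below. This is a generic illustration under assumed straight-line efficiency costs, not necessarily the system implemented in this work.

    import numpy as np

    def infer_goal(trajectory, goals, beta=1.0):
        """Bayesian goal inference from a partial hand trajectory: a goal
        is likely if the observed motion is close to the efficient
        (straight-line) path toward it.
        """
        start, current = trajectory[0], trajectory[-1]
        traveled = sum(np.linalg.norm(b - a)
                       for a, b in zip(trajectory, trajectory[1:]))
        scores = []
        for g in goals:
            optimal = np.linalg.norm(g - start)              # best-case cost
            so_far = traveled + np.linalg.norm(g - current)  # observed cost
            scores.append(np.exp(-beta * (so_far - optimal)))
        probs = np.array(scores)
        return probs / probs.sum()

    goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    traj = [np.array([0.0, 0.0]), np.array([0.3, 0.05]), np.array([0.6, 0.1])]
    print(infer_goal(traj, goals))  # ~[0.66, 0.34]: first goal more likely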