The world is changing, and people with it. People adapt to technology rapidly and, often, in unexpected ways. What would it look like if robots could predict human behavior to work better with them? If the human knows the robot adapts to them, their behavior changes. If the human knows that the robot knows that the human's behavior is changing … Theory of Mind, ad nauseam. Humans already "read the minds" of technology: people often talk about how they interact with the {YouTube, TikTok, Instagram, etc.} "algorithm" to influence future recommendations. These recommendation systems observe interaction behavior and suggest content that maximizes user interaction time. Can we do the same with robots, maximizing user safety or task completion accuracy?

My research uses explicit cognitive state assessments to predict future human behavior when interacting with virtual robots.

How do you feel about this cute little Astrobee robot? It floats around the International Space Station, mapping modules and taking sensor readings.


Any assessment you make relies on your knowledge of robots, the ISS, or the picture above. This is a static assessment: the slider position indicates something about how you would interact with Astrobee, but a single data point of trust gives us little to work with.

Imagine you’ve been watching Astrobee map the Kibō module of the ISS, a supervisory task. You notice the mapping coverage and update your trust level.

Astrobee is destined for more dexterous work on the Lunar Gateway Station in the coming years. There, you may be watching Astrobee while commanding other parts of the station as part of a maintenance task, a collaborative task. You notice Astrobee move slower as it approaches the module latch, and you update your trust level.

I've used a literal slider to measure trust in these examples; in practice, trust might instead be inferred during a conversation or from human actions. The distinction I drew in these examples is between trust changing due to performance (mapping coverage) and due to behavior (robot speed). Recent research on trust relies heavily on humans' trust changing in the face of performance degradation. My experiments use both robot performance and robot behavior as independent variables when measuring trust. We hope the findings from this work improve robots' sense of empathy when interacting with their human counterparts!
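To make the performance-vs-behavior distinction concrete, here is a minimal toy sketch of a trust-update model. It is purely illustrative and not the model from my experiments: the names (`TrustModel`, `alpha`, `beta`, `behavior_cue`) and the linear update rule are my own assumptions, chosen only to show how two separate signals could drive one trust estimate on a 0-to-1 slider.

```python
from dataclasses import dataclass

@dataclass
class TrustModel:
    """Hypothetical linear trust-update model (illustrative only).

    alpha: weight on performance feedback (e.g., mapping coverage)
    beta:  weight on behavior cues (e.g., slowing near a latch)
    """
    alpha: float = 0.3
    beta: float = 0.2
    trust: float = 0.5  # start at neutral trust on a 0-1 slider

    def update(self, performance: float, behavior_cue: float) -> float:
        # Performance pulls trust toward the observed success level;
        # behavior cues nudge trust up or down directly.
        self.trust += self.alpha * (performance - self.trust)
        self.trust += self.beta * behavior_cue
        self.trust = min(1.0, max(0.0, self.trust))  # stay on the slider
        return self.trust

model = TrustModel()
t1 = model.update(performance=0.9, behavior_cue=0.0)  # good mapping coverage
t2 = model.update(performance=0.9, behavior_cue=0.5)  # cautious latch approach
```

In this sketch, sustained good performance raises trust gradually, while a reassuring behavior cue (the slow, careful approach) gives an additional bump, so `t2 > t1`. A real model would of course need to be fit against measured slider data.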

2023-08-22 | Planetary Science Summer School

The cantaloupe-looking moon of Neptune, Triton, has been on the to-return list of planetary scientists since Voyager 2 flew by in 1989.

2022-11-22 | Smarthab Workshop

I had the opportunity to travel to San Antonio, Texas, to explore ideas and challenges related to Smart Habitats.