Augmented reality (AR) headsets like the Microsoft HoloLens let users interact with virtual objects in new ways. These headsets support hands-free, immersive interaction and have often been touted as the technology that will replace our phones. One common way to provide a hands-free interface is through a speech or dialogue agent, such as Siri or Alexa. We therefore need to explore how best to implement these agents to support applications in AR.
I am currently investigating how agents are visually represented in AR (e.g., a visual embodiment versus speech only) and how users perceive and respond to different agent designs. I will be presenting part of this work at CHI 2019.