Mode Switch Assistance To Maximize Human Intent Disambiguation


Authors: Deepak Edakkattil Gopinath, Brenna Argall

In this paper, we develop an algorithm for goal disambiguation with a shared-control assistive robotic arm. Assistive systems are often required to infer human intent, and this inference frequently becomes a bottleneck for providing assistance quickly and accurately. We introduce the notion of inverse legibility, in which human-generated actions are legible enough for the robot to infer the human's intent confidently and accurately. The proposed disambiguation paradigm seeks to elicit legible control commands from the human by selecting control modes that maximally disambiguate between the various goals in the scene. We present simulation results that examine the robustness of our algorithm and the impact of the choice of confidence function on system performance. The simulation results suggest that the disambiguating control mode computed by our algorithm produces more intuitive results when the confidence function captures the "directedness" of motion toward a goal. We also present a pilot study that explores the efficacy of the algorithm on real hardware. Preliminary results indicate that the proposed assistance paradigm successfully decreased task effort, measured as the number of mode switches, across interfaces and tasks.
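
As a hedged illustration of the mode-selection step described above (the exact metric used in the paper may differ), suppose $\mathcal{G}$ is the set of candidate goals, $\mathcal{K}$ the set of control modes of the robotic arm, and $c_g(k)$ a confidence function scoring how strongly user motion in mode $k$ would indicate goal $g$. One natural disambiguation criterion is to place the robot in the mode

\[
k^{*} = \arg\max_{k \in \mathcal{K}} D_k, \qquad D_k = c_{\hat{g}(k)}(k) \;-\; \max_{g \neq \hat{g}(k)} c_{g}(k), \qquad \hat{g}(k) = \arg\max_{g \in \mathcal{G}} c_g(k),
\]

i.e., the mode in which the gap between the best and second-best goal confidences is largest, so that the human's next control command is maximally informative about their intent. The specific form of $D_k$ here is an assumed example for illustration only.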