### Breaking the Human-Robot Deadlock: Exceeding Shared Control Performance Limits through Human-Robot Interaction Sparsity

Authors: Peter Trautman

We prove that classical shared control fails to optimize human and robot agreement and intent if the human or robot model is multimodal in a Gaussian process (GP) basis (i.e., when intention ambiguity is present). For example, shared controllers derived from the \emph{function allocation}~\cite{fitts-list,parasuraman-loa} or \emph{levels of autonomy}~\cite{sheridan-loa} paradigms are classical, and thus fail to optimize human and robot agreement and intent when intention ambiguity is present. Practically, this suboptimality can manifest as unnecessary and unresolvable disagreement (an unnecessary deadlock). For instance, if the models supply only the human's strongest preference (go left) and the robot's strongest preference (go right), arbitration admits just two solutions: freeze in place, or blend the predictions into a trajectory that collides with the very obstacle both agents were trying to avoid. However, multimodal GPs carry rich compromise information: the robot is often willing to go either left or right, so deadlock can be avoided. The inability of classical methods to optimize human and robot agreement and intent, which is also the cause of unnecessary deadlock, stems from arbitration occurring over \emph{model samples} rather than over \emph{models} (Figure~\ref{fig:block-diagram}). Our key insight is thus to arbitrate over the human and robot distributions themselves, thereby optimizing human and robot agreement and intent. Our key contribution is a computationally efficient distribution arbitration method: if the human and robot models carry $\nh$ and $\nr$ GP modes, the joint (naively) has at least $\nh\nr$ modes. In our approach, deadlock modes receive vanishingly small mixture coefficients, while the $\nmin = \min(\nh, \nr)$ non-zero coefficients correspond to deadlock-free actions. Surprisingly, then, our joint has only $\nmin$ non-trivial modes, no more than either individual agent model possesses.
This sparsity is achieved by leveraging interaction’s ability to \emph{constrain} shared control actions. We call our approach $\nmin$-sparse generalized shared control.
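The sparsity mechanism can be illustrated numerically. Below is a minimal sketch, not the paper's implementation: each agent is reduced to a one-dimensional Gaussian mixture over a "left/right" decision variable (all means, variances, and weights are invented for illustration), and the joint is formed as the product of the two mixtures. The component weights follow the standard product-of-Gaussians normalizer, so disagreeing (deadlock) pairs collapse toward zero and only $\min(\nh, \nr)$ modes survive.

```python
import numpy as np

def gaussian(x, mu, var):
    """Univariate Gaussian density N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Hypothetical example (values are illustrative, not from the paper):
# human mixture strongly prefers left (-1) but is slightly open to right (+1);
# robot mixture is equally willing to go left or right.
h_means, h_vars, h_wts = np.array([-1.0, 1.0]), np.array([0.01, 0.01]), np.array([0.9, 0.1])
r_means, r_vars, r_wts = np.array([-1.0, 1.0]), np.array([0.01, 0.01]), np.array([0.5, 0.5])

# Product of two Gaussian mixtures: component (i, j) gets weight
#   w_ij = a_i * b_j * N(mu_h_i; mu_r_j, var_h_i + var_r_j),
# the usual product-of-Gaussians normalization constant.
W = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        W[i, j] = h_wts[i] * r_wts[j] * gaussian(h_means[i], r_means[j],
                                                 h_vars[i] + r_vars[j])
W /= W.sum()  # normalize the joint mixture weights

# Disagreeing pairs (left, right) and (right, left) have weight ~ exp(-100):
# effectively zero. Only the agreeing pairs carry non-trivial mass.
n_nontrivial = int((W > 1e-6).sum())
print(n_nontrivial)  # 2, i.e. min(n_h, n_r), versus the naive n_h * n_r = 4
```

Although the joint naively has $2 \times 2 = 4$ components, the two deadlock components are suppressed by the distance between the agents' means, leaving only the two deadlock-free (left, left) and (right, right) modes with appreciable weight.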