Safe Visual Navigation via Deep Learning and Novelty Detection


Authors: Charlie Richter, Nicholas Roy

Robots that use learned models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. Deep learning methods provide state-of-the-art capabilities for processing raw sensor data such as images; however, they may produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent methods for quantifying neural network uncertainty, such as MC Dropout, may not provide suitable or efficient uncertainty estimates when queried with novel inputs in domains such as image-based robot navigation. Rather than unconditionally trusting the predictions of a neural network on unpredictable real-world data, our primary contribution is to show that we can use an autoencoder to recognize when a query is novel and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep-learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar environments, while reverting to the prior behavior in novel environments so that it can safely collect additional data and continually improve. A video illustrating our approach is available at http://www.ccode.ml/XgEhf.
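
The abstract describes gating a learned controller with an autoencoder-based novelty check. Below is a minimal sketch of that idea, not the authors' implementation: a small convolutional autoencoder is assumed to have been trained on the robot's familiar camera images, and a high per-image reconstruction error is treated as a signal of novelty that triggers fallback to a safe prior policy. The names `select_action`, `LearnedPolicy`, `safe_prior_policy`, and the input size are illustrative assumptions.

```python
# Hypothetical sketch of autoencoder-gated control (not the paper's code).
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for 1x64x64 grayscale images."""

    def __init__(self):
        super().__init__()
        # Encoder compresses the image to a low-dimensional feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs the image from that feature map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_error(autoencoder, image):
    """Mean squared reconstruction error; large values suggest a novel input."""
    with torch.no_grad():
        return torch.mean((autoencoder(image) - image) ** 2).item()


def select_action(image, autoencoder, learned_policy, safe_prior_policy, threshold):
    """Trust the learned policy only on familiar-looking inputs."""
    if reconstruction_error(autoencoder, image) > threshold:
        # Novel input: revert to the safe prior behavior.
        return safe_prior_policy(image)
    # Familiar input: use the faster learned policy.
    return learned_policy(image)
```

In practice the novelty threshold would be calibrated on held-out training images, for example by taking a high percentile of their reconstruction errors, so that familiar inputs rarely trigger the fallback.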