ViNT: A Foundation Model for Visual Navigation

UC Berkeley

Oral Talk at Conference on Robot Learning (CoRL) 2023
Atlanta, Georgia

Live Demo at Conference on Robot Learning (CoRL) 2023
Live Demo at Robot Learning Workshop, NeurIPS 2023
Oral Talk at Bay Area Machine Learning Symposium (BayLearn) 2023

Abstract

General-purpose pre-trained models ("foundation models") have enabled practitioners to produce generalizable solutions for individual machine learning problems with datasets that are significantly smaller than those required for learning from scratch. Such models are typically trained on large and diverse datasets with weak supervision, consuming much more training data than is available for any individual downstream application.

In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation. ViNT is trained with a general goal-reaching objective that can be used with any navigation dataset, and employs a flexible Transformer-based architecture to learn navigational affordances and enable efficient adaptation to a variety of downstream navigational tasks. ViNT is trained on a number of existing navigation datasets, comprising hundreds of hours of robotic navigation from a variety of different robotic platforms, and exhibits positive transfer, outperforming specialist models trained on singular datasets.

ViNT can be augmented with diffusion-based subgoal proposals to explore novel environments, and can solve kilometer-scale navigation problems when equipped with long-range heuristics. ViNT can also be adapted to novel task specifications with a technique inspired by prompt-tuning, where the goal encoder is replaced by an encoding of another task modality (e.g., GPS waypoints or routing commands) embedded into the same space of goal tokens. This flexibility and ability to accommodate a variety of downstream problem domains establishes ViNT as an effective foundation model for mobile robotics.

Summary Video

ViNT Architecture

ViNT uses a Transformer-based architecture: an EfficientNet CNN encodes the current and past visual observations and the goal into tokens, and the Transformer predicts the temporal distance to the goal and normalized actions in an embodiment-agnostic manner.

ViNT architecture
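The forward pass described above can be sketched at a shape level. This is a minimal NumPy sketch, not the real model: the dimensions, the random-projection "encoder", and the single attention layer are all illustrative stand-ins for the EfficientNet features and the larger Transformer used by ViNT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen for illustration only.
P, D, H = 5, 32, 5             # context length, token dim, action horizon
IMG_SHAPE = (8, 8, 3)          # toy image

# Frozen random weights stand in for learned parameters.
W_enc = rng.standard_normal((np.prod(IMG_SHAPE), D)) * 0.01  # "EfficientNet"
W_q, W_k, W_v = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
W_dist = rng.standard_normal((D, 1))        # temporal-distance head
W_act = rng.standard_normal((D, H * 2))     # normalized (x, y) waypoint head

def encode(img):
    """Project an image to a single D-dim token (encoder stand-in)."""
    return img.reshape(-1) @ W_enc

def attend(tokens):
    """One scaled dot-product self-attention layer over the token sequence."""
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    scores = q @ k.T / np.sqrt(D)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def vint_forward(observations, goal):
    """Tokenize P observations + 1 goal, attend, pool, and predict
    (temporal distance, H normalized 2-D waypoints)."""
    tokens = np.stack([encode(o) for o in observations] + [encode(goal)])
    pooled = attend(tokens).mean(axis=0)
    distance = float(pooled @ W_dist)
    actions = np.tanh((pooled @ W_act).reshape(H, 2))  # squashed to [-1, 1]
    return distance, actions

obs = [rng.random(IMG_SHAPE) for _ in range(P)]
goal = rng.random(IMG_SHAPE)
dist, acts = vint_forward(obs, goal)
print(acts.shape)  # (5, 2)
```

Normalizing the actions (here via `tanh`) is what makes the prediction embodiment-agnostic: each robot rescales the waypoints by its own speed and size.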

Search Overview

ViNT can explore previously unseen environments by employing a topological graph-based global planner. An image-to-image diffusion model proposes diverse exploration targets which are spatially grounded using ViNT (yellow), and scored using a goal-directed heuristic h. Subgoals are added to the topological graph and executed using the ViNT policy.

Overview of heuristic-guided search with ViNT
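The search loop above can be sketched in a few lines. All three components here are hypothetical stubs: `propose_subgoals` stands in for the image-to-image diffusion proposer, `vint_distance` for ViNT's temporal-distance prediction (the spatial grounding step), and `heuristic_h` for the goal-directed heuristic h; real subgoals are images, not 2-D points.

```python
import math

def propose_subgoals(node):
    """Stub for the diffusion model: candidate subgoals near the node."""
    x, y = node
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1)]]

def vint_distance(a, b):
    """Stub for ViNT's temporal-distance head (cost to reach a subgoal)."""
    return math.dist(a, b)

def heuristic_h(subgoal, goal):
    """Stub for the goal-directed heuristic h (estimated cost-to-go)."""
    return math.dist(subgoal, goal)

def explore(start, goal, steps=10):
    """Greedy frontier search: grow a topological graph by repeatedly
    committing to the proposed subgoal with the best combined score."""
    graph = {start: []}          # adjacency list = topological graph
    node = start
    for _ in range(steps):
        candidates = propose_subgoals(node)
        # Score = cost to reach the subgoal + heuristic cost-to-go.
        best = min(candidates,
                   key=lambda s: vint_distance(node, s) + heuristic_h(s, goal))
        graph.setdefault(best, [])
        graph[node].append(best)  # new edge; the ViNT policy executes it
        node = best
        if node == goal:
            break
    return graph, node

graph, reached = explore(start=(0, 0), goal=(3, 2))
print(reached)  # (3, 2)
```

The greedy commitment here is a simplification; the point is the division of labor, with the diffusion model proposing, ViNT grounding, and h steering the search toward the goal.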

Long-Range Navigation with Context


ViNT can solve long-range navigation problems when equipped with a long-range heuristic. Here, we show ViNT solving a 1.5 km navigation problem in a previously unseen environment, using a heuristic that estimates the distance to the goal with a pre-trained depth model.

To further show the range of exploration behaviors ViNT supports, we deploy a LoCoBot to explore an office floor from a single starting point with two different position goals guiding the search. The different goals lead the robot to two different parts of the building, and both trajectories succeed in reaching their goals.


Adaptation to Downstream Tasks


Beyond its core functionality as an image goal-conditioned model, ViNT's strong navigational priors can be adapted to a variety of downstream tasks by fine-tuning part or all of the model in novel environments or with new modalities of data.

ViNT fine-tuning
ViNT can transfer navigational affordances to novel tasks (40% success zero-shot), and efficiently masters the task (80% success) with less than 1 hour of fine-tuning data. Fine-tuned ViNT outperforms a specialist model trained with 5× the data.
ViNT adaptation
ViNT can easily be adapted to other common forms of goal-specification by learning a mapping from the desired goal modality to the ViNT goal token.
This allows ViNT to be adapted to a variety of new robots, environments, objectives, and goal modalities. One such example is the massively out-of-distribution task of lane-keeping with a car in a simulated urban environment, guided by high-level routing commands, even though ViNT was trained only on real data and only to reach image goals.
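The adaptation recipe above amounts to training a small encoder for the new modality while keeping the backbone fixed. This is a toy sketch under assumptions: the token dimension, the two-layer MLP adapter, and `frozen_vint_policy` are all illustrative stand-ins, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 32  # goal-token dimension (an assumed value for this sketch)

W_head = rng.standard_normal((D, 2))  # toy action head of the frozen policy

def frozen_vint_policy(goal_token, observation_tokens):
    """Stand-in for the frozen ViNT backbone consuming a goal token.
    During adaptation only the adapter below would be trained."""
    return (goal_token + observation_tokens.mean(axis=0)) @ W_head

# Trainable adapter: maps a 2-D GPS waypoint into the D-dim goal-token
# space, replacing the image-goal encoder.
W1 = rng.standard_normal((2, 64)) * 0.1
W2 = rng.standard_normal((64, D)) * 0.1

def gps_to_goal_token(waypoint):
    """Two-layer ReLU MLP embedding the new modality as a goal token."""
    return np.maximum(waypoint @ W1, 0.0) @ W2

token = gps_to_goal_token(np.array([12.5, -3.0]))
action = frozen_vint_policy(token, rng.standard_normal((5, D)))
print(token.shape, action.shape)  # (32,) (2,)
```

Because the new encoding lives in the same space as the original goal tokens, the rest of the model is reused unchanged, which is what makes this prompt-tuning-style adaptation data-efficient.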

Emergent Behaviors


Implicit Navigation Preferences

ViNT exhibits implicit preferences for following paved roads and narrow hallways while searching previously unseen environments, enabling efficient exploration.

Robustness to Dynamic Pedestrians

ViNT can successfully navigate around a crowd of dynamic pedestrians and reach the goal behind them, despite its simple self-supervised training objective.

BibTeX

@inproceedings{shah2023vint,
  title     = {Vi{NT}: A Foundation Model for Visual Navigation},
  author    = {Dhruv Shah and Ajay Sridhar and Nitish Dashora and Kyle Stachowicz and Kevin Black and Noriaki Hirose and Sergey Levine},
  booktitle = {7th Annual Conference on Robot Learning},
  year      = {2023},
  url       = {https://arxiv.org/abs/2306.14846}
}

The website design was adapted from Nerfies.