Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding

¹School of AI, Shanghai Jiao Tong University, ²EvoMind Tech, ³IAAR-Shanghai, ⁴University of Cambridge

*Equal Contribution, Corresponding Author

Abstract

Vision-Language-Action (VLA) models have emerged as a promising framework for enabling generalist robots capable of perceiving, reasoning, and acting in the real world. These models usually build upon pretrained Vision-Language Models (VLMs), which excel at semantic understanding thanks to large-scale image-text pretraining. However, existing VLMs typically lack precise spatial understanding, as they are primarily tuned on 2D image-text pairs without 3D supervision. To address this limitation, recent approaches incorporate explicit 3D inputs such as point clouds or depth maps, but this requires additional depth sensors or pretrained depth-estimation models, whose outputs can be noisy or unreliable. In contrast, our work introduces a plug-and-play module that implicitly injects 3D geometry features into VLA models by leveraging an off-the-shelf visual geometry foundation model. This integration provides the model with depth-aware visual representations, improving its ability to understand the geometric structure of the scene and the spatial relationships among objects from RGB images alone. We evaluate our method on a set of spatially challenging tasks in both simulation and the real world. Extensive experiments show that our method significantly improves the performance of state-of-the-art VLA models across diverse scenarios.

Architecture of Evo-0

[Figure: Overview of the Evo-0 architecture]
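
As a rough illustration of the plug-and-play module described in the abstract, the sketch below fuses tokens from a frozen, off-the-shelf visual geometry foundation model into the VLM's visual token stream through a lightweight cross-attention adapter. All module names, dimensions, and the zero-initialized gate are illustrative assumptions, not the released Evo-0 implementation.

import torch
import torch.nn as nn

class GeometryFusionAdapter(nn.Module):
    # Fuses tokens from a frozen visual geometry encoder into the VLM's visual
    # tokens via cross-attention, giving the policy depth-aware features from
    # RGB images alone.
    def __init__(self, vlm_dim=1024, geo_dim=768, num_heads=8):
        super().__init__()
        self.geo_proj = nn.Linear(geo_dim, vlm_dim)                 # align feature widths
        self.cross_attn = nn.MultiheadAttention(vlm_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(vlm_dim)
        self.gate = nn.Parameter(torch.zeros(1))                    # zero-init: adapter starts as identity

    def forward(self, vlm_tokens, geo_tokens):
        # vlm_tokens: (B, N_v, vlm_dim) visual tokens from the VLM backbone
        # geo_tokens: (B, N_g, geo_dim) tokens from the geometry foundation model
        geo = self.geo_proj(geo_tokens)
        fused, _ = self.cross_attn(query=vlm_tokens, key=geo, value=geo)
        # Gated residual keeps the pretrained VLM pathway intact at initialization.
        return self.norm(vlm_tokens + torch.tanh(self.gate) * fused)

# Example shapes (hypothetical): 256 VLM visual tokens, 196 geometry tokens.
adapter = GeometryFusionAdapter()
fused_tokens = adapter(torch.randn(2, 256, 1024), torch.randn(2, 196, 768))  # (2, 256, 1024)

One plausible recipe, consistent with the plug-and-play framing, is to keep both the VLM backbone and the geometry encoder frozen and train only this adapter together with the action head; the paper's exact training setup may differ.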

Simulation Experiments

[Figure: Simulation experiment results]

Real-world Experiments

[Figure: Real-world task setups and results]
  1. Centering a cylinder on a target.
    The robot is required to align a cylindrical object precisely at the center of a marked target area on the table. The task resembles target shooting: the target has concentric rings, and scoring depends on which ring the center of the cylinder lands in; the closer to the center, the higher the score (see the scoring sketch after this list).
  2. Peg-in-hole insertion.
    This task requires the robot to insert a cylindrical peg into one of three tightly fitting holes on a board. It demands accurate alignment in 3D space, as even a small tilt or offset can cause the task to fail.
  3. Middle bottle grasping.
    Three bottles are closely placed in a row, and the robot is instructed to pick the middle one. This setup mimics a grocery store scenario, where items are densely arranged on shelves. Success is defined as picking up the middle bottle without knocking over the adjacent ones.
  4. Can pick-and-place.
    In this task, the robot must pick up a standard can and place it at a designated spot on a shelf. The target placement varies across trials in both horizontal position and height, requiring the model to generalize its spatial understanding to different configurations.
  5. Transparent object pick-and-place.
    The setup is similar to the previous task but involves transparent objects such as glass bottles. This poses an additional challenge: transparent materials are often poorly captured by RGB sensors and are prone to glare, making them difficult to perceive and localize.
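
To make the scoring rule for task 1 concrete, here is a small hypothetical helper; the ring radii (in cm) and point values are illustrative assumptions, not the paper's exact protocol.

# Hypothetical scoring rule for the cylinder-centering task (task 1).
def centering_score(offset_cm: float,
                    rings=((1.0, 3), (2.5, 2), (4.0, 1))) -> int:
    # Return the points of the innermost ring containing the cylinder's center,
    # given its measured distance from the target center; 0 outside all rings.
    for radius, points in rings:
        if offset_cm <= radius:
            return points
    return 0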

Qualitative results of our model in real-world tasks

[Figure: Qualitative real-world rollouts]

Continuous Rollouts without Clipping (10x)

[Videos: three side-by-side rollouts comparing the baseline (π0) with Evo-0 (ours)]

BibTeX

@article{lin2025evo,
  title={Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding},
  author={Lin, Tao and Li, Gen and Zhong, Yilei and Zou, Yanwen and Zhao, Bo},
  journal={arXiv preprint arXiv:2507.00416},
  year={2025}
}