Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding

1School of AI, Shanghai Jiao Tong University, 2EvoMind Tech, 3IAAR-Shanghai

*Equal Contribution, Corresponding Author

Abstract

Vision-Language-Action (VLA) models have emerged as a promising framework for building generalist robots that perceive, reason, and act in the real world. These models usually build upon pretrained Vision-Language Models (VLMs), which excel at semantic understanding thanks to large-scale text pretraining. However, VLMs typically lack precise spatial understanding, as they are primarily tuned on 2D image-text pairs without 3D supervision. To address this limitation, recent approaches incorporate explicit 3D inputs such as point clouds or depth maps, but this requires additional depth sensors or relies on error-prone depth estimation. In contrast, our work introduces a plug-and-play module that implicitly injects 3D geometry features into VLA models by leveraging an off-the-shelf visual geometry foundation model. We design five spatially challenging tasks that require precise spatial understanding to validate the effectiveness of our method. Extensive evaluations show that our method significantly improves the performance of state-of-the-art VLA models across diverse scenarios.
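To make the idea concrete, below is a minimal PyTorch sketch of what such implicit injection could look like: visual tokens from the VLM backbone cross-attend to features produced by a frozen visual geometry foundation model, with a zero-initialized gate so training starts from the VLM's original behavior. The module and variable names (GeometryInjector, d_vlm, d_geo, etc.) are illustrative assumptions, not the exact Evo-0 implementation.

```python
import torch
import torch.nn as nn

class GeometryInjector(nn.Module):
    """Hypothetical plug-and-play fusion block: VLM visual tokens attend to
    geometry tokens produced by a frozen visual geometry foundation model."""

    def __init__(self, d_vlm: int, d_geo: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(d_geo, d_vlm)       # map geometry features into the VLM width
        self.attn = nn.MultiheadAttention(d_vlm, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_vlm)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init gate: fusion starts as an identity mapping

    def forward(self, vlm_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # vlm_tokens: (B, N_v, d_vlm), geo_tokens: (B, N_g, d_geo)
        geo = self.proj(geo_tokens)
        fused, _ = self.attn(query=vlm_tokens, key=geo, value=geo)
        return self.norm(vlm_tokens + torch.tanh(self.gate) * fused)

# Usage sketch: the geometry encoder stays frozen; only the injector (and the VLA head) are trained.
B, N_v, N_g, d_vlm, d_geo = 2, 256, 196, 1024, 768
injector = GeometryInjector(d_vlm, d_geo)
vlm_tokens = torch.randn(B, N_v, d_vlm)   # visual tokens from the VLM backbone
geo_tokens = torch.randn(B, N_g, d_geo)   # features from the frozen geometry foundation model
out = injector(vlm_tokens, geo_tokens)    # same shape as vlm_tokens: (B, N_v, d_vlm)
```

The zero-initialized gate is a common choice for adapter-style modules: at the start of fine-tuning the injected branch contributes nothing, so the pretrained VLM behavior is preserved and the geometry features are blended in gradually.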

Architecture of Evo-0


Tasks setup

  1. Centering a cylinder on a target.
    The robot is required to align a cylindrical object precisely at the center of a marked target area on the table. This task resembles target shooting: the target has concentric rings, and scoring is based on which ring the center of the cylinder falls into. The closer to the center, the higher the score (see the scoring sketch after this list).
  2. Peg-in-hole insertion.
    This task requires the robot to insert a cylindrical peg into one of three tightly fitting holes on a board. This necessitates accurate alignment in 3D space, as a small tilt or offset can cause the task to fail.
  3. Middle bottle grasping.
    Three bottles are closely placed in a row, and the robot is instructed to pick the middle one. This setup mimics a grocery store scenario, where items are densely arranged on shelves. Success is defined as picking up the middle bottle without touching or knocking over the adjacent ones.
  4. Can pick-and-place.
    In this task, the robot must pick up a standard can and place it in a designated spot on a shelf. The placement location varies across trials in both position and height, requiring the model to generalize its spatial understanding to different configurations.
  5. Transparent object pick-and-place.
    The task setup is similar to the previous one, but involves transparent objects such as glass bottles. This presents an additional challenge, since transparent materials are often poorly captured by RGB sensors and are prone to glare, making them difficult to perceive and localize.
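As a concrete illustration of the rubric in task 1, here is a small Python sketch of ring-based scoring. The ring radii and point values are assumptions for illustration, not the exact rubric used in our evaluation.

```python
import math

def ring_score(cylinder_xy, target_xy, ring_radii=(0.01, 0.02, 0.03), ring_points=(3, 2, 1)):
    """Score a trial by which concentric ring the cylinder's center lands in.
    Radii (meters) and point values are illustrative assumptions."""
    dist = math.hypot(cylinder_xy[0] - target_xy[0], cylinder_xy[1] - target_xy[1])
    for radius, points in zip(ring_radii, ring_points):
        if dist <= radius:
            return points
    return 0  # center landed outside the outermost ring

# Example: a placement 1.5 cm from the bull's-eye falls in the second ring.
print(ring_score((0.015, 0.0), (0.0, 0.0)))  # -> 2
```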

Qualitative results of our model in real-world tasks


Continuous Rollouts without Clipping (10x)

Three side-by-side rollout comparisons: Baseline (Pi0) vs. Ours.

BibTeX

@article{lin2025evo,
  title={Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding},
  author={Lin, Tao and Li, Gen and Zhong, Yilei and Zou, Yanwen and Zhao, Bo},
  journal={arXiv preprint arXiv:2507.00416},
  year={2025}
}