Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey
Abstract
Vision-Language-Action models for robotic control face computational and memory challenges on edge platforms, prompting research into efficient architectures and training strategies.
Vision-Language-Action (VLA) models extend vision-language models to embodied control by mapping natural-language instructions and visual observations to robot actions. Despite their capabilities, VLA systems face significant challenges due to their massive computational and memory demands, which conflict with the constraints of edge platforms such as on-board mobile manipulators that require real-time performance. Addressing this tension has become a central focus of recent research. In light of the growing effort toward more efficient and scalable VLA systems, this survey provides a systematic review of approaches for improving VLA efficiency, with an emphasis on reducing latency, memory footprint, and training and inference costs. We categorize existing solutions along four dimensions: model architecture, perception features, action generation, and training/inference strategies, summarizing representative techniques within each category. Finally, we discuss future trends and open challenges, highlighting directions for advancing efficient embodied intelligence.
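To make the problem setup concrete, the sketch below shows a toy VLA-style policy that maps an image (given as flattened patches) and a tokenized instruction to a continuous robot action, together with a simple norm-based visual-token-pruning helper as one illustration of the perception-side efficiency techniques the survey covers. This is a minimal sketch under stated assumptions: all module names, sizes, and the pruning criterion are hypothetical and chosen only for illustration, not the architecture of any specific model discussed in the paper.

```python
# Hypothetical, minimal VLA-style policy: (image patches, instruction tokens) -> action.
# Real VLA systems use large pretrained vision-language backbones and richer action heads.
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, action_dim=7):
        super().__init__()
        # Vision branch: project flattened 16x16 RGB patches into a shared embedding space.
        self.patch_proj = nn.Linear(3 * 16 * 16, embed_dim)
        # Language branch: embed instruction token ids.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Fusion: a single transformer encoder layer over the concatenated token sequence.
        self.fusion = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        # Action head: pooled multimodal feature -> continuous action
        # (e.g., 6-DoF end-effector delta plus a gripper command).
        self.action_head = nn.Linear(embed_dim, action_dim)

    def forward(self, patches, instruction_ids):
        # patches: (B, N, 3*16*16); instruction_ids: (B, T)
        vis_tokens = self.patch_proj(patches)
        txt_tokens = self.token_embed(instruction_ids)
        fused = self.fusion(torch.cat([vis_tokens, txt_tokens], dim=1))
        return self.action_head(fused.mean(dim=1))  # predicted robot action

def prune_visual_tokens(vis_tokens, keep_ratio=0.5):
    # Training-free token-pruning sketch: keep the visual tokens with the largest
    # L2 norm as a crude importance proxy (hypothetical criterion, for illustration only).
    k = max(1, int(vis_tokens.shape[1] * keep_ratio))
    idx = vis_tokens.norm(dim=-1).topk(k, dim=1).indices              # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, vis_tokens.shape[-1])      # (B, k, D)
    return torch.gather(vis_tokens, 1, idx)

model = TinyVLA()
patches = torch.randn(1, 16, 3 * 16 * 16)          # one observation split into 16 patches
instruction_ids = torch.randint(0, 1000, (1, 8))   # a short tokenized instruction
action = model(patches, instruction_ids)
print(action.shape)  # torch.Size([1, 7])
```

Pruning visual tokens before fusion shrinks the sequence length seen by the transformer, which is one of the levers (alongside architecture, action generation, and training/inference strategies) that the survey's taxonomy organizes.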
Community
Automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models (2026)
- Look Before Acting: Enhancing Vision Foundation Representations for Vision-Language-Action Models (2026)
- DIAL: Decoupling Intent and Action via Latent World Modeling for End-to-End VLA (2026)
- DA-PTQ: Drift-Aware Post-Training Quantization for Efficient Vision-Language-Action Models (2026)
- Latent World Models for Automated Driving: A Unified Taxonomy, Evaluation Framework, and Open Challenges (2026)
- History-Conditioned Spatio-Temporal Visual Token Pruning for Efficient Vision-Language Navigation (2026)
- MMaDA-VLA: Large Diffusion Vision-Language-Action Model with Unified Multi-Modal Instruction and Generation (2026)