Open-vocabulary semantic segmentation (OVSS) underpins many vision and robotics tasks that require generalizable semantic understanding. Existing approaches either rely on limited segmentation training data, which hinders generalization, or apply zero-shot heuristics on top of vision-language models (e.g., CLIP); the most competitive approaches combine multiple models, improving performance at the cost of high computational and memory demands.
In this work, we leverage an overlooked agglomerative vision foundation model, RADIO, to improve zero-shot OVSS along three key axes simultaneously: mIoU, latency, and parameter efficiency. We present the first comprehensive study of RADIO for zero-shot OVSS and enhance its performance through self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement.
Our approach, RADSeg, achieves a 6-30% mIoU improvement in the base ViT class while being 3.95x faster and using 2.5x fewer parameters. Surprisingly, RADSeg-base (105M parameters) outperforms previous combinations of huge vision models (850-1350M parameters) in mIoU, achieving state-of-the-art accuracy at substantially lower computational and memory cost.
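To make the attention modification concrete, below is a minimal PyTorch sketch of one plausible reading of self-correlating recursive attention, where "self-correlating" is taken in the sense of query-query and key-key similarity (as in SCLIP-style correlative attention) and "recursive" as composing the attention map with itself. The function name, recursion scheme, and normalization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def self_correlating_recursive_attention(q, k, v, num_recursions=2):
    """Hypothetical sketch of SCRA (assumed formulation, not the paper's code).

    q, k, v: (..., N, d) projections from the final ViT block.
    Instead of the usual softmax(q @ k^T) map, this uses the q-q and k-k
    self-correlations, which tend to yield spatially coherent maps for
    dense prediction, then composes the map with itself to recursively
    reinforce mutually correlated tokens.
    """
    scale = q.shape[-1] ** -0.5
    attn = F.softmax(q @ q.transpose(-2, -1) * scale, dim=-1) \
         + F.softmax(k @ k.transpose(-2, -1) * scale, dim=-1)
    attn = 0.5 * attn  # average the two row-stochastic maps
    for _ in range(num_recursions - 1):
        # Composing row-stochastic matrices keeps rows normalized.
        attn = attn @ attn
    return attn @ v
```

A global-aggregation counterpart (SCGA) would plausibly apply a similar self-correlation when pooling image-level features; see the paper for the exact formulations of both modules.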
RADSeg leverages the RADIO foundation model and enhances it with Self-Correlating Recursive Attention (SCRA) and Self-Correlating Global Aggregation (SCGA) to achieve state-of-the-art zero-shot open-vocabulary semantic segmentation.
RADSeg achieves state-of-the-art performance on multiple 2D open-vocabulary semantic segmentation benchmarks, including ADE20K, COCO-Stuff, Cityscapes, Pascal VOC, and Pascal Context.
RADSeg extends naturally to 3D by generating open-vocabulary semantic maps from multi-view RGB-D sequences, demonstrating strong multi-view consistency and spatial understanding.
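As a rough illustration of how 2D predictions can be lifted to 3D, here is a minimal NumPy sketch of standard pinhole back-projection of per-pixel labels from one RGB-D frame into a world-frame semantic point cloud. All names are illustrative, and the paper's actual fusion strategy (e.g., aggregating logits across views) may differ.

```python
import numpy as np

def backproject_semantics(depth, labels, K, cam_to_world):
    """Lift per-pixel class labels from one RGB-D frame into world space.

    depth        : (H, W) metric depth in meters
    labels       : (H, W) class indices from the 2D segmentor
    K            : (3, 3) pinhole camera intrinsics
    cam_to_world : (4, 4) camera-to-world pose
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    # Invert the pinhole projection: x = (u - cx) * z / fx, etc.
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])  # (4, N) homogeneous
    pts_world = (cam_to_world @ pts_cam)[:3].T      # (N, 3)
    return pts_world, labels[valid]
```

Accumulating these per-frame clouds over a sequence (and, for instance, majority-voting labels per voxel) yields the kind of multi-view-consistent semantic map described above.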
@article{alama2025radseg,
  title={RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models},
  author={Alama, Omar and Jariwala, Darshil and Bhattacharya, Avigyan and Kim, Seungchan and Wang, Wenshan and Scherer, Sebastian},
  journal={arXiv preprint arXiv:2511.19704},
  year={2025}
}