RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models

1Carnegie Mellon University, 2IIIT Hyderabad
*Equal contribution
RADSeg teaser

RADSeg is a dense, language-aligned feature encoder that enables low-parameter, low-latency open-vocabulary semantic segmentation in 2D and 3D. By enhancing the spatial locality of RADIO features, RADSeg outperforms previous state-of-the-art methods in accuracy while remaining highly efficient.

Abstract

Open-vocabulary semantic segmentation (OVSS) underpins many vision and robotics tasks that require generalizable semantic understanding. Existing approaches either rely on limited segmentation training data, which hinders generalization, or apply zero-shot heuristics to vision-language models (e.g., CLIP); the most competitive approaches combine multiple models to improve performance, at the cost of high computational and memory demands.

In this work, we leverage an overlooked agglomerative vision foundation model, RADIO, to improve zero-shot OVSS along three key axes simultaneously: mIoU, latency, and parameter efficiency. We present the first comprehensive study of RADIO for zero-shot OVSS and enhance its performance through self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement.

Our approach, RADSeg, achieves a 6-30% mIoU improvement in the base ViT class while being 3.95x faster and using 2.5x fewer parameters. Surprisingly, RADSeg-base (105M parameters) outperforms previous combinations of huge vision models (850-1350M parameters) in mIoU, achieving state-of-the-art accuracy at substantially lower computational and memory cost.

Method Overview

RADSeg leverages the RADIO foundation model and enhances it with Self-Correlating Recursive Attention (SCRA) and Self-Correlating Global Aggregation (SCGA) to achieve state-of-the-art zero-shot open-vocabulary semantic segmentation.

RADSeg Method
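
The paper's exact SCRA/SCGA formulations are not reproduced on this page, but the following minimal sketch illustrates the general self-correlation idea used in zero-shot OVSS: the usual query-key attention of a late ViT block is replaced by a correlation of the patch tokens against themselves, so each patch aggregates information from spatially and semantically similar patches. Function names, the cosine-similarity choice, and the single-step formulation are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of a self-correlating attention step (an assumption for
# illustration, not the authors' SCRA implementation).
import torch
import torch.nn.functional as F

def self_correlating_attention(x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """x: (B, N, C) patch tokens from the backbone.

    Returns tokens re-aggregated by their own pairwise cosine similarity,
    standing in for the query-key attention of the final block.
    """
    xn = F.normalize(x, dim=-1)
    sim = torch.einsum("bnc,bmc->bnm", xn, xn)       # (B, N, N) self-correlation
    attn = F.softmax(sim / temperature, dim=-1)      # row-normalized attention weights
    return torch.einsum("bnm,bmc->bnc", attn, x)     # re-aggregate the tokens
```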

2D and 3D Segmentation Results

2D Open-Vocabulary Segmentation

RADSeg achieves state-of-the-art performance on multiple 2D open-vocabulary semantic segmentation benchmarks including ADE20K, COCO-Stuff, Cityscapes, Pascal VOC, and Pascal Context.

2D Segmentation Results
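
For context, the standard zero-shot OVSS inference recipe that such benchmarks assume is sketched below: score every dense, language-aligned visual feature against text embeddings of the class names and take the per-pixel argmax. This is a generic sketch under assumed tensor shapes; the exact RADSeg pipeline (e.g., its mask refinement step) may differ.

```python
# Minimal sketch of zero-shot open-vocabulary segmentation inference
# (generic recipe; shapes and names are assumptions for illustration).
import torch
import torch.nn.functional as F

@torch.no_grad()
def segment(dense_feats: torch.Tensor,   # (C, h, w) language-aligned patch features
            text_embeds: torch.Tensor,   # (K, C) one embedding per class prompt
            out_size: tuple[int, int]) -> torch.Tensor:
    C, h, w = dense_feats.shape
    feats = F.normalize(dense_feats.reshape(C, -1), dim=0)   # (C, h*w)
    texts = F.normalize(text_embeds, dim=-1)                 # (K, C)
    logits = (texts @ feats).reshape(-1, 1, h, w)            # (K, 1, h, w) class scores
    logits = F.interpolate(logits.transpose(0, 1), size=out_size,
                           mode="bilinear", align_corners=False)  # (1, K, H, W)
    return logits.argmax(dim=1)[0]                           # (H, W) class index map
```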

3D Open-Vocabulary Semantic Mapping

RADSeg extends naturally to 3D by generating open-vocabulary semantic maps from multi-view RGB-D sequences, demonstrating strong multi-view consistency and spatial understanding.

3D Semantic Segmentation Results
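
As a rough illustration of how per-frame predictions can be lifted into such a 3D map, the sketch below back-projects labeled pixels using a depth map, camera intrinsics, and a camera-to-world pose. This is a generic unprojection step assumed for illustration, not necessarily the authors' exact mapping pipeline.

```python
# Hedged sketch: lift per-pixel predictions from a posed RGB-D frame into
# world-frame 3D points (generic back-projection, assumed for illustration).
import numpy as np

def backproject_labels(depth: np.ndarray,    # (H, W) metric depth
                       labels: np.ndarray,   # (H, W) per-pixel class ids
                       K: np.ndarray,        # (3, 3) camera intrinsics
                       T_wc: np.ndarray):    # (4, 4) camera-to-world pose
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0                                         # keep pixels with valid depth
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]                    # unproject with pinhole model
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)    # (4, N) homogeneous points
    pts_world = (T_wc @ pts_cam)[:3].T                        # (N, 3) world coordinates
    return pts_world, labels[valid]                           # labeled point cloud
```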

BibTeX

@article{alama2025radseg,
  title={RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models},
  author={Alama, Omar and Jariwala, Darshil and Bhattacharya, Avigyan and Kim, Seungchan and Wang, Wenshan and Scherer, Sebastian},
  journal={arXiv preprint arXiv:2511.19704},
  year={2025}
}