Demo scenes: #0345, #0099, #0565, #0107.
The 3D occupancy prediction task has witnessed remarkable progress in recent years and plays a crucial role in vision-based autonomous driving systems. While traditional methods are limited to fixed semantic categories, recent approaches have moved towards predicting text-aligned features to enable open-vocabulary text queries in real-world scenes. However, text-aligned scene modeling faces a trade-off: sparse Gaussian representations struggle to capture small objects in the scene, while dense representations incur significant computational overhead. To address these limitations, we present PG-Occ, an innovative Progressive Gaussian Transformer framework that enables open-vocabulary 3D occupancy prediction. Our framework employs progressive online densification, a feed-forward strategy that gradually enriches the 3D Gaussian representation to capture fine-grained scene details; by iteratively refining the representation, the framework achieves increasingly precise and detailed scene understanding. Another key contribution is an anisotropy-aware sampling strategy with spatio-temporal fusion, which adaptively assigns receptive fields to Gaussians at different scales and stages, enabling more effective feature aggregation and richer scene information capture. Through extensive evaluations, we demonstrate that PG-Occ achieves state-of-the-art performance with a relative 14.3% mIoU improvement over the previous best-performing method.
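To make the idea of progressive online densification more concrete, the sketch below shows one hypothetical densification stage in PyTorch: the highest-scoring Gaussians are split into finer children, and each Gaussian's anisotropic per-axis scales would later determine its receptive field during feature sampling. The selection score, split rule, tensor shapes, and the `densify_step` function are our own illustrative assumptions, not the released PG-Occ implementation.

```python
# Minimal conceptual sketch of one progressive densification stage for a
# Gaussian scene representation. Illustrative only; the scoring, split rule,
# and shapes are assumptions, not the authors' exact method.
import torch
import torch.nn.functional as F


def densify_step(means, scales, feats, scores, top_ratio=0.25, offset=0.5):
    """Split the highest-scoring Gaussians into two finer children each.

    means:  (N, 3) Gaussian centers
    scales: (N, 3) anisotropic per-axis extents
    feats:  (N, C) text-aligned feature vectors carried by each Gaussian
    scores: (N,)   per-Gaussian refinement score (assumed given, e.g. an
                   error or uncertainty estimate from the previous stage)
    """
    k = max(1, int(top_ratio * means.shape[0]))
    idx = torch.topk(scores, k).indices                  # Gaussians to refine

    # Offset the two children along each parent's dominant (largest-scale)
    # axis so the smaller children still cover the parent's extent.
    parent_means, parent_scales = means[idx], scales[idx]
    dominant = F.one_hot(parent_scales.argmax(dim=1), num_classes=3).float()
    shift = offset * parent_scales * dominant

    child_means = torch.cat([parent_means + shift, parent_means - shift], dim=0)
    child_scales = (0.5 * parent_scales).repeat(2, 1)    # finer receptive fields
    child_feats = feats[idx].repeat(2, 1)                # inherit parent features

    # Keep all existing Gaussians and append the new fine-grained children.
    return (torch.cat([means, child_means], dim=0),
            torch.cat([scales, child_scales], dim=0),
            torch.cat([feats, child_feats], dim=0))


# Usage: run a few progressive stages, re-scoring between stages.
means, scales = torch.randn(1024, 3), torch.rand(1024, 3) * 0.5
feats = torch.randn(1024, 512)
for _ in range(3):
    scores = feats.norm(dim=1)    # placeholder score, for illustration only
    means, scales, feats = densify_step(means, scales, feats, scores)
```

In the actual framework the refinement signal would come from the feed-forward transformer stages rather than the placeholder norm used here; the sketch only conveys how a Gaussian set can be grown online to capture progressively finer detail.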
We showcase third-person view comparisons between PG-Occ predictions and Ground Truth for scenes #0099, #0770, and #0557.
We provide a comparison on scene #0103 between the GaussTR baseline, our method (PG-Occ), and the Ground Truth. The video highlights the qualitative differences between these approaches and the improvements achieved by PG-Occ over previous state-of-the-art methods.
@article{yan2025pgocc,
  title={Progressive Gaussian Transformer with Anisotropy-aware Sampling for Open Vocabulary Occupancy Prediction},
  author={Yan, Chi and Xu, Dan},
  journal={arXiv preprint arXiv:2510.04759},
  year={2025}
}