Existing 3D vision-language (3D-VL) benchmarks fall short in evaluating 3D-VL models,
creating a "mist" that obscures rigorous insights into model capabilities and 3D-VL tasks.
This mist persists due to three key limitations.
First, flawed test data, e.g., ambiguous referential text in the grounding task, can yield incorrect and unreliable test results.
Second, oversimplified metrics, such as averaging accuracy over question answering (QA) pairs, cannot reveal true model capability because they are vulnerable to language variations.
Third, existing benchmarks evaluate the grounding and QA tasks in isolation, disregarding the underlying coherence between them, namely that reliable QA should rest on solid grounding capability.
To lift the "mist", we propose Beacon3D, a benchmark for 3D-VL grounding and QA tasks, delivering a perspective shift in the evaluation of 3D-VL understanding.
Beacon3D features (i) high-quality test data with precise and natural language,
(ii) object-centric evaluation with multiple tests per object to ensure robustness,
and (iii) a novel chain-of-analysis paradigm to address language robustness and model performance coherence across grounding and QA.
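To make the contrast with per-QA averaging concrete, the following minimal Python sketch illustrates one plausible object-centric aggregation; it is our own illustration, not the released Beacon3D evaluation code, and the field names (object_id, correct) as well as the all-tests-correct criterion are assumptions.

```python
from collections import defaultdict

def per_qa_accuracy(results):
    """Conventional metric: average correctness over all QA pairs."""
    return sum(r["correct"] for r in results) / len(results)

def object_centric_accuracy(results):
    """Object-centric scoring (sketch): an object counts as passed only if
    every test attached to it is answered correctly."""
    by_object = defaultdict(list)
    for r in results:
        by_object[r["object_id"]].append(r["correct"])
    passed = sum(all(answers) for answers in by_object.values())
    return passed / len(by_object)

# Toy example: one object probed with three paraphrased questions.
results = [
    {"object_id": "chair_01", "correct": True},
    {"object_id": "chair_01", "correct": True},
    {"object_id": "chair_01", "correct": False},
    {"object_id": "table_02", "correct": True},
]
print(per_qa_accuracy(results))         # 0.75 under per-QA averaging
print(object_centric_accuracy(results)) # 0.50 under object-centric scoring
```

Under such an aggregation, a model that answers most but not all questions about an object receives no credit for that object, which is why per-QA averaging can overstate capability.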
Our evaluation of state-of-the-art 3D-VL models on Beacon3D reveals that (i) object-centric evaluation exposes true model performance, in particular weak generalization in QA; (ii) grounding-QA coherence remains fragile in current 3D-VL models; and (iii) incorporating large language models (LLMs) into 3D-VL models, though commonly viewed as a practical technique, hinders grounding capabilities and has yet to elevate QA capabilities.
We hope Beacon3D and our comprehensive analysis will benefit the 3D-VL community and guide it toward faithful development of 3D-VL models.