Abstract and 1. Introduction
2. Related Works
2.1. Vision-and-Language Navigation
2.2. Semantic Scene Understanding and Instance Segmentation
2.3. 3D Scene Reconstruction
3. Methodology
3.1. Data Collection
3.2. Open-set Semantic Information from Images
3.3. Creating the Open-set 3D Representation
3.4. Language-Guided Navigation
4. Experiments
4.1. Quantitative Evaluation
4.2. Qualitative Results
5. Conclusion and Future Work, Disclosure Statement, and References
Semantic scene understanding and instance segmentation of 3D scenes have been thoroughly explored using closed-set vocabulary methods, including our prior work [1], which uses Mask2Former [7] for image segmentation. Several studies [18, 19, 20] adopt a similar approach to object segmentation, resulting in closed-set frameworks. While effective, these methods are constrained to a predefined set of object categories. Our approach instead employs SAM [21] to obtain segmentation masks for open-set detection. Moreover, unlike many existing techniques that depend heavily on extensive pre-training or fine-tuning, our methodology integrates these models as-is to build a more comprehensive and adaptable 3D scene representation, emphasizing enhanced semantic understanding and spatial awareness.
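\ For illustration, the snippet below is a minimal sketch of class-agnostic, open-set mask generation with SAM, assuming the official `segment-anything` package and the public ViT-H checkpoint; the file names are placeholders rather than the paper's exact configuration.

```python
# Minimal sketch: class-agnostic, open-set mask generation with SAM.
# Assumes `pip install segment-anything opencv-python` plus the public
# ViT-H checkpoint; "frame.png" is a placeholder input image.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB uint8 array; OpenCV loads BGR, so convert first.
image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each entry carries a binary "segmentation" mask plus its "bbox" and
# "area", with no class label attached -- semantics are added afterwards.
print(len(masks), masks[0]["bbox"])
```

\ Because these masks carry no category labels, open-set semantics must be attached in a separate step, for example by embedding each masked region with a vision-language model as described next.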
\ To improve the semantic understanding of the objects detected within our images, we harness detailed feature representations from two foundation models: CLIP [9] and DINOv2 [10]. DINOv2, a Vision Transformer trained through self-supervision, recognizes pixel-level correspondences between images and captures spatial hierarchies. In particular, DINOv2 distinguishes between two distinct instances of the same object type more effectively than CLIP, for which this remains challenging.
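\ As a rough sketch of this step, the snippet below embeds a cropped object region with both models; the crop-and-embed strategy, model sizes, and the `embed_crop` helper are our own illustrative assumptions, not necessarily the paper's exact procedure.

```python
# Hedged sketch: per-object CLIP and DINOv2 embeddings from a masked crop.
# Assumes `pip install torch torchvision` plus OpenAI CLIP
# (`pip install git+https://github.com/openai/CLIP.git`) and torch.hub access.
import torch
import torchvision.transforms as T
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)
dino_model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").to(device).eval()

# DINOv2 uses 14-pixel patches, so resize to a multiple of 14 (224 = 16 * 14).
dino_preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed_crop(crop: Image.Image):
    """Return (CLIP, DINOv2) feature vectors for one object crop."""
    clip_feat = clip_model.encode_image(clip_preprocess(crop).unsqueeze(0).to(device))
    dino_feat = dino_model(dino_preprocess(crop).unsqueeze(0).to(device))
    return clip_feat.squeeze(0), dino_feat.squeeze(0)

clip_f, dino_f = embed_crop(Image.open("chair_crop.png").convert("RGB"))
```

\ The CLIP vector supports open-vocabulary text queries (e.g. "a wooden chair"), while the DINOv2 vector is better suited to telling two chairs apart, matching the complementary roles described above.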
\ After objects are semantically identified, it is crucial to differentiate individual instances. Early methods employed a Region Proposal Network (RPN) to predict bounding boxes for these instances [22], while other strategies propose a generalized architecture for panoptic segmentation [23]. In our preceding approach, we used the segmentation model Mask2Former [7], which employs an attention mechanism to isolate object-centric features. Recent research also tackles open-vocabulary semantic scene understanding [24], using multi-view fusion and 3D convolutions to derive dense features from an open-vocabulary embedding space for precise semantic segmentation. Our current pipeline leverages Grounding DINO [25] to generate bounding boxes, which are then passed to the Segment Anything Model (SAM) [21] to produce per-object masks, enabling instance segmentation within the scene.
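\ A minimal sketch of this two-stage step follows, assuming the public `GroundingDINO` and `segment-anything` packages on a CUDA machine; the caption prompt, thresholds, and checkpoint paths are placeholders, not the paper's exact settings.

```python
# Hedged sketch: text-prompted boxes from Grounding DINO, refined into
# per-instance masks by SAM. Paths and the caption are placeholders.
import torch
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

grounder = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("frame.png")  # (RGB numpy array, model tensor)

# Grounding DINO returns boxes normalized in (cx, cy, w, h) format.
boxes, logits, phrases = predict(
    model=grounder,
    image=image,
    caption="chair . table . door",
    box_threshold=0.35,
    text_threshold=0.25,
)

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
predictor = SamPredictor(sam)
predictor.set_image(image_source)

# Convert to absolute (x1, y1, x2, y2) pixel coordinates for SAM.
h, w, _ = image_source.shape
boxes_xyxy = boxes * torch.tensor([w, h, w, h])
boxes_xyxy[:, :2] -= boxes_xyxy[:, 2:] / 2
boxes_xyxy[:, 2:] += boxes_xyxy[:, :2]

transformed = predictor.transform.apply_boxes_torch(boxes_xyxy.to("cuda"), image_source.shape[:2])
masks, _, _ = predictor.predict_torch(
    point_coords=None,
    point_labels=None,
    boxes=transformed,
    multimask_output=False,
)  # one binary mask per detected instance, paired with its `phrases` label
```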
\
:::info Authors:
(1) Laksh Nanwani, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;
(2) Kumaraditya Gupta, International Institute of Information Technology, Hyderabad, India;
(3) Aditya Mathur, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;
(4) Swayam Agrawal, International Institute of Information Technology, Hyderabad, India;
(5) A.H. Abdul Hafez, Hasan Kalyoncu University, Sahinbey, Gaziantep, Turkey;
(6) K. Madhava Krishna, International Institute of Information Technology, Hyderabad, India.
:::
:::info This paper is available on arXiv under the CC BY-SA 4.0 Deed (Attribution-ShareAlike 4.0 International) license.
:::