STMI: Segmentation-Guided Token Modulation with Cross-Modal Hypergraph Interaction for Multi-Modal Object Re-Identification
Abstract
A novel multi-modal learning framework for object ReID that enhances foreground representations, extracts compact features through token reallocation, and captures high-order semantic relationships via cross-modal hypergraph interaction.
Multi-modal object Re-Identification (ReID) aims to exploit complementary information from different modalities to retrieve specific objects. However, existing methods often rely on hard token filtering or simple fusion strategies, which can lead to the loss of discriminative cues and increased background interference. To address these challenges, we propose STMI, a novel multi-modal learning framework consisting of three key components: (1) a Segmentation-Guided Feature Modulation (SFM) module, which leverages SAM-generated masks to enhance foreground representations and suppress background noise through learnable attention modulation; (2) a Semantic Token Reallocation (STR) module, which employs learnable query tokens and an adaptive reallocation mechanism to extract compact, informative representations without discarding any tokens; and (3) a Cross-Modal Hypergraph Interaction (CHI) module, which constructs a unified hypergraph across modalities to capture high-order semantic relationships. Extensive experiments on public benchmarks (i.e., RGBNT201, RGBNT100, and MSVR310) demonstrate the effectiveness and robustness of the proposed STMI framework in multi-modal ReID scenarios.
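The abstract does not spell out how SFM injects the SAM masks into attention, but the idea can be pictured as a standard multi-head attention layer whose logits receive a mask-dependent, learnable bias. The sketch below is a minimal PyTorch illustration under our own assumptions: the class name `SegGuidedAttention`, the pooling of the SAM mask to a per-patch foreground score, and the `alpha * (fg_mask - 0.5)` bias form are not from the paper.

```python
import torch
import torch.nn as nn

class SegGuidedAttention(nn.Module):
    """Minimal sketch of segmentation-guided attention modulation.

    A patch-level foreground score (e.g., a SAM mask average-pooled to the
    patch grid) adds a learnable bias to the attention logits, boosting
    foreground keys and suppressing background ones. The bias form and all
    names here are assumptions, not STMI's actual definition.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Initialized to zero: the layer starts as plain self-attention and
        # learns how strongly to trust the segmentation mask.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, fg_mask: torch.Tensor) -> torch.Tensor:
        # x: [B, N, D] patch tokens; fg_mask: [B, N], foreground score in [0, 1].
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                  # each [B, H, N, d]
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        # Background keys (fg_mask < 0.5) receive a negative offset.
        bias = self.alpha * (fg_mask - 0.5).view(B, 1, 1, N)
        attn = (logits + bias).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)
```

Because `alpha` starts at zero, this modulation is a strict superset of vanilla attention: the network can learn to ignore an unreliable mask rather than being forced to obey it.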
Community
STMI tackles RGB/NIR/TIR ReID by injecting SAM masks into attention (SFM), replacing hard token filtering with learnable query-based redistribution (STR), and modeling higher-order cross-modal relations via a unified hypergraph (CHI), achieving strong gains on RGBNT201/100 and MSVR310.
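To make the STR and CHI mechanics concrete, here is a hedged PyTorch sketch of both ideas. Everything below (the class names, the Perceiver/Q-Former-style query pooling for STR, and the learned soft incidence with HGNN-style propagation for CHI) is our reading of the summary above, not code from the paper.

```python
import torch
import torch.nn as nn

class TokenReallocation(nn.Module):
    """STR-style sketch: a small set of learnable queries cross-attends over
    ALL patch tokens (Perceiver/Q-Former flavor), so compaction happens by
    soft redistribution rather than by discarding tokens. The query count
    and normalization are illustrative choices."""

    def __init__(self, dim: int, num_queries: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [B, N, D] -> compact summary: [B, num_queries, D]
        q = self.queries.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)   # every token contributes
        return self.norm(pooled + q)

class CrossModalHypergraph(nn.Module):
    """CHI-style sketch: tokens from all modalities become nodes of one
    hypergraph; soft hyperedge memberships are predicted per node and
    features propagate as Dv^-1/2 H De^-1 H^T Dv^-1/2 X W (HGNN-style).
    The paper does not specify its hyperedge construction, so the learned
    soft incidence below is purely an assumption."""

    def __init__(self, dim: int, num_edges: int = 32):
        super().__init__()
        self.edge_proj = nn.Linear(dim, num_edges)  # node -> hyperedge weights
        self.theta = nn.Linear(dim, dim)

    def forward(self, rgb, nir, tir):
        x = torch.cat([rgb, nir, tir], dim=1)        # [B, 3N, D] nodes
        H = self.edge_proj(x).sigmoid()              # [B, 3N, E] soft incidence
        dv = H.sum(dim=-1, keepdim=True)             # node degrees  [B, 3N, 1]
        de = H.sum(dim=-2, keepdim=True)             # edge degrees  [B, 1, E]
        Hn = H / dv.sqrt()                           # Dv^-1/2 H
        msg = (Hn / de) @ (Hn.transpose(1, 2) @ self.theta(x))
        return x + msg                               # high-order residual update
```

A natural wiring would pass each modality's tokens through `TokenReallocation` first, then feed the three compact token sets to `CrossModalHypergraph`; the output can be split back per modality with `torch.split` if modality-specific heads follow.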
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Learning Language-Driven Sequence-Level Modal-Invariant Representations for Video-Based Visible-Infrared Person Re-Identification (2026)
- RAGTrack: Language-aware RGBT Tracking with Retrieval-Augmented Generation (2026)
- TIGaussian: Disentangle Gaussians for Spatial-Awared Text-Image-3D Alignment (2026)
- MyGram: Modality-aware Graph Transformer with Global Distribution for Multi-modal Entity Alignment (2026)
- LoGoSeg: Integrating Local and Global Features for Open-Vocabulary Semantic Segmentation (2026)
- Context Patch Fusion With Class Token Enhancement for Weakly Supervised Semantic Segmentation (2026)
- M2I2HA: Multi-modal Object Detection Based on Intra- and Inter-Modal Hypergraph Attention (2026)