Applied Sciences, Vol. 13, Pages 11876: PMG—Pyramidal Multi-Granular Matching for Text-Based Person Re-Identification

Applied Sciences doi: 10.3390/app132111876

Authors: Chao Liu, Jingyi Xue, Zijie Wang, Aichun Zhu

Given a textual query, text-based person re-identification aims to retrieve the targeted pedestrian images from a large-scale visual database. Due to the inherent heterogeneity between modalities, it is challenging to measure the cross-modal affinity between visual and textual data. Existing works typically employ single-granular methods to extract local features and align image regions with relevant words/phrases. Nevertheless, the limited robustness of single-granular methods cannot adapt to the imprecision and variance of visual and textual features, which are usually influenced by background clutter, position transformation, posture diversity, and occlusion in surveillance videos, leading to degraded cross-modal matching accuracy. In this paper, we propose a Pyramidal Multi-Granular matching network (PMG) that incorporates a gradual transition between the coarsest global information and the finest local information through a coarse-to-fine pyramidal method for multi-granular cross-modal feature extraction and affinity learning. For each body part of a pedestrian, PMG preserves the integrity of local information while minimizing surrounding interference signals at a given scale; it can thus capture discriminative signals of different body parts and achieve semantic alignment between image strips and the relevant textual descriptions, suppressing the variance of feature extraction and improving the robustness of feature matching. Comprehensive experiments are conducted on the CUHK-PEDES and RSTPReid datasets to validate the effectiveness of the proposed method; the results show that PMG significantly outperforms state-of-the-art (SOTA) methods and yields competitive cross-modal retrieval accuracy.
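The abstract does not spell out how the pyramid is built, but the coarse-to-fine, strip-based partitioning it describes can be sketched roughly as follows. The snippet below is a minimal, hypothetical illustration, not the paper's actual implementation: the class name `PyramidalStripPooling`, the doubling strip counts per level, and the use of average pooling are all assumptions. A backbone feature map is pooled into 1, 2, and 4 horizontal strips, giving one global descriptor at the coarsest level and progressively finer body-part descriptors below it.

```python
import torch
import torch.nn as nn
from typing import List


class PyramidalStripPooling(nn.Module):
    """Pool a visual feature map into horizontal strips at several
    granularities (hypothetical sketch: 2**level strips per level)."""

    def __init__(self, num_levels: int = 3):
        super().__init__()
        self.num_levels = num_levels

    def forward(self, feat: torch.Tensor) -> List[torch.Tensor]:
        # feat: (B, C, H, W) feature map from a visual backbone.
        outputs = []
        for level in range(self.num_levels):
            n_strips = 2 ** level  # coarsest level: 1 strip (global); finer levels: 2, 4, ...
            # Average-pool each horizontal strip to a single C-dim vector.
            pooled = nn.functional.adaptive_avg_pool2d(feat, (n_strips, 1))
            outputs.append(pooled.squeeze(-1).transpose(1, 2))  # (B, n_strips, C)
        return outputs


if __name__ == "__main__":
    feats = torch.randn(2, 512, 24, 8)  # dummy backbone output
    pyramid = PyramidalStripPooling(num_levels=3)(feats)
    for level, parts in enumerate(pyramid):
        print(f"level {level}: {tuple(parts.shape)}")  # (2, 1, 512), (2, 2, 512), (2, 4, 512)
```

At matching time, each pyramid level would be compared against textual features of a corresponding granularity (e.g., sentence, phrase, word), which is the alignment the abstract describes; that cross-modal affinity step is omitted from this sketch.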
