Fine-grained deep hashing image retrieval method based on multi-level feature extraction

Abstract: In the field of fine-grained image retrieval, existing work has focused mainly on deep networks for discriminative feature extraction and precise localization, overlooking the importance of shallow feature information and failing to suppress complex background noise, which limits retrieval performance. To address this, a Fine-grained Deep Hashing image retrieval method based on Multi-level Feature Extraction (FDH-MFE) is proposed. The method focuses on the correlations between features at different levels and strengthens local feature extraction. First, a feature extraction module is proposed that extracts fine-grained features from different stages of the network and uses a graph neural network to reveal their latent long-range dependencies, providing more comprehensive and refined feature representations for subsequent stages. Second, a proxy loss is designed to make the hash-code distribution more uniform, thereby improving the discriminability of fine-grained features. Finally, a background suppression algorithm combined with a triplet loss strengthens the model's ability to fit the global distribution, enabling strong performance on fine-grained image retrieval tasks. Experiments show that the method improves average retrieval precision over the second-best method by 15.03%, 10.94%, 9.98%, and 9.78% on four public datasets, respectively.
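The hashing-retrieval pipeline the abstract describes — real-valued embeddings binarized into hash codes, a database ranked by Hamming distance, and training driven in part by a triplet loss — can be sketched generically as below. This is a minimal illustration with made-up feature vectors, not the FDH-MFE implementation: the multi-level feature extractor, graph neural network, proxy loss, and background suppression module are all omitted.

```python
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Map real-valued embeddings to binary hash codes via the sign function."""
    return (features > 0).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise Hamming distances between code sets a (n, k) and b (m, k)."""
    return (a[:, None, :] != b[None, :, :]).sum(axis=2)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet loss on continuous embeddings (before binarization):
    pull the positive closer to the anchor than the negative, by a margin."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Retrieval: rank database codes by Hamming distance to the query code.
# Feature vectors below are illustrative values, not real network outputs.
query = binarize(np.array([[0.3, -1.2, 0.7, 0.1]]))
db = binarize(np.array([[0.5, -0.4, 0.2, 0.9],
                        [-0.1, 0.8, -0.3, -0.6]]))
ranking = np.argsort(hamming_distance(query, db), axis=1)  # nearest first
```

At retrieval time only the cheap bitwise Hamming comparison is needed, which is the usual motivation for hashing-based methods on large fine-grained databases; the learning objectives (here, the triplet loss) shape the embeddings before binarization.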
