HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs

Fangyu Liu, Rongtian Ye, Xun Wang, Shuaipeng Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


The hubness problem widely exists in high-dimensional embedding spaces and is a fundamental source of error in cross-modal matching tasks. In this work, we study the emergence of hubs in Visual Semantic Embeddings (VSE) with application to text-image matching. We analyze the pros and cons of two widely adopted optimization objectives for training VSE and propose a novel hubness-aware loss function (HAL) that addresses the defects of previous methods. Unlike (Faghri et al. 2018), which simply takes the hardest sample within a mini-batch, HAL takes all samples into account, using both local and global statistics to scale up the weights of "hubs". We evaluate our method across various configurations of model architectures and datasets. The method exhibits exceptionally good robustness and brings consistent improvement on the task of text-image matching across all settings. Specifically, under the same model architectures as (Faghri et al. 2018) and (Lee et al. 2018), by switching only the learning objective, we report a maximum R@1 improvement of 7.4% on MS-COCO and 8.3% on Flickr30k.
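The contrast the abstract draws between a hardest-negative objective and an all-sample weighted objective can be sketched as follows. This is a minimal illustrative sketch: the temperature `tau` and the exact log-sum-exp weighting are assumptions for exposition, not the paper's precise HAL formulation.

```python
import numpy as np

def hardest_negative_loss(sim, margin=0.2):
    """Triplet loss using only the hardest in-batch negative
    (the VSE++-style objective of Faghri et al. 2018).

    sim: (n, n) similarity matrix between n images and n captions;
    diagonal entries are the positive (matching) pairs.
    """
    pos = np.diag(sim)
    neg = sim.copy()
    np.fill_diagonal(neg, -np.inf)      # mask out positives
    hardest = neg.max(axis=1)           # single hardest negative per row
    return np.maximum(0.0, margin + hardest - pos).mean()

def soft_all_negative_loss(sim, margin=0.2, tau=10.0):
    """Illustrative all-sample loss: every negative contributes, with
    exponentially larger weight for harder ones via log-sum-exp.
    Negatives that are retrieved too often ("hubs") score high against
    many queries, so they dominate the sum and get up-weighted.
    Hypothetical sketch, NOT the exact HAL loss.
    """
    pos = np.diag(sim)
    neg = sim.copy()
    np.fill_diagonal(neg, -np.inf)      # exp(-inf) -> 0, so positives vanish
    lse = np.log(np.exp(tau * (margin + neg - pos[:, None])).sum(axis=1)) / tau
    return np.maximum(0.0, lse).mean()
```

With a single negative per row the two losses coincide; the difference appears once several near-hit negatives (hubs) exist in the batch, where the soft loss penalizes all of them instead of only the single hardest one.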
Original language: English
Title of host publication: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
ISBN (Electronic): 978-1-57735-823-7
Publication status: Published - 2020
MoE publication type: A4 Article in a conference publication
Event: AAAI Conference on Artificial Intelligence - New York, United States
Duration: 7 Feb 2020 – 12 Feb 2020
Conference number: 34


Conference: AAAI Conference on Artificial Intelligence
Abbreviated title: AAAI
Country/Territory: United States
City: New York


