Geometry-aware relational exemplar attention for dense captioning

Tzu Jui Julius Wang, Hamed R. Tavakoli, Mats Sjöberg, Jorma Laaksonen

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

2 Citations (Scopus)
97 Downloads (Pure)

Abstract

Dense captioning (DC), which provides a comprehensive contextual understanding of images by describing all salient visual groundings in an image, facilitates multimodal understanding and learning. As an extension of image captioning, DC is developed to discover richer sets of visual content and to generate captions of wider diversity and increased detail. The state-of-the-art DC models consist of three stages: (1) region proposal, (2) region classification, and (3) caption generation for each proposal. They are typically built upon the following ideas: (a) guiding the caption generation with image-level features as context cues alongside regional features, and (b) refining the locations of region proposals with caption information. In this work, we propose (a) a joint visual-textual criterion exploited by the region classifier that further improves both region detection and caption accuracy, and (b) a Geometry-aware Relational Exemplar attention (GREatt) mechanism to relate region proposals. The former helps the model learn a region classifier by effectively exploiting both visual groundings and caption descriptions. Rather than treating each region proposal in isolation, the latter relates regions through complementary relations, i.e., contextually dependent, visually supported, and geometric relations, to enrich the context information in regional representations. We conduct an extensive set of experiments and demonstrate that our proposed model improves the state-of-the-art by at least +5.3% in terms of mean average precision on the Visual Genome dataset.
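To make the geometry-relation idea in the abstract concrete, the sketch below shows one common way such a mechanism can be realized: log-scale relative box offsets between every pair of region proposals are turned into a bias that is added to ordinary content-based attention scores, so that each regional feature is enriched with context from geometrically related regions. This is a minimal illustration in relation-network style, not the paper's exact GREatt formulation; all function names and the scalar geometry weights `w_g` are assumptions for the example.

```python
import numpy as np

def box_geometry_features(boxes):
    """Pairwise log-scale relative geometry features.

    boxes: (N, 4) array of [x, y, w, h] proposal boxes.
    Returns an (N, N, 4) array of relative offsets and size ratios,
    a standard encoding for geometry-aware attention.
    """
    x, y, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    # Clamp absolute offsets away from zero before taking the log.
    dx = np.log(np.maximum(np.abs(x[:, None] - x[None, :]), 1e-3) / w[:, None])
    dy = np.log(np.maximum(np.abs(y[:, None] - y[None, :]), 1e-3) / h[:, None])
    dw = np.log(w[None, :] / w[:, None])
    dh = np.log(h[None, :] / h[:, None])
    return np.stack([dx, dy, dw, dh], axis=-1)

def geometry_aware_attention(queries, keys, values, boxes, w_g):
    """Scaled dot-product attention with an additive geometry bias.

    queries, keys, values: (N, d) regional features.
    boxes: (N, 4) proposal boxes.
    w_g: (4,) geometry projection weights (a stand-in for a learned
         geometry embedding in a trained model).
    Returns (N, d) relation-enriched regional features.
    """
    d = queries.shape[-1]
    content = queries @ keys.T / np.sqrt(d)       # (N, N) appearance scores
    geom = box_geometry_features(boxes) @ w_g     # (N, N) geometry bias
    logits = content + np.maximum(geom, 0.0)      # gate out negative geometry scores
    # Row-wise softmax over the relation scores.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values
```

In a full dense-captioning pipeline these enriched features would replace the isolated per-region features fed to the region classifier and caption decoder; here the example only demonstrates the attention computation itself.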

Original language: English
Title of host publication: MULEA 2019 - 1st International Workshop on Multimodal Understanding and Learning for Embodied Applications, co-located with MM 2019
Publisher: ACM
Pages: 3-11
Number of pages: 9
ISBN (Electronic): 9781450369183
DOIs
Publication status: Published - 15 Oct 2019
MoE publication type: A4 Conference publication
Event: International Workshop on Multimodal Understanding and Learning for Embodied Applications - Nice, France
Duration: 25 Oct 2019 - 25 Oct 2019
Conference number: 1

Workshop

Workshop: International Workshop on Multimodal Understanding and Learning for Embodied Applications
Abbreviated title: MULEA
Country/Territory: France
City: Nice
Period: 25/10/2019 - 25/10/2019

Keywords

  • Attention
  • Dense captioning
  • Relationship modeling
