Paying Attention to Descriptions Generated by Image Captioning Models

Hamed Rezazadegan Tavakoli, Rakshith Shetty, Ali Borji, Jorma Laaksonen

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

53 Citations (Scopus)
300 Downloads (Pure)

Abstract

To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model to investigate the benefits of low-level cues in language models. We find that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better its attention agreement with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learned and tuned on a dataset, and (4) the saliency-boosted model nevertheless generalizes better to unseen data.
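The abstract mentions a saliency-boosted captioning model without detailing its architecture. The sketch below shows one plausible way such boosting could be wired up: reweighting a spatial CNN feature grid by a bottom-up saliency map before pooling it into the image embedding that conditions an LSTM caption decoder. This is a minimal illustration under stated assumptions, not the authors' implementation; all class names, dimensions, and the fusion scheme are illustrative.

```python
# Minimal sketch (assumed design, not the paper's architecture): reweight
# spatial CNN features by a bottom-up saliency map, pool, then decode.
import torch
import torch.nn as nn


class SaliencyBoostedEncoder(nn.Module):
    """Pools a CNN feature grid after scaling each location by its saliency."""

    def __init__(self, feat_dim=512, embed_dim=256):
        super().__init__()
        self.project = nn.Linear(feat_dim, embed_dim)

    def forward(self, features, saliency):
        # features: (B, H*W, feat_dim) spatial CNN features
        # saliency: (B, H*W) bottom-up saliency values, assumed non-negative
        weights = saliency / (saliency.sum(dim=1, keepdim=True) + 1e-8)
        pooled = (features * weights.unsqueeze(-1)).sum(dim=1)  # saliency-weighted pooling
        return self.project(pooled)  # (B, embed_dim) image embedding for the decoder


class CaptionDecoder(nn.Module):
    """Plain LSTM language model conditioned on the image embedding."""

    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_embed, tokens):
        # Prepend the image embedding as the first step of the input sequence.
        inputs = torch.cat([image_embed.unsqueeze(1), self.word_embed(tokens)], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)  # per-step vocabulary logits


if __name__ == "__main__":
    B, HW, F = 2, 49, 512            # batch, 7x7 spatial grid, feature dim
    feats = torch.randn(B, HW, F)    # stand-in for CNN features
    sal = torch.rand(B, HW)          # stand-in for a bottom-up saliency map
    caps = torch.randint(0, 1000, (B, 12))

    logits = CaptionDecoder()(SaliencyBoostedEncoder()(feats, sal), caps)
    print(logits.shape)              # torch.Size([2, 13, 1000])
```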
Original language: English
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: IEEE
Pages: 2506-2515
ISBN (Electronic): 978-1-5386-1032-9
DOIs
Publication status: Published - 2017
MoE publication type: A4 Conference publication
Event: IEEE International Conference on Computer Vision - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017

Publication series

Name: IEEE International Conference on Computer Vision
Publisher: IEEE
ISSN (Print): 1550-5499
ISSN (Electronic): 2380-7504

Conference

Conference: IEEE International Conference on Computer Vision
Abbreviated title: ICCV
Country/Territory: Italy
City: Venice
Period: 22/10/2017 - 29/10/2017

Keywords

  • Visualization
  • Measurement
  • Data models
  • Grammar
  • Computational modeling
  • Computer science

Projects
  • Finnish centre of excellence in computational inference research

    Xu, Y. (Project Member), Rintanen, J. (Project Member), Kaski, S. (Principal investigator), Anwer, R. (Project Member), Parviainen, P. (Project Member), Soare, M. (Project Member), Vuollekoski, H. (Project Member), Rezazadegan Tavakoli, H. (Project Member), Peltola, T. (Project Member), Blomstedt, P. (Project Member), Puranen, S. (Project Member), Dutta, R. (Project Member), Gebser, M. (Project Member), Mononen, T. (Project Member), Bogaerts, B. (Project Member), Tasharrofi, S. (Project Member), Pesonen, H. (Project Member), Weinzierl, A. (Project Member) & Yang, Z. (Project Member)

    01/01/2015 - 31/12/2017

    Project: Academy of Finland: Other research funding
