Image and Video Captioning with Augmented Neural Architectures

Rakshith Shetty, Hamed Rezazadegan Tavakoli, Jorma Laaksonen

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Neural-network-based image and video captioning can be substantially improved by utilizing architectures that make use of special features from the scene context, objects, and locations. A novel discriminatively trained evaluator network for choosing the best caption among those generated by an ensemble of caption generator networks further improves accuracy.
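The ensemble-plus-evaluator setup described in the abstract can be sketched in a few lines: several generator networks each propose a caption, and a separately trained evaluator scores each candidate against the image, with the highest-scoring caption returned. The sketch below is a toy illustration under stated assumptions; `evaluator_score`, its dot-product scoring, and all example data are hypothetical stand-ins, not the authors' actual implementation.

```python
# Hypothetical sketch: choosing the best caption from an ensemble
# with an evaluator network. All names and the scoring function are
# illustrative assumptions, not the paper's implementation.

def evaluator_score(image_features, caption):
    """Stand-in for a discriminatively trained evaluator: returns a
    compatibility score between image features and a caption.
    Here, a toy sum of per-word feature weights."""
    return sum(image_features.get(word, 0.0) for word in caption.split())

def select_best_caption(image_features, candidates):
    """Pick the candidate the evaluator scores highest, mirroring the
    ensemble-of-generators plus evaluator setup described above."""
    return max(candidates, key=lambda c: evaluator_score(image_features, c))

# Example: three candidate captions from an ensemble of generators
feats = {"dog": 1.0, "frisbee": 0.8, "park": 0.3}
caps = [
    "a cat on a sofa",
    "a dog catching a frisbee",
    "a dog in a park",
]
print(select_best_caption(feats, caps))  # → "a dog catching a frisbee"
```

In practice the evaluator would be a neural network trained discriminatively to separate matching image-caption pairs from mismatched ones; the selection logic, however, is exactly this argmax over candidate scores.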
Original language: English
Pages (from-to): 34-46
Number of pages: 13
Journal: IEEE Multimedia
Volume: 25
Issue number: 2
DOIs
Publication status: Published - 2018
MoE publication type: A1 Journal article-refereed

Keywords

  • computer vision
  • applications and expert knowledge-intensive systems
  • artificial intelligence
  • computing
  • deep learning
  • image captioning
  • recurrent networks

