Frame- and segment-level features and candidate pool evaluation for video caption generation

Rakshith Shetty, Jorma Laaksonen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

46 Citations (Scopus)

Abstract

We present our submission to the Microsoft Video to Language Challenge of generating short captions describing videos in the challenge dataset. Our model is based on the encoder-decoder pipeline, popular in image and video captioning systems. We propose to utilize two different kinds of video features, one to capture the video content in terms of objects and attributes, and the other to capture the motion and action information. Using these diverse features we train models specializing in two separate input sub-domains. We then train an evaluator model which is used to pick the best caption from the pool of candidates generated by these domain-expert models. We argue that this approach is better suited to the current video captioning task than using a single model, due to the diversity in the dataset. The efficacy of our method is demonstrated by the fact that it was rated best in the MSR Video to Language Challenge according to human evaluation. Additionally, we ranked second in the table based on automatic evaluation metrics.
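The candidate pool evaluation step can be summarized as: each domain-expert model proposes a caption, and a learned evaluator scores every (video, caption) pair so the highest-scoring candidate is returned. The following is a minimal Python sketch of that selection logic only, not the authors' implementation; the expert and evaluator interfaces shown here are hypothetical stand-ins.

# Minimal sketch of candidate pool evaluation, assuming hypothetical
# `experts` (caption generators) and `evaluator` (compatibility scorer).
from typing import Callable, List, Sequence

def pick_best_caption(
    video_features: Sequence[float],
    experts: List[Callable[[Sequence[float]], str]],
    evaluator: Callable[[Sequence[float], str], float],
) -> str:
    # Each domain-expert model generates one candidate caption.
    candidates = [expert(video_features) for expert in experts]
    # The evaluator picks the caption with the highest score for this video.
    return max(candidates, key=lambda caption: evaluator(video_features, caption))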

Original language: English
Title of host publication: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
Publisher: ACM
Pages: 1073-1076
Number of pages: 4
ISBN (Electronic): 9781450336031
DOIs
Publication status: Published - 1 Oct 2016
MoE publication type: A4 Article in a conference publication
Event: ACM Multimedia - Amsterdam, Netherlands
Duration: 15 Oct 2016 - 19 Oct 2016
Conference number: 24

Conference

Conference: ACM Multimedia
Abbreviated title: ACMMM
Country: Netherlands
City: Amsterdam
Period: 15/10/2016 - 19/10/2016