Frame- and segment-level features and candidate pool evaluation for video caption generation

Rakshith Shetty, Jorma Laaksonen

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

81 Citations (Scopus)


We present our submission to the Microsoft Video to Language Challenge of generating short captions describing videos in the challenge dataset. Our model is based on the encoder-decoder pipeline popular in image and video captioning systems. We propose to utilize two different kinds of video features: one to capture the video content in terms of objects and attributes, and the other to capture the motion and action information. Using these diverse features, we train models specializing in two separate input sub-domains. We then train an evaluator model which is used to pick the best caption from the pool of candidates generated by these domain expert models. We argue that, due to the diversity in the dataset, this approach is better suited to the current video captioning task than using a single model. The efficacy of our method is demonstrated by the fact that it was rated best in the MSR Video to Language Challenge according to human evaluation. Additionally, we ranked second on the automatic evaluation metrics.
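The candidate-pool evaluation step described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names and the scoring heuristic (tag/word overlap) are stand-ins for the learned evaluator model that scores each (video, caption) pair.

```python
# Hypothetical sketch of candidate pool evaluation: several "domain expert"
# captioners each propose a caption, and an evaluator scores each
# (video, caption) pair so the best candidate can be selected.
# All names and the scoring function are illustrative assumptions.

def evaluator_score(video_features, caption):
    # Stand-in for the learned evaluator: reward captions whose words
    # overlap with detected object/attribute tags of the video.
    tags = set(video_features.get("tags", []))
    words = set(caption.lower().split())
    return len(tags & words) / max(len(words), 1)

def pick_best_caption(video_features, candidate_captions):
    # Keep the candidate with the highest evaluator score.
    return max(candidate_captions,
               key=lambda c: evaluator_score(video_features, c))

video = {"tags": ["man", "guitar", "playing"]}
candidates = [
    "a man is playing a guitar",   # e.g. from the object/attribute expert
    "a person is dancing",         # e.g. from the motion/action expert
]
best = pick_best_caption(video, candidates)
print(best)  # → a man is playing a guitar
```

In the actual system the evaluator is a trained model rather than a hand-written heuristic, but the selection logic over the candidate pool has this shape.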

Original language: English
Title of host publication: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
Number of pages: 4
ISBN (Electronic): 9781450336031
Publication status: Published - 1 Oct 2016
MoE publication type: A4 Conference publication
Event: ACM Multimedia - Amsterdam, Netherlands
Duration: 15 Oct 2016 - 19 Oct 2016
Conference number: 24


Conference: ACM Multimedia
Abbreviated title: ACMMM
