Frame- and segment-level features and candidate pool evaluation for video caption generation

Rakshith Shetty, Jorma Laaksonen

Research output: Chapter in book / conference proceedings › Conference contribution › Scientific › peer-reviewed

57 Citations (Scopus)


We present our submission to the Microsoft Video to Language Challenge of generating short captions describing videos in the challenge dataset. Our model is based on the encoder-decoder pipeline popular in image and video captioning systems. We propose to utilize two different kinds of video features: one to capture the video content in terms of objects and attributes, and the other to capture the motion and action information. Using these diverse features, we train models specializing in two separate input sub-domains. We then train an evaluator model which picks the best caption from the pool of candidates generated by these domain-expert models. We argue that, due to the diversity in the dataset, this approach is better suited to the current video captioning task than using a single model. The efficacy of our method is demonstrated by the fact that it was rated best in the MSR Video to Language Challenge according to human evaluation. Additionally, we ranked second on the automatic evaluation metrics.
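The candidate-pool idea in the abstract can be sketched in a few lines: each domain-expert model proposes a caption, and a separate evaluator scores every (video, caption) pair so the highest-scoring candidate is kept. The sketch below is a hypothetical illustration only; all function names, the toy experts, and the toy scoring rule are assumptions, not the authors' actual models.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of candidate pool evaluation: several
# domain-expert captioning models each propose a caption, and an
# evaluator model scores each (video, caption) pair; the
# highest-scoring candidate is returned.

def pick_best_caption(
    video_features: Dict[str, str],
    experts: List[Callable[[Dict[str, str]], str]],
    evaluator: Callable[[Dict[str, str], str], float],
) -> str:
    """Generate one candidate per expert and return the candidate
    the evaluator scores highest."""
    candidates = [expert(video_features) for expert in experts]
    scores = [evaluator(video_features, c) for c in candidates]
    best_index = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best_index]

# Toy stand-ins for the two feature-specialised expert models
# (object/attribute features vs. motion/action features).
object_expert = lambda feats: "a man is holding a " + feats["object"]
motion_expert = lambda feats: "a man is " + feats["action"]

# Toy evaluator: rewards captions that mention the dominant cue.
def toy_evaluator(feats: Dict[str, str], caption: str) -> float:
    return float(feats["dominant_cue"] in caption)

feats = {"object": "guitar", "action": "playing a guitar",
         "dominant_cue": "playing"}
print(pick_best_caption(feats, [object_expert, motion_expert],
                        toy_evaluator))
# the motion expert's caption wins, since it contains "playing"
```

In the paper itself the evaluator is a trained model rather than a hand-written rule, but the selection logic over the candidate pool has this shape.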

Title: MM 2016 - Proceedings of the 2016 ACM Multimedia Conference
ISBN (electronic): 9781450336031
DOI - permanent links
Status: Published - 1 October 2016
OKM publication type: A4 Article in conference proceedings
Event: ACM Multimedia - Amsterdam, Netherlands
Duration: 15 October 2016 - 19 October 2016
Conference number: 24


Conference: ACM Multimedia

