Abstract

Dense video captioning (VC) aims to generate a paragraph-long description for the events in video segments. Building on their success in language modeling, Transformer-based models for VC have also proven effective at modeling cross-domain video-text representations with cross-attention (Xatt). Despite Xatt's effectiveness, its queries and outputs, which come from different domains, tend to be only weakly related. In this paper, we argue that this weak relatedness, or domain discrepancy, can impede a model from learning meaningful cross-domain representations. We therefore propose a simple yet effective Post-Attention Modulator (PAM) that post-processes Xatt's outputs to narrow the discrepancy. Specifically, PAM modulates and enhances the average similarity between Xatt's queries and outputs, and the modulated similarities are then used as a weighting basis to interpolate PAM's outputs. In our experiments, PAM was applied to two strong VC baselines, VTransformer and MART, with two different video features on the well-known VC benchmark datasets ActivityNet Captions and YouCookII. The results show that PAM brings consistent improvements, e.g., of up to 14.5% in CIDEr-D, as well as gains in the other metrics considered, BLEU and METEOR.
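The abstract compresses the PAM mechanism into two sentences; the following is a minimal sketch of one plausible reading, not the authors' implementation. The class name `PostAttentionModulator`, the learned `gate`, the sigmoid squashing, and interpolating back toward the queries are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PostAttentionModulator(nn.Module):
    """Hypothetical sketch in the spirit of the abstract: measure
    query/output similarity, modulate it, and use it as the weighting
    basis to interpolate the module's output."""

    def __init__(self) -> None:
        super().__init__()
        # Assumed: a learned affine map applied to the average similarity.
        self.gate = nn.Linear(1, 1)

    def forward(self, queries: torch.Tensor, xatt_out: torch.Tensor) -> torch.Tensor:
        # queries, xatt_out: (batch, seq_len, dim), from different domains.
        # Per-position cosine similarity between each query and its
        # cross-attention output.
        sim = F.cosine_similarity(queries, xatt_out, dim=-1)       # (B, L)
        # Average the similarity over the sequence, then modulate it
        # into an interpolation weight in (0, 1).
        avg_sim = sim.mean(dim=-1, keepdim=True)                   # (B, 1)
        weight = torch.sigmoid(self.gate(avg_sim)).unsqueeze(-1)   # (B, 1, 1)
        # Interpolate between the attention output and the query using
        # the modulated similarity as the weighting basis.
        return weight * xatt_out + (1.0 - weight) * queries


# Toy usage: text-side queries and video-to-text cross-attention
# outputs (shapes are illustrative).
pam = PostAttentionModulator()
q = torch.randn(2, 8, 512)
o = torch.randn(2, 8, 512)
out = pam(q, o)  # (2, 8, 512)
```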
Original language: English
Title of host publication: Proceedings of the 26th International Conference on Pattern Recognition (ICPR)
Publisher: IEEE
Pages: 1536-1542
ISBN (Electronic): 978-1-6654-9062-7
DOIs
Publication status: Published - 2022
MoE publication type: A4 Conference publication
Event: International Conference on Pattern Recognition - Montreal, Canada
Duration: 21 Aug 2022 - 25 Aug 2022
Conference number: 26

Publication series

Name: International Conference on Pattern Recognition
ISSN (Print): 1051-4651

Conference

Conference: International Conference on Pattern Recognition
Abbreviated title: ICPR
Country/Territory: Canada
City: Montreal
Period: 21/08/2022 - 25/08/2022
