Global Fusion Attention for Vision and Language Understanding

Zixin Guo*, Chen Liang, Ziyu Wan, Yang Bai

*Corresponding author for this work

Research output: Contribution to conference › Abstract › Scientific › peer-review

Abstract

We extend the popular transformer architecture to a multimodal model that processes both visual and textual inputs. We propose a new attention mechanism on top of the Transformer architecture for joint vision and language understanding tasks. Our model fuses multi-level comprehension between images and texts in a weighted manner, which better captures their internal relationships. Experiments on the benchmark VQA dataset CLEVR demonstrate the effectiveness of the proposed attention mechanism. We also observe improvements in the sample efficiency of reinforcement learning in experiments on the grounded language understanding tasks of the BabyAI platform.
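The abstract does not specify how the weighted multi-level fusion is implemented. The following is a minimal, hypothetical sketch of one plausible reading: per-level cross-attention from text queries to image keys/values, combined with learned scalar weights. The class name, layer choices, and hyperparameters are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GlobalFusionAttention(nn.Module):
    """Hypothetical sketch of weighted multi-level cross-modal fusion.

    Assumption: each "comprehension level" is a standard cross-attention
    block, and the levels are combined via softmax-normalised learned
    weights, matching the abstract's "weighted manner" at a high level.
    """

    def __init__(self, dim: int, num_heads: int = 8, num_levels: int = 3):
        super().__init__()
        # One cross-attention block per comprehension level (assumption).
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_levels)
        )
        # Learned scalar weight per level, used to fuse the outputs.
        self.level_weights = nn.Parameter(torch.ones(num_levels))

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text:  (batch, text_len, dim) used as queries
        # image: (batch, img_len, dim)  used as keys and values
        outputs = []
        for attn in self.cross_attn:
            fused, _ = attn(query=text, key=image, value=image)
            outputs.append(fused)
        # Softmax over the learned weights implements the weighted fusion.
        w = torch.softmax(self.level_weights, dim=0)
        return sum(wi * oi for wi, oi in zip(w, outputs))

# Usage sketch: fuse 16 text tokens with 49 image patches of width 256.
layer = GlobalFusionAttention(dim=256)
out = layer(torch.randn(2, 16, 256), torch.randn(2, 49, 256))
```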

Original language: English
Pages: 15789-15790
Number of pages: 2
Publication status: Published - 2021
MoE publication type: Not Eligible
Event: 35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence - Virtual, Online
Duration: 2 Feb 2021 – 9 Feb 2021

Conference

Conference: 35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence
City: Virtual, Online
Period: 02/02/2021 – 09/02/2021
