Abstract
We extend the popular Transformer architecture to a multimodal model that processes both visual and textual inputs. We propose a new attention mechanism on a Transformer-based architecture for joint vision-and-language understanding tasks. Our model fuses multi-level comprehension between images and texts in a weighted manner, which better captures their internal relationships. Experiments on the benchmark VQA dataset CLEVR demonstrate the effectiveness of the proposed attention mechanism. We also observe improvements in the sample efficiency of reinforcement learning in experiments on the grounded language understanding tasks of the BabyAI platform.
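The abstract does not spell out the fusion mechanism, so the following is only a minimal sketch of one plausible reading: cross-modal attention applied at several encoder levels, combined with learned, softmax-normalized scalar weights. The module name `WeightedMultiLevelFusion` and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightedMultiLevelFusion(nn.Module):
    """Hypothetical sketch: fuse cross-modal attention outputs from
    several encoder levels with learned per-level weights."""

    def __init__(self, dim: int, num_levels: int, num_heads: int = 8):
        super().__init__()
        # one cross-attention module per feature level (assumed design)
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_levels)
        )
        # one learnable fusion weight per level, normalized by softmax;
        # zeros initialize the fusion to a uniform average over levels
        self.level_weights = nn.Parameter(torch.zeros(num_levels))

    def forward(self, text_feats, image_feats_per_level):
        # text_feats: (B, T, dim); image_feats_per_level: list of (B, N_l, dim)
        weights = torch.softmax(self.level_weights, dim=0)
        fused = torch.zeros_like(text_feats)
        for w, attn, img in zip(weights, self.cross_attn, image_feats_per_level):
            # text tokens attend to image features at this level
            out, _ = attn(query=text_feats, key=img, value=img)
            fused = fused + w * out
        return fused

if __name__ == "__main__":
    text = torch.randn(2, 16, 256)                       # (batch, tokens, dim)
    imgs = [torch.randn(2, 49, 256) for _ in range(3)]   # 3 feature levels
    fused = WeightedMultiLevelFusion(256, num_levels=3)(text, imgs)
    print(fused.shape)  # torch.Size([2, 16, 256])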
Original language | English |
---|---|
Pages | 15789-15790 |
Number of pages | 2 |
Publication status | Published - 2021 |
MoE publication type | Not Eligible |
Event | 35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence - Virtual, Online |
Duration | 2 Feb 2021 → 9 Feb 2021 |
Conference
Conference | 35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence |
---|---|
City | Virtual, Online |
Period | 02/02/2021 → 09/02/2021 |