Understanding Speech and Scene with Ears and Eyes

Project Details


One of the biggest challenges in AI is developing computational abilities to understand speech and video scenes as effectively as humans do. This project aims to develop multimodal techniques for understanding and interpreting aural and visual inputs. These novel machine-learning techniques will first learn representations of visual stimuli and human speech at various levels of abstraction, and then learn cross-modal correlations between those representations. This can be achieved by devising new network structures and using diverse uni- and multimodal datasets to train the parts of the model first separately and then jointly. As a result, we expect the accuracy of speech recognition, visual description, and visual interpretation to improve.
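The two-stage idea described above, learning unimodal representations first and then aligning them across modalities, can be sketched with a small contrastive-alignment example. Everything here is illustrative: the feature dimensions, the linear "encoders", and the InfoNCE-style loss are assumptions for the sketch, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not specified in the project description.
AUDIO_DIM, VISUAL_DIM, SHARED_DIM = 40, 64, 16

# Stand-ins for unimodal encoders: random linear projections here;
# in practice each would be a separately pretrained network.
W_audio = rng.standard_normal((AUDIO_DIM, SHARED_DIM)) / np.sqrt(AUDIO_DIM)
W_visual = rng.standard_normal((VISUAL_DIM, SHARED_DIM)) / np.sqrt(VISUAL_DIM)

def encode(x, W):
    """Project a batch of features into the shared space and L2-normalise."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z_audio, z_visual, temperature=0.1):
    """InfoNCE-style loss: matching audio/visual pairs (the diagonal of the
    similarity matrix) should score higher than mismatched pairs."""
    logits = (z_audio @ z_visual.T) / temperature   # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # pull true pairs together

# A toy batch of paired audio and visual features.
batch = 8
audio_feats = rng.standard_normal((batch, AUDIO_DIM))
visual_feats = rng.standard_normal((batch, VISUAL_DIM))

loss = contrastive_loss(encode(audio_feats, W_audio),
                        encode(visual_feats, W_visual))
print(f"cross-modal alignment loss: {loss:.3f}")
```

Minimising such a loss with respect to the encoder parameters is one common way to learn cross-modal correlations after the unimodal representations exist; joint fine-tuning of all parts would then follow, as the abstract describes.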
Effective start/end date: 01/01/2022 – 31/12/2024

