We focus on the audio-visual video parsing (AVVP) problem that involves detecting audio and visual event labels with temporal boundaries. The task is especially challenging since it is weakly supervised with only event labels available as a bag of labels for each video. An existing state-of-the-art model for AVVP uses a hybrid attention network (HAN) to generate cross-modal features for both audio and visual modalities, and an attentive pooling module that aggregates predicted audio and visual segment-level event probabilities to yield video-level event probabilities. We provide a detailed analysis of modality bias in the existing HAN architecture, where a modality is completely ignored during prediction. We also propose a variant of featur...
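The attentive pooling step described above can be sketched roughly as follows. This is an illustrative, hedged reconstruction, not the authors' exact implementation: the shapes, the random inputs, and the MMIL-style two-axis softmax (over segments, then over modalities) are assumptions for demonstration; in the real model the attention scores are learned.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative shapes: 2 modalities (audio, visual), T segments, C event classes.
rng = np.random.default_rng(0)
T, C = 10, 25
probs = rng.random((2, T, C))            # segment-level event probabilities per modality
att_logits = rng.normal(size=(2, T, C))  # attention scores (learned in the actual model)

# MMIL-style attentive pooling: attention over segments within each modality,
# attention over modalities within each segment, then a weighted sum of the
# segment-level probabilities to obtain video-level event probabilities.
w_temporal = softmax(att_logits, axis=1)   # normalizes over the T segments
w_modality = softmax(att_logits, axis=0)   # normalizes over the 2 modalities
video_probs = (probs * w_temporal * w_modality).sum(axis=(0, 1))  # shape (C,)

print(video_probs.shape)  # (25,)
```

If one modality's attention weights collapse toward zero across all segments, its segment predictions stop influencing `video_probs`, which is one way the modality bias analyzed above can manifest.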
Deepfakes are synthetic media generated using deep generative algorithms and have posed a severe soc...
TVQA is a large-scale video question answering (video-QA) dataset based on popular TV shows. The qu...
Large-scale sound recognition data sets typically consist of acoustic recordings obtained from multi...
This paper focuses on the weakly-supervised audio-visual video parsing task, which aims to recognize...
We investigate the weakly-supervised audio-visual video parsing task, which aims to parse a video in...
Audio-visual speech recognition (AVSR) has gained remarkable success for ameliorating the noise-robu...
Recognizing and localizing events in videos is a fundamental task for video understanding. Since eve...
Visual objects often have acoustic signatures that are naturally synchronized with them in audio-bea...
Self-supervised pre-training recently demonstrates success on large-scale multimodal data, and state...
In this paper, we propose two techniques, namely joint modeling and data augmentation, to improve sy...
Active speaker detection and speech enhancement have become two increasingly attractive topics in au...
Making each modality in multi-modal data contribute is of vital importance to learning a versatile m...
The heterogeneity gap problem is the main challenge in cross-modal retrieval. Because cross-modal da...
In recent years, there have been numerous developments towards solving multimodal tasks, aiming to l...
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multipl...