Information, Vol. 16, Pages 233: VAD-CLVA: Integrating CLIP with LLaVA for Voice Activity Detection


Information doi: 10.3390/info16030233

Authors:
Andrea Appiani
Cigdem Beyan

Voice activity detection (VAD) is the process of automatically determining whether a person is speaking and identifying the timing of their speech in audiovisual data. Traditionally, this task has been tackled by processing either audio signals or visual data, or by combining both modalities through fusion or joint learning. In our study, drawing inspiration from recent advancements in vision-language models, we introduce a novel approach leveraging Contrastive Language-Image Pretraining (CLIP) models. The CLIP visual encoder analyzes video segments focusing on the upper body of an individual, while the text encoder processes textual descriptions generated by a generative large multimodal model, namely the Large Language and Vision Assistant (LLaVA). The embeddings from these encoders are then fused through a deep neural network to perform VAD. Our experimental analysis across three VAD benchmarks demonstrates the superior performance of our method compared to existing visual VAD approaches. Notably, our approach outperforms several audio-visual methods despite its simplicity and without requiring pretraining on extensive audio-visual datasets.
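The abstract describes a pipeline in which CLIP encodes both an upper-body video crop and an LLaVA-generated textual description, and the two embeddings are fused by a neural network for the speaking/not-speaking decision. The sketch below is only an illustration of that general idea, not the authors' implementation: it assumes the LLaVA description has already been produced offline (here a placeholder string), uses the public openai/clip-vit-base-patch32 checkpoint from Hugging Face, and stands in a hypothetical image path and a simple MLP fusion head whose sizes are arbitrary.

```python
# Illustrative sketch (not the paper's code): fuse CLIP image and text
# embeddings with a small MLP head to classify speaking vs. not speaking.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


class FusionVAD(nn.Module):
    """Concatenate CLIP image/text embeddings and predict speaking / not speaking."""

    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # two classes: not speaking / speaking
        )

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([img_emb, txt_emb], dim=-1))


# Upper-body crop of one frame and its description, assumed to come from LLaVA.
frame = Image.open("upper_body_crop.jpg")  # hypothetical path
description = "A person with their mouth open, appearing to speak."  # placeholder

inputs = processor(text=[description], images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    img_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = clip.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )

vad_head = FusionVAD()
logits = vad_head(img_emb, txt_emb)          # shape: (1, 2)
is_speaking = logits.argmax(dim=-1).item()   # 0 = not speaking, 1 = speaking
```

In practice the fusion head would be trained on labeled VAD segments; the frozen CLIP encoders only supply the embeddings.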
