Evaluation of Deep Audio Representations for Hearables
ICASSP'25

Effectively steering hearable devices requires understanding the acoustic environment around the user. In the computational analysis of sound scenes, foundation models have emerged as the state of the art to produce high-performance, robust, multi-purpose audio representations. We introduce and release Deep Evaluation of Audio Representations (DEAR), the first dataset and benchmark to evaluate the efficacy of foundation models in capturing essential acoustic properties for hearables. The dataset includes 1,158 audio tracks, each 30 seconds long, created by spatially mixing proprietary monologues and dialogues with commercial, high-quality recordings of everyday acoustic scenes. Our benchmark encompasses eight tasks that assess the general context, speech sources, and technical acoustic properties of the audio scenes. Through our evaluation of four general-purpose audio representation models, we demonstrate that the BEATs model significantly surpasses its counterparts. This superiority underscores the advantage of models trained on diverse audio collections, confirming their applicability to a wide array of auditory tasks, including encoding the environment properties necessary for hearable steering.
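To give a rough idea of how frozen audio representations can be scored on benchmark tasks of this kind, the Python sketch below trains a linear probe on fixed-size embeddings and reports balanced accuracy. It is only an illustration under stated assumptions: the toy embedding function, the 16 kHz sample rate, the synthetic data, and the binary task are placeholders, not the DEAR loading API, the BEATs encoder, or the paper's exact evaluation pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

SAMPLE_RATE = 16_000  # assumed sample rate for the 30-second tracks


def toy_embedding(waveform: np.ndarray) -> np.ndarray:
    """Stand-in for a foundation-model encoder (e.g., BEATs).

    Returns a few simple spectral statistics; in a real evaluation this would be
    the pooled output of a pretrained audio representation model.
    """
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / SAMPLE_RATE)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([waveform.mean(), waveform.std(), centroid, spectrum.max()])


def linear_probe(train_wavs, y_train, test_wavs, y_test, embed_fn=toy_embedding):
    """Train a linear classifier on frozen embeddings and score the test split."""
    X_train = np.stack([embed_fn(w) for w in train_wavs])
    X_test = np.stack([embed_fn(w) for w in test_wavs])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return balanced_accuracy_score(y_test, clf.predict(X_test))


if __name__ == "__main__":
    # Synthetic placeholder data: random 30-second "tracks" with binary labels
    # (e.g., speech present / absent). Replace with the released DEAR tracks
    # and task annotations once available.
    rng = np.random.default_rng(0)
    n_train, n_test, n_samples = 32, 8, 30 * SAMPLE_RATE
    train_wavs = [rng.standard_normal(n_samples, dtype=np.float32) for _ in range(n_train)]
    test_wavs = [rng.standard_normal(n_samples, dtype=np.float32) for _ in range(n_test)]
    y_train = rng.integers(0, 2, n_train)
    y_test = rng.integers(0, 2, n_test)
    print(f"balanced accuracy: {linear_probe(train_wavs, y_train, test_wavs, y_test):.3f}")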
@inproceedings{groger_evaluation_2025,
  title     = {{Evaluation of Deep Audio Representations for Hearables}},
  author    = {Gr\"oger, Fabian and Baumann, Pascal and Amruthalingam, Ludovic and Simon, Laurent and Giurda, Ruksana and Lionetti, Simone},
  year      = {2025},
  month     = apr,
  booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
}