Title
Combining visual and acoustic features for audio classification tasks
Abstract
In this paper, a novel and effective approach to automated audio classification is presented, based on the fusion of different sets of features, both visual and acoustic. A number of acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of a sound are built from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows, from which a set of texture descriptors is extracted. For each feature descriptor, a different Support Vector Machine (SVM) is trained. The SVM outputs are summed for the final decision. The proposed ensemble is evaluated on three well-known music genre classification databases (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of bird vocalizations for species recognition, and a dataset of right whale calls for whale detection. The MATLAB code for the ensemble of classifiers and for feature extraction will be made publicly available.
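To make the fusion scheme concrete, the sketch below outlines the visual-feature branch of the pipeline in Python as a hypothetical stand-in for the authors' MATLAB code: a spectrogram is rendered as an image, split into subwindows, a texture descriptor is extracted from each subwindow, one SVM is trained per descriptor type, and the per-class SVM decision scores are summed for the final label. The library choices (librosa, scikit-image, scikit-learn), the use of local binary patterns as the texture descriptor, and all parameter values are illustrative assumptions, not the exact features or settings used in the paper.

```python
# Hedged sketch of the visual-feature branch of the ensemble; NOT the authors' code.
# Assumptions: librosa for the spectrogram, LBP histograms as the texture descriptor,
# one linear SVM per feature set, and a plain sum of decision scores for fusion.
import numpy as np
import librosa
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def spectrogram_image(path, sr=22050):
    """Load an audio file and return its log-magnitude spectrogram as a 2-D image."""
    y, sr = librosa.load(path, sr=sr)
    S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
    return librosa.amplitude_to_db(S, ref=np.max)

def texture_descriptor(img, n_windows=4, P=8, R=1):
    """Split the image into vertical subwindows and concatenate one LBP histogram per window."""
    hists = []
    for win in np.array_split(img, n_windows, axis=1):
        lbp = local_binary_pattern(win, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

def train_ensemble(feature_sets, labels):
    """Train one SVM per feature set (e.g., spectrogram, gammatonegram, rhythm image)."""
    return [SVC(kernel="linear", decision_function_shape="ovr").fit(X, labels)
            for X in feature_sets]

def predict_by_sum_rule(models, feature_sets):
    """Fuse the ensemble by summing per-class SVM decision scores (assumes a multiclass task)."""
    scores = sum(m.decision_function(X) for m, X in zip(models, feature_sets))
    return models[0].classes_[np.argmax(scores, axis=1)]
```

In the paper, several such visual descriptors and additional acoustic features each feed their own SVM, and the same sum rule combines all of their outputs.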
Department(s)
Information Technology and Cybersecurity
Document Type
Article
DOI
https://doi.org/10.1016/j.patrec.2017.01.013
Keywords
audio classification, texture, image processing, acoustic features, ensemble of classifiers, pattern recognition
Publication Date
2017
Recommended Citation
Nanni, Loris, Yandre M. G. Costa, Diego Rafael Lucio, Carlos Nascimento Silla Jr., and Sheryl Brahnam. "Combining visual and acoustic features for audio classification tasks." Pattern Recognition Letters 88 (2017): 49-56.
Journal Title
Pattern Recognition Letters