|Title||A Graphical Representation and Dissimilarity Measure for Basic Everyday Sound Events |
|Publication Type||Journal Article |
|Year of Publication||2012 |
|Authors||Adiloglu, K., Anniés, R., Wahlen, E., Purwins, H., & Obermayer, K. |
|Journal Title||IEEE Transactions on Audio, Speech, and Language Processing |
|Abstract||Gaver's studies revealed that humans categorize everyday sounds according to the processes that generated them: he defined these categories in a taxonomy based on the aggregate states of the materials involved (solid, liquid, gas) and the physical nature of the sound-generating interaction, such as deformation or friction for solids. We exemplified this taxonomy in an everyday-sound database that contains recordings of basic, isolated sound events from these categories.
We used a sparse method to represent and visualize these sound events. This representation relies on a sparse decomposition of sounds into atomic filter functions in the time-frequency domain. The filter functions maximally correlated with a given sound are selected automatically to perform the decomposition. The resulting sparse point pattern depicts the skeleton of the given sound.
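As a rough illustration of this kind of greedy, correlation-driven decomposition, the sketch below implements a minimal matching-pursuit loop over a toy dictionary of atoms. The dictionary, signal, and atom count are invented for the example; the paper's actual filter functions and selection procedure are more elaborate.

```python
import numpy as np

def matching_pursuit(signal, atoms, n_atoms):
    """Greedy sparse decomposition: repeatedly select the unit-norm
    dictionary atom most correlated with the residual (toy 1-D sketch)."""
    residual = signal.astype(float).copy()
    selection = []  # (atom index, coefficient) pairs
    for _ in range(n_atoms):
        corr = atoms @ residual              # correlation with each atom
        k = int(np.argmax(np.abs(corr)))     # best-matching atom
        selection.append((k, corr[k]))
        residual -= corr[k] * atoms[k]       # subtract its contribution
    return selection, residual

# Toy dictionary: 16 orthonormal atoms of length 64 (hypothetical data)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((64, 16)))
atoms = Q.T

# A signal built from two atoms; the loop recovers its sparse skeleton
signal = 2.0 * atoms[3] + 1.0 * atoms[7]
selection, residual = matching_pursuit(signal, atoms, n_atoms=2)
print(sorted(k for k, _ in selection))  # -> [3, 7]
```

With orthonormal atoms the two contributing atoms are recovered exactly and the residual vanishes; for the overcomplete time-frequency dictionaries used in practice, the selected atoms form the sparse point pattern described above.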
The visualization of these point patterns revealed that acoustically similar sounds have similar point patterns. To detect these similarities, we defined a novel dissimilarity function by treating the point patterns as 3D point graphs and applying a graph matching algorithm that assigns the points of one sound to those of the other. Combined with a kernel machine, this dissimilarity measure yielded an average accuracy of 95% in one-vs-one discrimination tasks.
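A simplified way to picture an assignment-based dissimilarity between two such point patterns is a minimum-cost matching of points across patterns, as in the sketch below. The (time, frequency, amplitude) point format and the use of a plain Hungarian assignment are assumptions for illustration; the paper's graph matching algorithm and cost function differ in detail.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_pattern_dissimilarity(p, q):
    """Dissimilarity between two sparse point patterns, each an (n, 3)
    array of (time, frequency, amplitude) points. A minimum-cost
    assignment pairs the points of one pattern with those of the other;
    the summed pairwise distance serves as the dissimilarity."""
    # pairwise Euclidean distances between all cross-pattern points
    cost = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal point assignment
    return cost[rows, cols].sum()

# Toy patterns (hypothetical values): identical point sets in a
# different order should have zero dissimilarity
a = np.array([[0.0, 100.0, 1.0], [0.5, 200.0, 0.8]])
b = a[::-1].copy()
print(point_pattern_dissimilarity(a, b))  # -> 0.0
```

Because such a dissimilarity is generally not a metric, plugging it into a kernel machine (e.g. via an exponential of the negative dissimilarity) requires the kind of care the paper addresses.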