Repository: sharathadavanne/seld-dcase2019 (https://github.com/sharathadavanne/seld-dcase2019), language: Python 100.0%

DCASE 2019: Sound event localization and detection (SELD) task
Sound event localization and detection (SELD) is the combined task of identifying the temporal onset and offset of a sound event, tracking its spatial location while active, and further associating a textual label describing the sound event. As part of DCASE 2019, we are organizing an SELD task with a multi-room reverberant dataset synthesized using real-life impulse responses (IRs) collected in five different environments. This GitHub page shares the benchmark method, SELDnet, and the dataset for the task. The paper describing the SELDnet can be found on IEEE Xplore and on arXiv. The dataset, baseline method, and benchmark scores have been described in the task paper available here. If you are interested in reading the general literature on SELD, you can refer here. If you are using this code or the datasets in any format, then please consider citing the SELDnet paper and the task description paper mentioned above.
More about SELDnet

The SELDnet architecture is as shown below. The input is the multichannel audio, from which the phase and magnitude components are extracted and used as separate features. The proposed method takes a sequence of consecutive spectrogram frames as input and predicts all the sound event classes active in each input frame, along with their respective spatial locations, producing the temporal activity and DOA trajectory for each sound event class. In particular, a convolutional recurrent neural network (CRNN) is used to map the frame sequence to the two outputs in parallel. At the first output, sound event detection (SED) is performed as a multi-label multi-class classification task, allowing the network to simultaneously estimate the presence of multiple sound events for each frame. At the second output, direction of arrival (DOA) estimates in the continuous 3D space are obtained as a multi-output regression task, where each sound event class is associated with two regressors that estimate the spherical coordinates azimuth (azi) and elevation (ele) of the DOA on a unit sphere around the microphone.

In the benchmark method, the variables in the image below have the following values: T = 128, M = 2048, C = 4, P = 64, MP1 = MP2 = 8, MP3 = 4, Q = R = 128, N = 11.

The SED output of the network is in the continuous range of [0, 1] for each sound event in the dataset, and this value is thresholded to obtain a binary decision for the respective sound event activity, as shown in the figure below. Finally, the respective DOA estimates for these active sound event classes provide their spatial locations.

The figure below visualizes the SELDnet input and outputs for one of the recordings in the dataset. The horizontal axis of all sub-plots for a given dataset represents the same time frames; the vertical axis of the spectrogram sub-plot represents the frequency bins, the vertical axis of the SED reference and prediction sub-plots represents the unique sound event class identifier, and for the DOA reference and prediction sub-plots it represents the azimuth and elevation angles in degrees. The figure represents each sound event class and its associated DOA outputs with a unique color. A similar plot can be visualized for your own results using the provided script.
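To make the architecture description concrete, below is a minimal Keras sketch of a CRNN with the hyperparameter values listed above. It is an illustration only, not the repository's keras_model.py: the bidirectional GRUs, the tanh DOA activation, and the loss weighting are assumptions made for this sketch and may differ from the baseline implementation.

# A minimal, illustrative Keras sketch of a CRNN with the hyperparameter values
# listed above (T=128, M=2048, C=4, P=64, MP1=MP2=8, MP3=4, Q=R=128, N=11).
# This is NOT the repository's keras_model.py; the bidirectional GRUs, the tanh
# DOA activation, and the loss weighting are illustrative assumptions.
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Reshape, Bidirectional, GRU,
                          TimeDistributed, Dense)

T, M, C, P, N = 128, 2048, 4, 64, 11

# Magnitude and phase spectrograms of the C channels stacked as 2*C feature maps.
spec_in = Input(shape=(T, M, 2 * C))

x = spec_in
for pool in (8, 8, 4):                           # MP1, MP2, MP3: pool along frequency only
    x = Conv2D(P, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D(pool_size=(1, pool))(x)

# Collapse the remaining frequency bins into the feature dimension: (T, 8 * P).
x = Reshape((T, (M // (8 * 8 * 4)) * P))(x)

for units in (128, 128):                         # Q and R recurrent units
    x = Bidirectional(GRU(units, return_sequences=True))(x)

# First output: SED as frame-wise multi-label classification (one sigmoid per class).
sed = TimeDistributed(Dense(N, activation='sigmoid'), name='sed')(x)
# Second output: DOA as frame-wise regression of (azimuth, elevation) per class.
doa = TimeDistributed(Dense(2 * N, activation='tanh'), name='doa')(x)

model = Model(inputs=spec_in, outputs=[sed, doa])
model.compile(optimizer='adam',
              loss={'sed': 'binary_crossentropy', 'doa': 'mse'},
              loss_weights={'sed': 1.0, 'doa': 50.0})   # weighting is illustrative
model.summary()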
DATASETS

The participants can choose either one or both of the following datasets: TAU Spatial Sound Events 2019 - Ambisonic and TAU Spatial Sound Events 2019 - Microphone Array.

These datasets contain recordings from an identical scene, with TAU Spatial Sound Events 2019 - Ambisonic providing four-channel First-Order Ambisonic (FOA) recordings while TAU Spatial Sound Events 2019 - Microphone Array provides four-channel directional microphone recordings from a tetrahedral array configuration. Both formats are extracted from the same microphone array, and additional information on the spatial characteristics of each format can be found below. The participants can choose one of the two, or both datasets, based on the audio format they prefer. Both datasets consist of a development and an evaluation set. The development set consists of 400 one-minute recordings sampled at 48000 Hz, divided into four cross-validation splits of 100 recordings each. The evaluation set consists of 100 one-minute recordings. These recordings were synthesized using spatial room impulse responses (IRs) collected at five indoor locations, at 504 unique combinations of azimuth, elevation, and distance. Furthermore, in order to synthesize the recordings, the collected IRs were convolved with isolated sound events from the DCASE 2016 task 2 dataset. Finally, to create realistic sound scene recordings, natural ambient noise collected at the IR recording locations was added to the synthesized recordings such that the average SNR of the sound events was 30 dB. The dataset uses eleven sound event classes, each with a corresponding index value required for the submission format.
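As a quick sanity check of the format described above, a downloaded recording should contain four channels, be sampled at 48000 Hz, and last one minute. A minimal sketch, assuming the soundfile package is installed and using a placeholder file name:

# Minimal sanity check of one dataset recording: 4 channels, 48 kHz, ~60 s.
# 'example_recording.wav' is a placeholder, not an actual file name from the dataset.
import soundfile as sf

audio, sr = sf.read('example_recording.wav')   # audio shape: (samples, channels)
assert sr == 48000, 'recordings are sampled at 48 kHz'
assert audio.shape[1] == 4, 'both formats (FOA and mic array) have four channels'
print('duration: %.1f s' % (audio.shape[0] / float(sr)))   # should be about 60 s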
More details on the recording procedure and the dataset can be read on the DCASE 2019 task webpage or in the task description paper. The two development datasets can be downloaded from the link - TAU Spatial Sound Events 2019 - Ambisonic and Microphone Array, Development dataset (Version 2).
The evaluation datasets can be downloaded from the link - TAU Spatial Sound Events 2019 - Ambisonic and Microphone Array, Evaluation dataset.
Getting Started

This repository consists of multiple Python scripts that together form the architecture used to train the SELDnet. Additionally, we provide supporting scripts that help analyse the dataset and the results.
Prerequisites

The provided codebase has been tested with Python 2.7.10/3.5.3 and Keras 2.2.2/2.2.4.

Training the SELDnet

In order to quickly train SELDnet, follow the steps below.
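Roughly, the steps are: download one of the datasets above, extract spectral features from the audio, and then run the training script. Purely as an illustration of the feature-extraction step (this is not the repository's own feature-extraction code; the FFT size, hop length, and file name below are assumptions):

# Rough sketch of extracting magnitude and phase spectrograms from a four-channel
# recording. This is NOT the repository's feature-extraction script; the FFT size,
# hop length, and file name are illustrative assumptions.
import numpy as np
import librosa

def extract_mag_phase(wav_path, n_fft=2048, hop_length=960):
    # Load all channels at the native 48 kHz sampling rate; shape: (channels, samples).
    audio, sr = librosa.load(wav_path, sr=None, mono=False)
    feats = []
    for channel in audio:
        stft = librosa.stft(channel, n_fft=n_fft, hop_length=hop_length)
        feats.append(np.abs(stft))      # magnitude spectrogram
        feats.append(np.angle(stft))    # phase spectrogram
    # Stack and reorder to (frames, frequency bins, 2 * channels) so that time
    # frames form the sequence axis consumed by the recurrent layers.
    return np.transpose(np.stack(feats, axis=0), (2, 1, 0))

# features = extract_mag_phase('some_recording.wav')   # placeholder file name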
You can now train the SELDnet using the default parameters by running the training script (seld.py) with a <job-id> argument, where <job-id> is a unique identifier used for the output file names (models, training plots); you can use any number or string for this. Baseline results on the development set are obtained by running the same script configured for the microphone array recordings and, similarly, configured for the Ambisonic format.
Results on development dataset
Note: The reported baseline system performance is not exactly reproducible due to varying setups. However, you should be able to obtain very similar results.

DOA estimation: regression vs classification

The DOA estimation can be approached as either a regression or a classification task. In the baseline, it is handled as a regression task. In case you plan to use a classification approach, check the submission file format.
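For intuition on what a classification formulation could look like, the sketch below quantizes continuous azimuth/elevation targets onto a uniform grid of directions and maps them to class indices. The 10-degree resolution and the angle ranges are illustrative assumptions, not values prescribed by the task.

# Illustrative conversion of continuous DOA targets (azimuth, elevation in degrees)
# into class indices on a uniform grid, as one possible classification formulation.
# The 10-degree step and the angle ranges are assumptions, not task requirements.
import numpy as np

AZI_RANGE = (-180, 180)   # assumed azimuth range in degrees
ELE_RANGE = (-40, 40)     # assumed elevation range in degrees
STEP = 10                 # assumed grid resolution in degrees

azi_bins = np.arange(AZI_RANGE[0], AZI_RANGE[1], STEP)
ele_bins = np.arange(ELE_RANGE[0], ELE_RANGE[1] + STEP, STEP)

def doa_to_class(azi, ele):
    """Map a continuous (azi, ele) pair to the index of the nearest grid direction."""
    ai = int(np.argmin(np.abs(azi_bins - azi)))
    ei = int(np.argmin(np.abs(ele_bins - ele)))
    return ei * len(azi_bins) + ai

def class_to_doa(idx):
    """Inverse mapping: grid class index back to its (azi, ele) grid direction."""
    ei, ai = divmod(idx, len(azi_bins))
    return azi_bins[ai], ele_bins[ei]

# Example: a regression target of (32.0, -13.5) maps to the nearest grid point.
idx = doa_to_class(32.0, -13.5)
print(idx, class_to_doa(idx))   # -> 129 (30, -10)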
For more information on the submission file formats, check the website.

License

Except for the contents in the

Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union's H2020 Framework Programme through ERC Grant Agreement 637422 EVERYSOUND.