OpenSource Name: sharathadavanne/seld-dcase2021
OpenSource URL: https://github.com/sharathadavanne/seld-dcase2021
OpenSource Language: Python 100.0%

DCASE 2021: Sound Event Localization and Detection with Directional Interference

Please visit the official webpage of the DCASE 2021 Challenge for details missing in this repo.

As the baseline method for the SELD task, we use the SELDnet method studied in the following papers, with the Activity-Coupled Cartesian Direction of Arrival (ACCDOA) representation as the output format. If you are using this baseline method or the datasets in any format, then please consider citing the following two papers. If you want to read more about generic approaches to SELD, then check here.

NOTE: The baseline only supports detection of one instance of a sound class in a given time frame. However, the training data can contain multiple instances of the same sound class in a given time frame. Participants planning to build an SELD system that can detect such multiple instances of the same class will have to modify this code accordingly. On the other hand, the provided metric code supports scoring multiple simultaneous instances of the same class (see the metrics description below).

BASELINE METHOD

In comparison to the SELDnet studied in [1], we have changed the output format to ACCDOA [2] to improve its performance.
The final SELDnet architecture is shown below. The input is the multichannel audio, from which different acoustic features are extracted based on the input format of the audio. Based on the chosen dataset (FOA or MIC), the baseline method takes a sequence of consecutive feature frames and predicts all the active sound event classes for each input frame, along with their respective spatial locations, producing the temporal activity and DOA trajectory for each sound event class. In particular, a convolutional recurrent neural network (CRNN) is used to map the frame sequence to a single ACCDOA sequence output, which encodes both the sound event detection (SED) and direction of arrival (DOA) estimates in continuous 3D space as a multi-output regression task. Each sound event class in the ACCDOA output is represented by three regressors that estimate the Cartesian coordinates x, y and z of the DOA around the microphone. If the length of the vector formed by the x, y and z coordinates is greater than 0.5, the sound event is considered active, and the corresponding x, y and z values are taken as its predicted DOA.

The figure below visualizes the SELDnet input and outputs for one of the recordings in the dataset. The horizontal axis of all sub-plots for a given dataset represents the same time frames; the vertical axis of the spectrogram sub-plot represents the frequency bins, the vertical axis of the SED reference and prediction sub-plots represents the unique sound event class identifier, and for the DOA reference and prediction sub-plots it represents the distance along the respective Cartesian axis. The figure represents each sound event class and its associated DOA outputs with a unique color. A similar plot can be generated for your own results using the provided script.
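As a minimal illustration of this decoding rule (a sketch, not the baseline's actual code; the function name and array layout are assumptions), the ACCDOA output of one frame can be converted into activity and DOA estimates as follows:

```python
import numpy as np

def decode_accdoa(accdoa, threshold=0.5):
    """Decode one ACCDOA frame of shape (num_classes, 3) into a boolean
    activity mask and unit DOA vectors, as described above."""
    accdoa = np.asarray(accdoa, dtype=float)
    # A class is active when the length of its (x, y, z) vector
    # exceeds the threshold (0.5 in the baseline).
    norms = np.linalg.norm(accdoa, axis=-1)
    active = norms > threshold
    # The DOA of an active class is the direction of its vector.
    doas = np.zeros_like(accdoa)
    doas[active] = accdoa[active] / norms[active][:, None]
    return active, doas

# Example: only the first class exceeds the 0.5 activity threshold.
activity, doa = decode_accdoa([[0.6, 0.0, 0.6], [0.1, 0.1, 0.0]])
```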
DATASETS

The participants can choose either or both of the following datasets, based on the audio format they prefer: TAU-NIGENS Spatial Sound Events 2021 - Ambisonic and TAU-NIGENS Spatial Sound Events 2021 - Microphone Array. These datasets contain recordings of an identical scene, with the Ambisonic dataset providing four-channel First-Order Ambisonic (FOA) recordings and the Microphone Array dataset providing four-channel directional microphone recordings from a tetrahedral array configuration. Both formats are extracted from the same microphone array, and additional information on the spatial characteristics of each format can be found below. Both datasets consist of a development and an evaluation set. The development set consists of 600 one-minute recordings sampled at 24000 Hz. All participants are expected to use the fixed splits provided in the baseline method for reporting the development scores: 400 recordings for training (folds 1 to 4), 100 for validation (fold 5) and 100 for testing (fold 6). The evaluation set consists of 200 one-minute recordings and will be released at a later point. More details on the recording procedure and dataset can be found on the DCASE 2021 task webpage. The two development datasets can be downloaded from the links TAU-NIGENS Spatial Sound Events 2021 - Ambisonic and Microphone Array.
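The fixed split above can also be applied programmatically. A minimal sketch, assuming the dataset's recordings are named with a fold prefix such as fold1_room1_mix001.wav (an assumption to verify against the downloaded data):

```python
from pathlib import Path

# Folds per the fixed split above: 1-4 train, 5 validation, 6 test.
TRAIN_FOLDS, VAL_FOLDS, TEST_FOLDS = {1, 2, 3, 4}, {5}, {6}

def split_recordings(dataset_dir):
    """Group recordings into train/val/test by their fold prefix.
    The 'fold<N>_' naming convention is an assumption."""
    splits = {'train': [], 'val': [], 'test': []}
    for wav in sorted(Path(dataset_dir).glob('fold*.wav')):
        fold = int(wav.name.split('_')[0].replace('fold', ''))
        key = ('train' if fold in TRAIN_FOLDS
               else 'val' if fold in VAL_FOLDS else 'test')
        splits[key].append(wav)
    return splits
```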
Getting Started

This repository consists of multiple Python scripts that together form the architecture used to train SELDnet. We also provide supporting scripts that help analyse the results.
Prerequisites

The provided codebase has been tested on Python 3.6.9/3.7.3 and Keras 2.2.4/2.3.1.

Training the SELDnet

In order to quickly train SELDnet, follow the steps below.
You can now train the SELDnet with default parameters using the command below.
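The command block itself did not survive extraction; based on the main script shipped in the repository, the invocation is presumably of the following form (treat it as an assumption and check the repository README for the exact usage):

```
python3 seld.py <task-id> <job-id>
```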
Where <job-id> is a unique identifier used in the output filenames (models, training plots); you can use any number or string for this. In order to get baseline results on the development set for the microphone array recordings, you can run the following command:
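The exact command is missing from this copy. The task identifiers are defined in the repository's parameter.py, so the invocation presumably looks like the following, with <task-id> being whichever identifier selects the MIC dataset and the job name chosen freely (both are assumptions to verify against parameter.py):

```
python3 seld.py <task-id> mic_dev_baseline
```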
Similarly, for Ambisonic-format baseline results, run the following command:
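Again, the original command block is missing; presumably it is the same script with the FOA task identifier from parameter.py (an assumption):

```
python3 seld.py <task-id> foa_dev_baseline
```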
Results on development dataset

As the SELD evaluation metrics, we employ the joint localization and detection metrics proposed in [1], with extensions from [2] to support multi-instance scoring of the same class. In total, we employ four metrics in this challenge. The first two metrics are more focused on the detection part, also referred to as location-aware detection: the error rate (ER20°) and F-score (F20°) in one-second non-overlapping segments. We consider a prediction to be correct if the predicted and reference class are the same and the angular distance between them is below 20°. The next two metrics are more focused on the localization part, also referred to as class-aware localization: the localization error (LECD) in degrees and the localization recall (LRCD) in one-second non-overlapping segments, where the subscript CD refers to classification-dependent. Unlike location-aware detection, these use no distance threshold, but instead measure the angular distance between each correct class prediction and its reference.
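For intuition on the 20° criterion above, the angular distance between a predicted and a reference DOA can be computed from their unit vectors. A minimal sketch (not the official metric code):

```python
import numpy as np

def angular_distance_deg(doa_pred, doa_ref):
    """Angle in degrees between two Cartesian DOA vectors."""
    u = np.asarray(doa_pred, float) / np.linalg.norm(doa_pred)
    v = np.asarray(doa_ref, float) / np.linalg.norm(doa_ref)
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# For the location-aware detection metrics, a prediction counts as correct
# only if the class matches and the angular distance is below 20 degrees.
is_correct = angular_distance_deg([1, 0, 0], [0.94, 0.0, 0.34]) < 20.0
```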
The evaluation metric scores for the test split of the development dataset are given below.

Note: The reported baseline system performance is not exactly reproducible due to varying setups. However, you should be able to obtain very similar results.

Submission
For more information on the submission file formats, check the website.

License

Except for the contents in the metrics folder, which have the MIT License, the rest of the repository is licensed under the TAU License.