---
license: apache-2.0
---
# Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection
We propose WhisperSeg, which utilizes the Whisper Transformer, pre-trained for Automatic Speech Recognition (ASR), for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper:

>
> [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**](https://doi.org/10.1101/2023.09.30.560270)
> 
> Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
> University of Zurich and ETH Zurich

This multi-species dataset was curated for human and animal Voice Activity Detection (vocal segmentation) and was used to train the multi-species WhisperSeg-large segmenter.

## Download Dataset
```python
from huggingface_hub import snapshot_download
snapshot_download("nccratliri/vad-multi-species", local_dir="data/multi-species", repo_type="dataset")
```
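After downloading, you can verify the snapshot locally. The sketch below is a minimal example that walks the download directory and loads one audio file with `soundfile`; the exact folder layout and annotation format are assumptions here, so please refer to the GitHub repository for the authoritative data-loading code.

```python
# Minimal sketch for inspecting the downloaded dataset.
# Assumption: the snapshot contains WAV audio files somewhere under
# data/multi-species; the actual layout and annotation format are
# documented in the WhisperSeg GitHub repository.
from pathlib import Path
import soundfile as sf  # pip install soundfile

data_root = Path("data/multi-species")

# Collect all WAV files in the snapshot.
wav_files = sorted(data_root.rglob("*.wav"))
print(f"Found {len(wav_files)} WAV files")

# Load the first file to check its duration and sample rate.
if wav_files:
    audio, sr = sf.read(wav_files[0])
    print(wav_files[0].name, f"{len(audio) / sr:.2f} s @ {sr} Hz")
```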

For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg

## Citation
When using this dataset for your work, please cite:
```
@article {Gu2023.09.30.560270,
	author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser},
	title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
	elocation-id = {2023.09.30.560270},
	year = {2023},
	doi = {10.1101/2023.09.30.560270},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {This paper introduces WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for human and animal Voice Activity Detection (VAD). Contrary to traditional methods that detect human voice or animal vocalizations from a short audio frame and rely on careful threshold selection, WhisperSeg processes entire spectrograms of long audio and generates plain text representations of onset, offset, and type of voice activity. Processing a longer audio context with a larger network greatly improves detection accuracy from few labeled examples. We further demonstrate a positive transfer of detection performance to new animal species, making our approach viable in the data-scarce multi-species setting.Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270},
	eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf},
	journal = {bioRxiv}
}
```

## Contact
[email protected]