
YOLO-Stutter: End-to-end Region-Wise Speech Dysfluency Detection

Workflow

(Figure: overall YOLO-Stutter workflow)

Datasets

We open-sourced our two simulated datasets, VCTK-TTS and VCTK-Stutter. The download links are as follows:

Dataset        URL
VCTK-TTS       link
VCTK-Stutter   link
${DATASET}
├── disfluent_audio/  # simulated audio (.wav)
├── disfluent_labels/ # simulated labels (.json)
└── gt_text/          # ground-truth text (.txt)
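
The label files are JSON. Below is a minimal loading sketch; the file stem and the exact JSON fields are assumptions, so check the downloaded files for the real schema:

import json
from pathlib import Path

import soundfile as sf  # any audio reader works here

dataset = Path("VCTK-TTS")  # or "VCTK-Stutter"
stem = "p225_001"           # hypothetical file stem

# Each sample is an (audio, label, text) triple across the three folders.
audio, sr = sf.read(dataset / "disfluent_audio" / f"{stem}.wav")
labels = json.loads((dataset / "disfluent_labels" / f"{stem}.json").read_text())
gt_text = (dataset / "gt_text" / f"{stem}.txt").read_text().strip()

print(f"{len(audio) / sr:.2f}s of audio at {sr} Hz")
print("labels:", labels)    # expected: dysfluency type plus region boundaries
print("text:", gt_text)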

Environment configuration

Please refer to environment.yml.

If you have Miniconda/Anaconda installed, you can directly use the command: conda env create -f environment.yml
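
Once the environment is created and activated, a quick sanity check that the core dependency is in place (assuming the environment ships PyTorch, as the VITS-based code requires):

# run inside the activated conda environment
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())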

YOLO-Stutter Inference

We open-sourced our inference code and checkpoints. Here are the steps to perform inference:

  1. Clone this repository.

  2. Download the VITS pretrained model; here we use pretrained_ljs.pth.

  3. Download the Yolo-Stutter checkpoints, create a folder named saved_models under yolo-stutter, and put all downloaded models into it.

  4. We also provide testing datasets for quick inference; you can download them here.

  5. Build Monotonic Alignment Search (a quick import check follows this list):

cd yolo-stutter/monotonic_align
python setup.py build_ext --inplace

  6. Run yolo-stutter/etc/inference.ipynb to perform inference step by step.
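
If the build succeeded, the extension can be imported from the yolo-stutter directory. The snippet below assumes the module exposes a VITS-style maximum_path function; treat the exact name and signature as an assumption and check the package source if it differs:

# run from the yolo-stutter directory after the build step
import torch
import monotonic_align  # import fails here if build_ext did not produce the extension

# toy alignment: batch of 1, 4 text tokens vs. 6 spectrogram frames
neg_cent = torch.randn(1, 4, 6)  # pairwise log-likelihood scores
mask = torch.ones(1, 4, 6)       # all positions valid
path = monotonic_align.maximum_path(neg_cent, mask)
print(path.shape)  # expected: torch.Size([1, 4, 6]), a 0/1 monotonic path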

Dysfluency simulation

We use VITS as our TTS model.

  1. Clone this repository.

  2. Download the VITS pretrained models; here we need pretrained_vctk.pth for multi-speaker synthesis.

    1. Create a folder dysfluency_simulation/path/to and put the downloaded model into it.
  3. Build Monotonic Alignment Search:

cd dysfluency_simulation/monotonic_align
python setup.py build_ext --inplace

  4. Generate simulated speech (an illustrative sketch of the editing idea follows the commands):
# Phoneme level
python generate_phn.py

# Word level
python generate_word.py
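
For intuition, phoneme-level simulation boils down to editing a phoneme sequence before TTS so the synthesized audio contains a known dysfluency at a known position, which then serves as the ground-truth region label. The sketch below is illustrative only (the function name, edit types, and <sil> token are assumptions, not the repository's code; see generate_phn.py for the real logic):

import random

def inject_dysfluency(phonemes, kind="repetition"):
    """Return an edited phoneme list plus the (start, end) index region edited."""
    i = random.randrange(len(phonemes))
    if kind == "repetition":      # e.g. "p-p-please": repeat one phoneme
        edited = phonemes[:i] + [phonemes[i]] * 3 + phonemes[i + 1:]
        region = (i, i + 3)
    elif kind == "prolongation":  # stretch a phoneme by duplicating it
        edited = phonemes[:i] + [phonemes[i]] * 2 + phonemes[i + 1:]
        region = (i, i + 2)
    elif kind == "block":         # insert a silence token mid-word
        edited = phonemes[:i] + ["<sil>"] + phonemes[i:]
        region = (i, i + 1)
    else:
        raise ValueError(f"unknown dysfluency kind: {kind}")
    return edited, region

edited, region = inject_dysfluency(["P", "L", "IY", "Z"], "repetition")  # "please"
print(edited, region)  # the region becomes the ground-truth label for detection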

Citation

If you find our paper helpful, please cite it as:

@inproceedings{zhou24e_interspeech,
  title     = {YOLO-Stutter: End-to-end Region-Wise Speech Dysfluency Detection},
  author    = {Xuanru Zhou and Anshul Kashyap and Steve Li and Ayati Sharma and Brittany Morin and David Baquirin and Jet Vonk and Zoe Ezzes and Zachary Miller and Maria Tempini and Jiachen Lian and Gopala Anumanchipalli},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {937--941},
  doi       = {10.21437/Interspeech.2024-1855},
}
