Taming Data and Transformers for Audio Generation

This is the official GitHub repository of the paper Taming Data and Transformers for Audio Generation.

Taming Data and Transformers for Audio Generation
Moayed Haji-Ali, Willi Menapace, Aliaksandr Siarohin, Guha Balakrishnan, Sergey Tulyakov, and Vicente Ordonez
arXiv 2024

Project Page | arXiv

Introduction


Generating ambient sounds is a challenging task due to data scarcity and often insufficient caption quality, making it difficult to employ large-scale generative models for the task. In this work, we tackle this problem by introducing two new models. First, we propose AutoCap, a high-quality and efficient automatic audio captioning model. By using a compact audio representation and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of 83.2, a 3.2% improvement over the best available captioning model, while running inference four times faster. Second, we propose GenAU, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. Using AutoCap to caption clips from existing audio datasets, we demonstrate the benefits of scaling data with synthetic captions as well as scaling model size. Compared to state-of-the-art audio generators trained at similar size and data scale, GenAU obtains significant improvements of 4.7% in FAD score, 22.7% in IS, and 13.5% in CLAP score, indicating substantially improved quality of the generated audio. Moreover, we propose an efficient and scalable pipeline for collecting audio datasets, enabling us to compile 57M ambient audio clips into AutoReCap-XL, the largest available audio-text dataset, at 90 times the scale of existing ones. For more details, please visit our project webpage.

Updates

  • 2024.10.24: Code released!
  • 2024.06.28: Paper released!

TODOs

  • Add GenAU Gradio demo
  • Add AutoCap Gradio demo

Setup

Initialize a conda environment named genau by running:

conda env create -f environment.yaml
conda activate genau
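After activation, a quick sanity check can confirm the environment resolves the expected dependencies. This is a minimal sketch, not part of the repository: the dependency names `torch` and `torchaudio` below are assumptions about what environment.yaml pins, so adjust them to match the file's actual contents.

```python
# Sanity-check the active environment (hedged sketch; module names are
# assumptions about environment.yaml, not taken from the repository).
import importlib.util
import sys


def check(module_name: str) -> bool:
    """Return True if module_name is importable in the current environment."""
    return importlib.util.find_spec(module_name) is not None


if __name__ == "__main__":
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
    for mod in ("torch", "torchaudio"):  # hypothetical dependency names
        print(f"{mod}: {'found' if check(mod) else 'MISSING'}")
```

If any module prints `MISSING`, re-create the environment from environment.yaml rather than installing packages ad hoc, so the solved dependency versions stay consistent.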

Dataset Preparation

See Dataset Preparation for details on downloading and preparing the AutoCap dataset, as well as more information on organizing your custom dataset.

Audio Captioning (AutoCap)

See the AutoCap README for details on inference, training, and evaluating our audio captioner AutoCap.

Audio Generation (GenAU)

See the GenAU README for details on inference, training, finetuning, and evaluating our audio generator GenAU.

Citation

If you find this paper useful in your research, please consider citing our work:

@misc{hajiali2024tamingdatatransformersaudio,
      title={Taming Data and Transformers for Audio Generation}, 
      author={Moayed Haji-Ali and Willi Menapace and Aliaksandr Siarohin and Guha Balakrishnan and Sergey Tulyakov and Vicente Ordonez},
      year={2024},
      eprint={2406.19388},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2406.19388}, 
}
