Commit 583db52

buttercrab, ArthurZucker, and vasqu authored
Add Dia model (#38405)
* add dia model
* add tokenizer files
* cleanup some stuff
* brut copy paste code
* rough cleanup of the modeling code
* nuke some stuff
* more nuking
* more cleanups
* updates
* add mulitLayerEmbedding vectorization
* nits
* more modeling simplifications
* updates
* update rope
* update rope
* just fixup
* update configuration files
* more cleanup!
* default config values
* update
* forgotten comma
* another comma!
* update, more cleanups
* just more nits
* more config cleanups
* time for the encoder
* fix
* sa=mall nit
* nits
* n
* refacto a bit
* cleanup
* update cv scipt
* fix last issues
* fix last nits
* styling
* small fixes
* just run 1 generation
* fixes
* nits
* fix conversion
* fix
* more fixes
* full generate
* ouf!
* fixes!
* updates
* fix
* fix cvrt
* fixup
* nits
* delete wrong test
* update
* update
* test tokenization
* let's start changing things bit by bit - fix encoder step
* removing custom generation, moving to GenerationMixin
* add encoder decoder attention masks for generation
* mask changes, correctness checked against ad29837 in dia repo
* refactor a bit already --> next cache
* too important not to push :)
* minimal cleanup + more todos
* make main overwrite modeling utils
* add cfg filter & eos filter
* add eos countdown & delay pattern
* update eos countdown
* add max step eos countdown
* fix tests
* fix some things
* fix generation with testing
* move cfg & eos stuff to logits processor
* make RepetitionPenaltyLogitsProcessor flexible - can accept 3D scores like (batch_size, channel, vocab)
* fix input_ids concatenation dimension in GenerationMixin for flexibility
* Add DiaHangoverLogitsProcessor and DiaExponentialDecayLengthPenalty classes; refactor logits processing in DiaForConditionalGeneration to utilize new configurations and improve flexibility.
* Add stopping criteria
* refactor
* move delay pattern from processor to modeling like musicgen. - add docs - change eos countdown to eos delay pattern
* fix processor & fix tests
* refactor types
* refactor imports
* format code
* fix docstring to pass ci
* add docstring to DiaConfig & add DiaModel to test
* fix docstring
* add docstring
* fix some bugs
* check
* porting / merging results from other branch - IMPORTANT: it very likely breaks generation, the goal is to have a proper forward path first
* experimental testing of left padding for first channel
* whoops
* Fix merge to make generation work
* fix cfg filter
* add position ids
* add todos, break things
* revert changes to generation --> we will force 2d but go 3d on custom stuff
* refactor a lot, change prepare decoder ids to work with left padding (needs testing), add todos
* some first fixes to get to 10. in generation
* some more generation fixes / adjustment
* style + rope fixes
* move cfg out, simplify a few things, more todos
* nit
* start working on custom logit processors
* nit
* quick fixes
* cfg top k
* more refactor of logits processing, needs a decision if gen config gets the new attributes or if we move it to config or similar
* lets keep changes to core code minimal, only eos scaling is questionable atm
* simpler eos delay logits processor
* that was for debugging :D
* proof of concept rope
* small fix on device mismatch
* cfg fixes + delay logits max len
* transformers rope
* modular dia
* more cleanup
* keep modeling consistently 3D, generate handles 2D internally
* decoder starts with bos if nothing
* post processing prototype
* style
* lol
* force sample / greedy + fixes on padding
* style
* fixup tokenization
* nits
* revert
* start working on dia tests
* fix a lot of tests
* more test fixes
* nit
* more test fixes + some features to simplify code more
* more cleanup
* forgot that one
* autodocs
* small consistency fixes
* fix regression
* small fixes
* dia feature extraction
* docs
* wip processor
* fix processor order
* processing goes brrr
* transpose before
* small fix
* fix major bug but needs now a closer look into the custom processors esp cfg
* small thing on logits
* nits
* simplify indices and shifts
* add simpler version of padding tests back (temporarily)
* add logit processor tests
* starting tests on processor
* fix mask application during generation
* some fixes on the weights conversion
* style + fixup logits order
* simplify conversion
* nit
* remove padding tests
* nits on modeling
* hmm
* fix tests
* trigger
* probably gonna be reverted, just a quick design around audio tokenizer
* fixup typing
* post merge + more typing
* initial design for audio tokenizer
* more design changes
* nit
* more processor tests and style related things
* add to init
* protect import
* not sure why tbh
* add another protect
* more fixes
* wow
* it aint stopping :D
* another missed type issue
* ...
* change design around audio tokenizer to prioritize init and go for auto - in regards to the review
* change to new causal mask function + docstrings
* change ternary
* docs
* remove todo, i dont think its essential tbh
* remove pipeline as current pipelines do not fit in the current scheme, same as csm
* closer to wrapping up the processor
* text to audio, just for demo purposes (will likely be reverted)
* check if it's this
* save audio function
* ensure no grad
* fixes on prefixed audio, hop length is used via preprocess dac, device fixes
* integration tests (tested locally on a100) + some processor utils / fixes
* style
* nits
* another round of smaller things
* docs + some fixes (generate one might be big)
* msytery solved
* small fix on conversion
* add abstract audio tokenizer, change init check to abstract class
* nits
* update docs + fix some processing :D
* change inheritance scheme for audio tokenizer
* delete dead / unnecessary code in copied generate loop
* last nits on new pipeline behavior (+ todo on tests) + style
* trigger

---------

Co-authored-by: Arthur Zucker <[email protected]>
Co-authored-by: Arthur <[email protected]>
Co-authored-by: Vasqu <[email protected]>
1 parent 5995cfa · commit 583db52

34 files changed (+5732, −28 lines)

docs/source/en/_toctree.yml (2 additions, 0 deletions)

```diff
@@ -839,6 +839,8 @@
       title: CSM
     - local: model_doc/dac
       title: dac
+    - local: model_doc/dia
+      title: Dia
     - local: model_doc/encodec
       title: EnCodec
     - local: model_doc/fastspeech2_conformer
```

docs/source/en/model_doc/auto.md (4 additions, 0 deletions)

```diff
@@ -350,6 +350,10 @@ The following auto classes are available for the following audio tasks.
 
 [[autodoc]] AutoModelForTextToWaveform
 
+### AutoModelForAudioTokenization
+
+[[autodoc]] AutoModelForAudioTokenization
+
 ## Multimodal
 
 The following auto classes are available for the following multimodal tasks.
```
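A brief, hedged sketch of how the new auto class above is meant to be used, based on the standard `from_pretrained` pattern shared by all auto classes; the DAC checkpoint name here is an assumption, not something pinned down by this commit:

```python
from transformers import AutoModelForAudioTokenization

# Assumed checkpoint: any codec model registered for audio tokenization
# (e.g. DAC, which Dia uses on its decoder side) resolves through the
# same from_pretrained interface as the other auto classes.
audio_tokenizer = AutoModelForAudioTokenization.from_pretrained("descript/dac_44khz")
```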

docs/source/en/model_doc/dia.md (new file, 162 additions)

<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# Dia

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

## Overview

Dia is an open-source text-to-speech (TTS) model (1.6B parameters) developed by [Nari Labs](https://huggingface.co/nari-labs).
It can generate highly realistic dialogue from a transcript, including nonverbal cues such as laughter and coughing.
Emotion and tone can also be controlled via audio conditioning (voice cloning).

**Model Architecture:**
Dia is an encoder-decoder transformer based on the original transformer architecture, extended with modern features such as
rotary positional embeddings (RoPE). For the text portion (encoder), a byte tokenizer is used, while
for the audio portion (decoder), a pretrained codec model, [DAC](./dac.md), encodes speech into discrete codebook
tokens and decodes them back into audio.
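As a minimal illustration of the byte-level text side (assuming the checkpoint used below ships the `DiaTokenizer` documented later on this page), the number of input ids tracks the UTF-8 byte length of the transcript rather than a subword vocabulary:

```python
from transformers import AutoTokenizer

# One id per UTF-8 byte (plus special tokens), so the sequence length
# grows with the byte length of the text rather than the word count.
tokenizer = AutoTokenizer.from_pretrained("buttercrab/dia-v1-1.6b")
print(tokenizer("[S1] Hi there.")["input_ids"])
```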
## Usage Tips

### Generation with Text

```python
from transformers import AutoProcessor, DiaForConditionalGeneration

torch_device = "cuda"
model_checkpoint = "buttercrab/dia-v1-1.6b"

text = ["[S1] Dia is an open weights text to dialogue model."]
processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(text=text, padding=True, return_tensors="pt").to(torch_device)

model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
outputs = model.generate(**inputs, max_new_tokens=256)  # corresponds to ~2s of audio

# save audio to a file
outputs = processor.batch_decode(outputs)
processor.save_audio(outputs, "example.wav")
```
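Note that `max_new_tokens` is counted in decoder steps, not seconds. A rough duration budget can be derived from the ratio in the comment above (256 tokens ≈ 2s); treat the factor as an approximation, not a guaranteed constant:

```python
# Approximate: 256 new tokens ≈ 2s of audio, i.e. ~128 decoder steps per second.
target_seconds = 5
outputs = model.generate(**inputs, max_new_tokens=target_seconds * 128)
```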
### Generation with Text and Audio (Voice Cloning)

```python
from datasets import load_dataset, Audio
from transformers import AutoProcessor, DiaForConditionalGeneration

torch_device = "cuda"
model_checkpoint = "buttercrab/dia-v1-1.6b"

ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=44100))
audio = ds[-1]["audio"]["array"]
# text is a transcript of the audio + additional text you want as new audio
text = ["[S1] I know. It's going to save me a lot of money, I hope. [S2] I sure hope so for you."]

processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(text=text, audio=audio, padding=True, return_tensors="pt").to(torch_device)
prompt_len = processor.get_audio_prompt_len(inputs["decoder_attention_mask"])

model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
outputs = model.generate(**inputs, max_new_tokens=256)  # corresponds to ~2s of audio

# retrieve the newly generated audio and save it to a file
outputs = processor.batch_decode(outputs, audio_prompt_len=prompt_len)
processor.save_audio(outputs, "example_with_audio.wav")
```
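The generated sequence includes the codebook tokens of the audio prompt itself, which is why the prompt length is read from `decoder_attention_mask` via `get_audio_prompt_len` and later passed to `batch_decode` as `audio_prompt_len`: the processor can then strip the prompt so that only the newly generated audio is written to disk.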
### Training

```python
from datasets import load_dataset, Audio
from transformers import AutoProcessor, DiaForConditionalGeneration

torch_device = "cuda"
model_checkpoint = "buttercrab/dia-v1-1.6b"

ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=44100))
audio = ds[-1]["audio"]["array"]
# text is a transcript of the audio
text = ["[S1] I know. It's going to save me a lot of money, I hope."]

processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(
    text=text,
    audio=audio,
    generation=False,
    output_labels=True,
    padding=True,
    return_tensors="pt",
).to(torch_device)

model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
out = model(**inputs)
out.loss.backward()
```
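The snippet above stops at the backward pass. A complete update step is plain PyTorch; the optimizer and learning rate below are illustrative assumptions, not part of the original example:

```python
import torch

# Illustrative training step; AdamW and the learning rate are assumptions.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

optimizer.zero_grad()
out = model(**inputs)
out.loss.backward()
optimizer.step()
```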
This model was contributed by [Jaeyong Sung](https://huggingface.co/buttercrab), [Arthur Zucker](https://huggingface.co/ArthurZ),
and [Anton Vlasjuk](https://huggingface.co/AntonV). The original code can be found [here](https://github.com/nari-labs/dia/).

## DiaConfig

[[autodoc]] DiaConfig

## DiaDecoderConfig

[[autodoc]] DiaDecoderConfig

## DiaEncoderConfig

[[autodoc]] DiaEncoderConfig

## DiaTokenizer

[[autodoc]] DiaTokenizer
    - __call__

## DiaFeatureExtractor

[[autodoc]] DiaFeatureExtractor
    - __call__

## DiaProcessor

[[autodoc]] DiaProcessor
    - __call__
    - batch_decode
    - decode

## DiaModel

[[autodoc]] DiaModel
    - forward

## DiaForConditionalGeneration

[[autodoc]] DiaForConditionalGeneration
    - forward
    - generate

src/transformers/configuration_utils.py (0 additions, 1 deletion)

```diff
@@ -271,7 +271,6 @@ def __init__(self, **kwargs):
         self.pad_token_id = kwargs.pop("pad_token_id", None)
         self.eos_token_id = kwargs.pop("eos_token_id", None)
         self.sep_token_id = kwargs.pop("sep_token_id", None)
-
         self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None)
 
         # task specific arguments
```
