Seeking help: TypeError: DacModel.decode() missing 1 required positional argument: 'quantized_representation'
Could anyone please help resolve the error below? (Please let me know which dac package version worked for running this code.)
In my case, the following runtime error occurred when I tried to use this model:
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, attention_mask=attention_mask,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\parler_tts\modeling_parler_tts.py", line 3637, in generate
sample = self.audio_encoder.decode(audio_codes=sample[None, ...], **single_audio_decode_kwargs).audio_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: DacModel.decode() missing 1 required positional argument: 'quantized_representation'
(I had already added attention_mask for the description and prompt inputs due to earlier errors, but this DAC error still appears.)
Is there any specific transformers version that worked for you?
Thanks in advance
In the same Windows setup, the base indic-parler-tts model worked (but was too slow for our usage).
I guess it should work in transformers==4.46.0.dev0.
I updated to the latest transformers (so that it uses the Hugging Face DAC implementation instead of the separate dac module), and that resolved the issue.
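For anyone hitting the same TypeError: the mismatch can be illustrated with stand-in functions (hypothetical simplifications, not the real DacModel API). Older decode signatures required quantized_representation positionally, while newer ones make it an optional keyword, so a call passing only audio_codes fails against the former:

```python
# Stand-ins mirroring the two decode() signatures (hypothetical
# simplifications, not the actual DacModel implementation).
def decode_old(quantized_representation, audio_codes=None):
    """Older style: first argument is required positionally."""
    return audio_codes

def decode_new(quantized_representation=None, audio_codes=None):
    """Newer style: both arguments are optional keywords."""
    return audio_codes

# Passing only audio_codes fails against the old signature ...
try:
    decode_old(audio_codes=[1, 2, 3])
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'quantized_representation'

# ... but succeeds against the new one.
print(decode_new(audio_codes=[1, 2, 3]))  # [1, 2, 3]
```

This is why upgrading transformers (so parler_tts calls a decode() that accepts audio_codes as a keyword) makes the error go away.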
Thank you
By the way, is there a known fix to enable "sdpa" attention with this model? When I tried "sdpa", I got an error specific to it, yet some community forums report using "sdpa" attention successfully; it is not clear from those discussions what changes were made.
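In case it helps others experimenting: in recent transformers versions the attention backend is normally selected at load time via the attn_implementation argument. Whether this model's custom attention classes accept it is exactly what I'm unsure about, so treat this as an untested sketch (the checkpoint name and dtype are assumptions):

```python
import torch
from parler_tts import ParlerTTSForConditionalGeneration

# Untested sketch: request PyTorch's scaled_dot_product_attention backend
# at load time. Support for this flag in parler_tts is an assumption.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "ai4bharat/indic-parler-tts",
    attn_implementation="sdpa",
    torch_dtype=torch.float16,
).to("cuda")
```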
Also, does torch.compile() optimize this model / model.forward when a consumer GPU like the RTX 3080 is used for voice generation?
(I tried torch.compile(model.forward) on indic-parler-tts, but the generate() call hung.)
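For what it's worth, autoregressive generation can trigger a recompilation at every new sequence length, which can look like a hang rather than a crash. A commonly suggested mitigation (untested here on this model) is to compile only the forward pass with dynamic shapes enabled:

```python
import torch

# Untested sketch: mark shapes as dynamic so decoding steps do not each
# trigger a fresh compile. Whether this fixes the hang on an RTX 3080
# with indic-parler-tts is an assumption.
model.forward = torch.compile(model.forward, mode="reduce-overhead", dynamic=True)
generation = model.generate(input_ids=input_ids,
                            prompt_input_ids=prompt_input_ids,
                            attention_mask=attention_mask)
```

Even with this, the first generate() call will still be slow while compilation happens, so a long initial delay is not necessarily a hang.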