speecht5_kavinda1

This model is a fine-tuned version of microsoft/speecht5_tts on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4020

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 40
  • mixed_precision_training: Native AMP
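As a sanity check on the hyperparameters above, the effective batch size and the linear warmup/decay schedule can be sketched in plain Python. This is an illustrative sketch, not the actual training code; the helper name and the total-step count (14 optimizer steps per epoch × 40 epochs, taken from the results table below) are assumptions.

```python
# Hyperparameters from the list above.
TRAIN_BATCH_SIZE = 4    # train_batch_size (per device)
GRAD_ACCUM_STEPS = 8    # gradient_accumulation_steps
WARMUP_STEPS = 100      # lr_scheduler_warmup_steps
BASE_LR = 1e-4          # learning_rate
TOTAL_STEPS = 560       # assumed: 14 optimizer steps/epoch * 40 epochs

# Effective (total) train batch size = per-device batch * accumulation steps.
effective_batch = TRAIN_BATCH_SIZE * GRAD_ACCUM_STEPS  # 32, matching total_train_batch_size

def linear_schedule_lr(step: int) -> float:
    """Illustrative linear schedule: warm up to BASE_LR, then decay linearly to 0."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(effective_batch)          # 32
print(linear_schedule_lr(50))   # halfway through warmup: 5e-05
print(linear_schedule_lr(100))  # peak learning rate: 0.0001
print(linear_schedule_lr(560))  # end of training: 0.0
```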

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8309        | 1.0   | 14   | 0.5929          |
| 0.6544        | 2.0   | 28   | 0.5246          |
| 0.6079        | 3.0   | 42   | 0.4954          |
| 0.5618        | 4.0   | 56   | 0.4706          |
| 0.4976        | 5.0   | 70   | 0.4192          |
| 0.4688        | 6.0   | 84   | 0.4151          |
| 0.4628        | 7.0   | 98   | 0.4090          |
| 0.4365        | 8.0   | 112  | 0.4050          |
| 0.4674        | 9.0   | 126  | 0.4146          |
| 0.4548        | 10.0  | 140  | 0.3966          |
| 0.426         | 11.0  | 154  | 0.4067          |
| 0.4211        | 12.0  | 168  | 0.3890          |
| 0.4106        | 13.0  | 182  | 0.3842          |
| 0.4125        | 14.0  | 196  | 0.3863          |
| 0.4079        | 15.0  | 210  | 0.3810          |
| 0.4003        | 16.0  | 224  | 0.3731          |
| 0.396         | 17.0  | 238  | 0.3912          |
| 0.392         | 18.0  | 252  | 0.3810          |
| 0.387         | 19.0  | 266  | 0.3835          |
| 0.3908        | 20.0  | 280  | 0.3847          |
| 0.3846        | 21.0  | 294  | 0.3785          |
| 0.3834        | 22.0  | 308  | 0.3706          |
| 0.386         | 23.0  | 322  | 0.3826          |
| 0.3868        | 24.0  | 336  | 0.3899          |
| 0.3814        | 25.0  | 350  | 0.3859          |
| 0.3711        | 26.0  | 364  | 0.3980          |
| 0.3743        | 27.0  | 378  | 0.3855          |
| 0.3702        | 28.0  | 392  | 0.3896          |
| 0.3682        | 29.0  | 406  | 0.3976          |
| 0.3704        | 30.0  | 420  | 0.3898          |
| 0.3633        | 31.0  | 434  | 0.3868          |
| 0.364         | 32.0  | 448  | 0.3934          |
| 0.3674        | 33.0  | 462  | 0.3838          |
| 0.357         | 34.0  | 476  | 0.3909          |
| 0.3581        | 35.0  | 490  | 0.4014          |
| 0.3554        | 36.0  | 504  | 0.3981          |
| 0.361         | 37.0  | 518  | 0.3970          |
| 0.3531        | 38.0  | 532  | 0.3970          |
| 0.3486        | 39.0  | 546  | 0.3998          |
| 0.3499        | 40.0  | 560  | 0.4020          |
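Note that the best validation loss in the table occurs well before the final epoch. A short script (with the epoch/validation-loss pairs transcribed from the table) locates the minimum:

```python
# (epoch, validation_loss) pairs transcribed from the training results table.
val_losses = {
    1: 0.5929, 2: 0.5246, 3: 0.4954, 4: 0.4706, 5: 0.4192,
    6: 0.4151, 7: 0.4090, 8: 0.4050, 9: 0.4146, 10: 0.3966,
    11: 0.4067, 12: 0.3890, 13: 0.3842, 14: 0.3863, 15: 0.3810,
    16: 0.3731, 17: 0.3912, 18: 0.3810, 19: 0.3835, 20: 0.3847,
    21: 0.3785, 22: 0.3706, 23: 0.3826, 24: 0.3899, 25: 0.3859,
    26: 0.3980, 27: 0.3855, 28: 0.3896, 29: 0.3976, 30: 0.3898,
    31: 0.3868, 32: 0.3934, 33: 0.3838, 34: 0.3909, 35: 0.4014,
    36: 0.3981, 37: 0.3970, 38: 0.3970, 39: 0.3998, 40: 0.4020,
}

# Epoch with the lowest validation loss.
best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch, val_losses[best_epoch])  # 22 0.3706
```

Since the final-epoch loss (0.4020) sits noticeably above the epoch-22 minimum (0.3706), an earlier checkpoint may generalize better if intermediate checkpoints were saved.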

Framework versions

  • Transformers 4.53.0
  • Pytorch 2.6.0+cu124
  • Datasets 2.14.4
  • Tokenizers 0.21.2