# speecht5_tamil_2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on a Tamil text-to-speech dataset that is not identified in this card.
It achieves the following results on the evaluation set:
- Loss: 0.4098
## Model description
More information needed
## Intended uses & limitations
More information needed
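The card omits a usage example. Below is a minimal inference sketch, assuming the checkpoint is published as `kavinda123321/speecht5_tamil_2` and that an x-vector from the `Matthijs/cmu-arctic-xvectors` dataset is an acceptable stand-in for the speaker embedding; whether raw Tamil script is supported depends on how the tokenizer was prepared for fine-tuning.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("kavinda123321/speecht5_tamil_2")
model = SpeechT5ForTextToSpeech.from_pretrained("kavinda123321/speecht5_tamil_2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Tokenize the input text ("hello" in Tamil).
inputs = processor(text="வணக்கம்", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker x-vector; this one is
# a generic example embedding, not the speaker used in fine-tuning.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```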
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `Seq2SeqTrainingArguments` follows this list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
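The card lists hyperparameters but no training code. As a minimal sketch, assuming the standard `transformers` fine-tuning setup, the values above map onto `Seq2SeqTrainingArguments` as shown below; `output_dir` is a hypothetical placeholder, and the dataset, data collator, and `Seq2SeqTrainer` wiring are omitted.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tamil_2",   # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 4 * 8 = 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```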
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4677 | 1.0 | 107 | 0.4364 |
| 0.4683 | 2.0 | 214 | 0.4379 |
| 0.4601 | 3.0 | 321 | 0.4262 |
| 0.4586 | 4.0 | 428 | 0.4258 |
| 0.4529 | 5.0 | 535 | 0.4237 |
| 0.4471 | 6.0 | 642 | 0.4234 |
| 0.446 | 7.0 | 749 | 0.4199 |
| 0.44 | 8.0 | 856 | 0.4195 |
| 0.4379 | 9.0 | 963 | 0.4185 |
| 0.4331 | 10.0 | 1070 | 0.4175 |
| 0.4307 | 11.0 | 1177 | 0.4169 |
| 0.427 | 12.0 | 1284 | 0.4140 |
| 0.4234 | 13.0 | 1391 | 0.4134 |
| 0.4225 | 14.0 | 1498 | 0.4117 |
| 0.4177 | 15.0 | 1605 | 0.4095 |
| 0.4158 | 16.0 | 1712 | 0.4098 |
| 0.4148 | 17.0 | 1819 | 0.4096 |
| 0.4123 | 18.0 | 1926 | 0.4095 |
| 0.4101 | 19.0 | 2033 | 0.4114 |
| 0.409 | 20.0 | 2140 | 0.4098 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.2