Udith-Sandaruwan committed
Commit 104885c · verified · 1 Parent(s): ec6a7a4

Final logs after training

session_logs/logs/events.out.tfevents.1743840193.beef31b85274.38701.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8633763f5365454b6be2ec9baccae9a7b7980b455e0221092d44c34e3c2f024c
+ size 6412
session_logs/logs/events.out.tfevents.1743840511.beef31b85274.38701.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17804ed67bf072e490c9f5797d40044a08e751187603674051166e528eea0173
+ size 354
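
Both event files are stored through Git LFS, so the hunks above contain only the three-line pointer (version, oid, size) rather than the TensorBoard data itself. Below is a minimal sketch for fetching and inspecting one of them, assuming huggingface_hub and tensorboard are installed; the repo_id is an assumption taken from the hub_model_id that appears in the training log further down.

    from huggingface_hub import hf_hub_download
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    # repo_id assumed from hub_model_id in the log below; adjust as needed.
    event_file = hf_hub_download(
        repo_id="decryptellix/Llama-3.1-8B-LoRA-only",
        filename="session_logs/logs/events.out.tfevents.1743840193.beef31b85274.38701.0",
    )

    acc = EventAccumulator(event_file)  # also accepts a log directory
    acc.Reload()                        # parse the event records from disk
    for tag in acc.Tags()["scalars"]:   # tag names depend on what the Trainer logged
        for event in acc.Scalars(tag):
            print(tag, event.step, event.value)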
session_logs/lora_finetuning.log CHANGED
@@ -1,11 +1,11 @@
- 2025-04-05 07:52:24,887 - Logging initialized for session: 127ed5b9-6ea8-4c45-bf49-957eccdf65f2
- 2025-04-05 07:58:36,552 - Using default tokenizer.
- 2025-04-05 07:58:39,637 - Hyperparameters: {'output_dir': './lora_finetuned', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': <IntervalStrategy.STEPS: 'steps'>, 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 2, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 2, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 0.0001, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 0.3, 'num_train_epochs': 1, 'max_steps': 5, 'lr_scheduler_type': <SchedulerType.LINEAR: 'linear'>, 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 4000, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './logs', 'logging_strategy': <IntervalStrategy.STEPS: 'steps'>, 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': <SaveStrategy.STEPS: 'steps'>, 'save_steps': 2500, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': True, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 500, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': './lora_finetuned', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': AcceleratorConfig(split_batches=False, dispatch_batches=None, even_batches=True, use_seedable_sampler=True, non_blocking=False, gradient_accumulation_kwargs=None, use_configured_state=False), 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': 'decryptellix/Llama-3.1-8B-LoRA-only', 'hub_strategy': <HubStrategy.EVERY_SAVE: 'every_save'>, 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'steps', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': None, 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'distributed_state': Distributed environment: NO
  Num processes: 1
  Process index: 0
  Local process index: 0
  Device: cuda
  , '_n_gpu': 2, '__cached__setup_devices': device(type='cuda', index=0), 'deepspeed_plugin': None}
- 2025-04-05 07:58:39,637 - Training details: {'Epochs': 1, 'Training Steps': 5, 'Final Loss': None, 'Final Learning Rate': None, 'Total Training Time (s)': '176.09'}
- 2025-04-05 07:58:39,637 - Training metrics: {'epochs': [], 'loss': [], 'learning_rate': [], 'training_time': 176.08783316612244}
- 2025-04-05 07:58:39,637 - Evaluation results: {'meteor_scores': {'meteor': 0.051369863013698634}, 'rouge_scores': {'rouge1': 0.0, 'rouge2': 0.0, 'rougeL': 0.0, 'rougeLsum': 0.0}, 'bleu_scores': {'bleu': 0.0, 'precisions': [0.0967741935483871, 0.0, 0.0, 0.0], 'brevity_penalty': 1.0, 'length_ratio': 1.0689655172413792, 'translation_length': 31, 'reference_length': 29}, 'perplexity': 119440350.0}
 
+ 2025-04-05 08:02:40,839 - Logging initialized for session: 5d75b9e2-7549-429c-ada6-88566b7a0591
+ 2025-04-05 08:11:24,139 - Using default tokenizer.
+ 2025-04-05 08:11:31,039 - Hyperparameters: {'output_dir': './lora_finetuned', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': <IntervalStrategy.STEPS: 'steps'>, 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 2, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 2, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 0.0001, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 0.3, 'num_train_epochs': 1, 'max_steps': 5, 'lr_scheduler_type': <SchedulerType.LINEAR: 'linear'>, 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 4000, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './logs', 'logging_strategy': <IntervalStrategy.STEPS: 'steps'>, 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': <SaveStrategy.STEPS: 'steps'>, 'save_steps': 2500, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': True, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 500, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': './lora_finetuned', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': AcceleratorConfig(split_batches=False, dispatch_batches=None, even_batches=True, use_seedable_sampler=True, non_blocking=False, gradient_accumulation_kwargs=None, use_configured_state=False), 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': 'decryptellix/Llama-3.1-8B-LoRA-only', 'hub_strategy': <HubStrategy.EVERY_SAVE: 'every_save'>, 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'steps', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': None, 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'distributed_state': Distributed environment: NO
  Num processes: 1
  Process index: 0
  Local process index: 0
  Device: cuda
  , '_n_gpu': 2, '__cached__setup_devices': device(type='cuda', index=0), 'deepspeed_plugin': None}
+ 2025-04-05 08:11:31,039 - Training details: {'Epochs': 1, 'Training Steps': 5, 'Final Loss': None, 'Final Learning Rate': None, 'Total Training Time (s)': '177.61'}
+ 2025-04-05 08:11:31,039 - Training metrics: {'epochs': [], 'loss': [], 'learning_rate': [], 'training_time': 177.61321878433228}
+ 2025-04-05 08:11:31,039 - Evaluation results: {'meteor_scores': {'meteor': 0.2126057702514999}, 'rouge_scores': {'rouge1': 0.12309523809523809, 'rouge2': 0.09166666666666666, 'rougeL': 0.12333333333333334, 'rougeLsum': 0.12666666666666665}, 'bleu_scores': {'bleu': 0.10685599389053295, 'precisions': [0.19803746654772525, 0.11989100817438691, 0.08695652173913043, 0.06314797360980208], 'brevity_penalty': 1.0, 'length_ratio': 1.7681388012618298, 'translation_length': 1121, 'reference_length': 634}, 'perplexity': 135658910.0}
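
For reference, the hyperparameter dump above corresponds to a transformers.TrainingArguments configured roughly as in the sketch below (values copied from the log entry; eval_strategy assumes a recent transformers release, where it replaced evaluation_strategy).

    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="./lora_finetuned",
        per_device_train_batch_size=8,
        per_device_eval_batch_size=2,
        gradient_accumulation_steps=2,
        learning_rate=1e-4,
        max_grad_norm=0.3,
        num_train_epochs=1,
        max_steps=5,                # hard cap: only 5 optimizer steps
        warmup_steps=4000,          # never completed, given max_steps=5
        lr_scheduler_type="linear",
        logging_dir="./logs",
        logging_steps=500,
        eval_strategy="steps",
        eval_steps=500,
        save_steps=2500,
        seed=42,
        bf16=True,
        tf32=True,
        optim="paged_adamw_8bit",   # requires bitsandbytes
        gradient_checkpointing=True,
        report_to=["tensorboard"],
        push_to_hub=True,
        hub_model_id="decryptellix/Llama-3.1-8B-LoRA-only",
    )

Note that logging_steps=500 is far larger than max_steps=5, so no intermediate loss or learning-rate values were ever recorded; this is why the 'loss' and 'learning_rate' lists in both Training metrics entries are empty.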
session_logs/lora_finetuning_report.pdf CHANGED
Binary files a/session_logs/lora_finetuning_report.pdf and b/session_logs/lora_finetuning_report.pdf differ
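
The metric dictionaries in the Evaluation results lines have the same shape as the outputs of the Hugging Face evaluate library's meteor, rouge, and bleu metrics, so they were plausibly produced along these lines (a sketch only; the predictions and references below are hypothetical placeholders, since the actual eval texts are not part of this commit).

    import math
    import evaluate

    predictions = ["generated answer ..."]  # hypothetical placeholder
    references = ["reference answer ..."]   # hypothetical placeholder

    meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)
    rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
    bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)
    print(meteor, rouge, bleu)

    # Perplexity is typically exp(eval_loss): the logged 135658910.0 implies an
    # eval loss of about 18.7 (math.log(135658910.0) is roughly 18.73), which is
    # consistent with a LoRA adapter that trained for only 5 steps.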