YuchenLi01 committed
Commit 3e326c0 (verified)
Parent(s): a54d5b3

Model save
README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr5e-06_0try1acW2wYtf74lS7e47quvsGidTvEKwgOXV7m5YvCDYL68FrS)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr5e-06_0try10hSREDaJjQomDtyiyuAfn7zfiVlh1nkj8FKgdDrkRQBAhu)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
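The README cites the DPO paper linked above. As a hedged illustration only (not this run's actual training code), the per-example DPO loss can be sketched as below; the value beta=0.1 and all log-probabilities are made-up inputs, and `dpo_loss` is a hypothetical helper:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)).

    Each argument is a summed token log-probability of a full response;
    the "ratio" terms compare the policy against the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)); falls below log(2) once the policy prefers
    # the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Identical policy and reference margins -> loss is exactly log(2).
baseline = dpo_loss(-10.0, -14.0, -10.0, -14.0)
# Policy has shifted toward the chosen response -> loss drops below log(2).
improved = dpo_loss(-10.0, -14.0, -11.0, -13.0)
```

The actual run would compute these log-probabilities with the policy and reference language models over batches; this sketch only shows the scalar objective.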
all_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.43372609805927037,
-    "train_runtime": 31153.5653,
+    "train_loss": 0.4308550809242281,
+    "train_runtime": 31776.7244,
     "train_samples": 45608,
-    "train_samples_per_second": 1.464,
-    "train_steps_per_second": 0.023
+    "train_samples_per_second": 1.435,
+    "train_steps_per_second": 0.022
 }
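The new throughput figures follow directly from the sample count and runtime. A quick consistency check — note the effective batch size of 64 is only inferred from "ebs64" in the run name, not stated in this file:

```python
import math

# Values copied from the updated all_results.json in this commit.
train_samples = 45608
train_runtime = 31776.7244      # seconds
effective_batch_size = 64       # assumption: inferred from "ebs64" in the run name

# samples/sec matches the reported 1.435 after 3-decimal rounding.
samples_per_second = train_samples / train_runtime
assert round(samples_per_second, 3) == 1.435

# One epoch at effective batch 64 -> ceil(45608 / 64) = 713 optimizer steps,
# which reproduces the reported 0.022 steps/sec.
steps = math.ceil(train_samples / effective_batch_size)
assert round(steps / train_runtime, 3) == 0.022
```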
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8d0fa99ff4ea33f13c22fc9551a79aff69ff4e25fb0526383e233f95cc5dbbc8
+oid sha256:216a89d19ce49bef067647a852fa4746f02f1cdd6ce360f6d0b2fc33ed3909b4
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:222ac28c8c3fb5501e5a81d2c9498b17b0be97e83ecd4233ea4bcfee97689d00
+oid sha256:d4459df7b14ceeaeca310fcd521216acdbf5757e02b1d30f8895bf5df0bd64c9
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2a0e1c53e41114db2c28e36e2349d84a3f9b371fe643f0050f26f942019825b8
+oid sha256:179c3963e797bec611e92b2ca25fe578d434ecb32e4053a4ba587e5ecd8f15f2
 size 4540516344
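The weight-file diffs above change only Git LFS pointer files: the repository stores a three-line key/value stanza (version, oid, size) while the actual tensors live in LFS storage. A minimal sketch of reading one such pointer — `parse_lfs_pointer` is a hypothetical helper, not part of any tool used here:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from model-00001-of-00003.safetensors in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:216a89d19ce49bef067647a852fa4746f02f1cdd6ce360f6d0b2fc33ed3909b4
size 4943162336
"""
info = parse_lfs_pointer(pointer)
```

Note the unchanged `size` lines in each diff: retraining produced shards of identical byte length, and only the content hashes (`oid`) changed.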
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.43372609805927037,
-    "train_runtime": 31153.5653,
+    "train_loss": 0.4308550809242281,
+    "train_runtime": 31776.7244,
     "train_samples": 45608,
-    "train_samples_per_second": 1.464,
-    "train_steps_per_second": 0.023
+    "train_samples_per_second": 1.435,
+    "train_steps_per_second": 0.022
 }
trainer_state.json CHANGED
The diff for this file is too large to render.