YuchenLi01 committed · Commit d72f28d (verified) · Parent: ad91f57

Model save

README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr5e-07_1try1S4eJmGXVUhdSrsIAqeWu49ao5UELq4jSJBSvoOVdLYLRFy)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr5e-07_1try1UFvbARMrdmbScCV57Egcd1vrYCk45kjt43ejW9ou2fn8VJ)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
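The README change only repoints the Weights & Biases badge at a new run; the surrounding text states the model was trained with DPO on top of a Zephyr-7B SFT checkpoint, with an effective batch size of 64 and a learning rate of 5e-7 suggested by the run name. As a rough illustration only, the sketch below shows how such a run is typically wired up with TRL's DPOTrainer. The base model, dataset, beta, and batch layout are assumptions inferred from the run name, not read from this commit, and exact argument names vary across TRL versions.

# Minimal sketch, NOT the author's training script: a typical TRL DPO setup.
# Everything marked "assumed"/"inferred" is not present in this diff.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"   # inferred from "alignmentZephyr7BSftFull" in the run name
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data with "prompt"/"chosen"/"rejected" columns; the actual
# UltraFeedback/Skywork-filtered set used for this run is not identified by the diff.
train_ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="zephyr-7b-dpo",
    learning_rate=5e-7,                 # "lr5e-07" in the run name
    num_train_epochs=1,                 # matches "epoch": 1.0 in all_results.json
    per_device_train_batch_size=2,      # assumed split:
    gradient_accumulation_steps=4,      # 2 x 4 x 8 GPUs -> effective batch size 64 ("ebs64")
    beta=0.1,                           # DPO KL-penalty strength; assumed default
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    processing_class=tokenizer,         # older TRL versions use `tokenizer=` instead
)
trainer.train()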
all_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.4091535410406212,
-    "train_runtime": 30868.5922,
+    "train_loss": 0.42380392869090633,
+    "train_runtime": 30442.3633,
     "train_samples": 45608,
-    "train_samples_per_second": 1.477,
+    "train_samples_per_second": 1.498,
     "train_steps_per_second": 0.023
 }
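The changed metrics are internally consistent: train_samples_per_second is just train_samples divided by train_runtime, and with the effective batch size of 64 suggested by the run name, train_steps_per_second also works out to the reported 0.023. A quick check (the batch size is an assumption taken from the run name, not from this file):

# Recompute the throughput figures reported in all_results.json.
train_samples = 45608
new_runtime_s = 30442.3633
old_runtime_s = 30868.5922

print(round(train_samples / new_runtime_s, 3))   # 1.498 -> new train_samples_per_second
print(round(train_samples / old_runtime_s, 3))   # 1.477 -> the value being replaced

# Assuming the effective batch size of 64 implied by "ebs64" in the run name:
steps = train_samples / 64                       # ~713 optimizer steps per epoch
print(round(steps / new_runtime_s, 3))           # 0.023 -> matches train_steps_per_second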
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:caac3875a6859a912d12c4aa5c62a654428619a4f96edab3d6928b82953f6028
+oid sha256:ce3bcf28868356f51d8725d84a98ab28379d5496e713433c15d81f375ebe2262
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e6bc04bc6757f3fa1bb90af978b31a49b51fe020eca758d1a9d633f15af7643f
+oid sha256:a0d9453f47627daaf61ba738be74693c359cfb6df0955ac54f76c253ceb64901
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6d1e0885de00ffd253a274430d4e6c2d4bae0fedca1a794ecbbcf0f126025d99
+oid sha256:74425ad4eaae005a5b048da8fb8205069b87ce0eb45500a2724b5533d5e14aad
 size 4540516344
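The three .safetensors entries are Git LFS pointer files: the repository tracks only the sha256 OID and byte size, so the diffs above mean the shard contents changed while their sizes stayed identical. A downloaded shard can be checked against its pointer with a plain SHA-256 over the file; the local path below is simply wherever the shard was saved.

# Verify a downloaded shard against the sha256 recorded in its LFS pointer.
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB shards need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# New OID for model-00001-of-00003.safetensors from the diff above.
expected = "ce3bcf28868356f51d8725d84a98ab28379d5496e713433c15d81f375ebe2262"
assert file_sha256("model-00001-of-00003.safetensors") == expected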
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.4091535410406212,
-    "train_runtime": 30868.5922,
+    "train_loss": 0.42380392869090633,
+    "train_runtime": 30442.3633,
     "train_samples": 45608,
-    "train_samples_per_second": 1.477,
+    "train_samples_per_second": 1.498,
     "train_steps_per_second": 0.023
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff