AQuarterMile committed on
Commit 92e38eb · verified · 1 Parent(s): 67bcdd2

Update README.md

Files changed (1):
  1. README.md +19 -18
README.md CHANGED
@@ -1,34 +1,28 @@
 ---
 library_name: transformers
-license: other
+license: apache-2.0
 base_model: Qwen/Qwen2.5-7B-Instruct
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
-- name: WritingBench-Critic-Model-7B
+- name: WritingBench-Critic-Model-Qwen-7B
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# WritingBench-Critic-Model-7B
+# WritingBench-Critic-Model-Qwen-7B
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the 50K SFT dataset.
+<p align="center">
+📃 <a href="https://arxiv.org/abs/2503.05244" target="_blank">[Paper]</a> • 🚀 <a href="https://github.com/X-PLUG/WritingBench" target="_blank">[GitHub Repo]</a>
+</p>
 
-## Model description
+This model is fine-tuned from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on a 50K SFT dataset for writing evaluation tasks.
 
-More information needed
+For each criterion, the evaluator independently assigns a score on a 10-point scale to a response, providing both a score and a justification.
 
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -49,13 +43,20 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 3
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.46.1
 - Pytorch 2.5.1+cu124
 - Datasets 3.1.0
 - Tokenizers 0.20.3
+
+## 📝 Citation
+
+```
+@misc{wu2025writingbench,
+  title={WritingBench: A Comprehensive Benchmark for Generative Writing},
+  author={Yuning Wu and Jiahao Mei and Ming Yan and Chenliang Li and Shaopeng Lai and Yuran Ren and Zijia Wang and Ji Zhang and Mengyue Wu and Qin Jin and Fei Huang},
+  year={2025},
+  url={https://arxiv.org/abs/2503.05244},
+}
+```
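The per-criterion scoring scheme the updated card describes (one independent 10-point score plus a justification per criterion) can be sketched as a prompt builder and response parser. This is a minimal illustration only: the actual prompt template and output format used by the WritingBench critic live in the project's GitHub repo, so `PROMPT_TEMPLATE` and the `Score:`/`Justification:` labels below are assumptions, not taken from the model card.

```python
import re
from typing import Optional, Tuple

# Hypothetical prompt template -- an assumption for illustration, not the
# actual WritingBench critic prompt (see the project's GitHub repo for that).
PROMPT_TEMPLATE = (
    "Evaluate the response below against a single criterion.\n"
    "Criterion: {criterion}\n"
    "Response: {response}\n"
    "Give a score from 1 to 10 and a brief justification, formatted as:\n"
    "Score: <1-10>\n"
    "Justification: <text>"
)


def build_prompt(criterion: str, response: str) -> str:
    """Format one independent per-criterion evaluation query."""
    return PROMPT_TEMPLATE.format(criterion=criterion, response=response)


def parse_critique(text: str) -> Optional[Tuple[int, str]]:
    """Extract (score, justification) from a critic reply, or None if malformed."""
    score_match = re.search(r"Score:\s*(\d+)", text)
    if not score_match:
        return None
    score = int(score_match.group(1))
    if not 1 <= score <= 10:  # enforce the 10-point scale
        return None
    just_match = re.search(r"Justification:\s*(.+)", text, re.DOTALL)
    justification = just_match.group(1).strip() if just_match else ""
    return score, justification
```

Generation itself would go through the usual `transformers` chat flow (`AutoTokenizer`/`AutoModelForCausalLM.from_pretrained("AQuarterMile/WritingBench-Critic-Model-Qwen-7B")`), calling `build_prompt` once per criterion so each score is assigned independently.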