pipeline_tag: text-generation
---

# DRT

<p align="center">
🤗 <a href="https://huggingface.co/Krystalan/DRT-7B">DRT-7B</a> | 🤗 <a href="https://huggingface.co/Krystalan/DRT-8B">DRT-8B</a> | 🤗 <a href="https://huggingface.co/Krystalan/DRT-14B">DRT-14B</a> | 📑 <a href="https://arxiv.org/abs/2412.17498">Paper</a>
</p>

This repository contains the resources for our paper ["DRT: Deep Reasoning Translation via Long Chain-of-Thought"](https://arxiv.org/abs/2412.17498).

If you find this work useful, please consider citing our paper:
```
@article{wang2024drt,
  title={DRT: Deep Reasoning Translation via Long Chain-of-Thought},
  author={Wang, Jiaan and Meng, Fandong and Liang, Yunlong and Zhou, Jie},
  journal={arXiv preprint arXiv:2412.17498},
  year={2024}
}
```

## Introduction

In this work, we introduce DRT, an attempt to bring the success of long thought reasoning to neural machine translation (MT). To this end,
- 🌟 We mine English sentences with similes or metaphors from existing literature books, which are suitable for translation via long thought.
- 🌟 We design a multi-agent framework with three agents (i.e., a translator, an advisor and an evaluator) to synthesize MT samples with long thought; 22,264 samples are synthesized in total. A rough sketch of this loop follows the list below.
- 🌟 We train DRT-7B, DRT-8B and DRT-14B using Qwen2.5-7B-Instruct, Llama-3.1-8B-Instruct and Qwen2.5-14B-Instruct as backbones, respectively.
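
To make the second bullet concrete, here is a rough, hypothetical sketch of such a synthesis loop; `chat`, the agent prompts and the acceptance threshold are illustrative assumptions, not the paper's exact setup:

```python
# Hypothetical sketch of the translator–advisor–evaluator loop used to
# synthesize long-thought MT samples. `chat`, the prompts and the 85-point
# acceptance threshold are illustrative, not the paper's exact setup.
def chat(system: str, user: str) -> str:
    """Placeholder for a chat-completion call to any backbone LLM."""
    raise NotImplementedError

def synthesize_long_thought(source: str, max_rounds: int = 3) -> dict:
    thought = []
    translation = chat("You are a translator.",
                       f"Translate into Chinese:\n{source}")
    for _ in range(max_rounds):
        advice = chat("You are an advisor.",
                      f"Critique this translation of '{source}':\n{translation}")
        score = float(chat("You are an evaluator.",
                           f"Rate this translation of '{source}' from 0 to 100:\n{translation}"))
        thought.append({"draft": translation, "advice": advice, "score": score})
        if score >= 85:   # evaluator satisfied; keep the sample
            break
        translation = chat("You are a translator.",
                           f"Revise the translation using this advice:\n{advice}\n{translation}")
    return {"source": source, "thought": thought, "output": translation}
```
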
> Our goal is not to achieve competitive performance with OpenAI’s O1 in neural machine translation (MT). Instead, we explore technical routes to bring the success of long thought to MT. To this end, we introduce DRT, *a byproduct of our exploration*, and we hope it could facilitate the corresponding research in this direction.
## Models

| | Backbone | Model Access |
| :--: | :--: | :--: |
| DRT-7B | 🤗 <a href="https://huggingface.co/Qwen/Qwen2.5-7B-Instruct">Qwen2.5-7B-Instruct</a> | 🤗 <a href="https://huggingface.co/Krystalan/DRT-7B">DRT-7B</a> |
| DRT-8B | 🤗 <a href="https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct">Llama-3.1-8B-Instruct</a> | 🤗 <a href="https://huggingface.co/Krystalan/DRT-8B">DRT-8B</a> |
| DRT-14B | 🤗 <a href="https://huggingface.co/Qwen/Qwen2.5-14B-Instruct">Qwen2.5-14B-Instruct</a> | 🤗 <a href="https://huggingface.co/Krystalan/DRT-14B">DRT-14B</a> |
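
To fetch a checkpoint ahead of time, a small sketch with `huggingface_hub` (any of the three repo ids above works; the 7B id is used as an example):

```python
# Pre-download a DRT checkpoint from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Krystalan/DRT-7B")  # or DRT-8B / DRT-14B
print("Checkpoint cached at:", local_dir)
```
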
### Model Performance
| | GRF | CometKiwi | GRB | BLEU | CometScore |
| :--: | :--: | :--: | :--: | :--: | :--: |
| Qwen2.5-14B-Instruct | 84.74 | 72.01 | 80.85 | 30.23 | 78.84 |
| Marco-o1-7B | 82.41 | 71.62 | 77.50 | 29.48 | 77.41 |
| QwQ-32B-preview | 86.31 | 71.48 | 83.08 | 27.46 | 78.68 |
| DRT-8B | 84.49 | 70.85 | 80.80 | 32.67 | 78.81 |
| DRT-7B | 85.57 | 71.78 | 82.38 | 35.54 | 80.19 |
| DRT-14B | **87.19** | **72.11** | **83.20** | **36.46** | **80.64** |
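
The BLEU column can be recomputed with sacrebleu, as in the minimal sketch below; the placeholder strings stand in for real system outputs and references, and GRF/GRB are GPT-judged scores from the paper that are not reproduced here:

```python
# Minimal corpus-level BLEU with sacrebleu; replace the placeholders with
# real system outputs (hyps) and reference translations (refs).
import sacrebleu

hyps = ["系统译文一", "系统译文二"]    # system outputs, one per source sentence
refs = [["参考译文一", "参考译文二"]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hyps, refs, tokenize="zh")  # Chinese tokenization
print(f"BLEU = {bleu.score:.2f}")
```
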
### Model Prompts
During model inference, please use the following prompts:
- System prompt: `You are a philosopher skilled in deep thinking, accustomed to exploring complex problems with profound insight.`
- User prompt: `Please translate the following text from English to Chinese:\n[An English text]`

DRT models will first generate the thought and then provide the final translation, with the following format:
```
<thought>
[Reasoning process]
</thought>
<output>
[Final translation]
</output>
```
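
A small helper for assembling these prompts and stripping the wrapper tags might look like the sketch below; `build_messages` and `extract_translation` are illustrative names of ours, not part of the repository:

```python
# Build the chat messages for a DRT model and pull the final translation out
# of the <thought>/<output> wrapper. Pure string handling; no model required.
import re

SYSTEM_PROMPT = ("You are a philosopher skilled in deep thinking, accustomed "
                 "to exploring complex problems with profound insight.")

def build_messages(text: str) -> list:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": "Please translate the following text from English to Chinese:\n" + text},
    ]

def extract_translation(response: str) -> str:
    match = re.search(r"<output>(.*?)</output>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()  # fallback: raw text
```

To load a DRT model with Hugging Face Transformers:
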
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Krystalan/DRT-7B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # typical loading arguments; adjust for your hardware
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
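
Continuing from the loaded `model` and `tokenizer`, a minimal generation sketch; it reuses the hypothetical `build_messages` and `extract_translation` helpers from the earlier sketch and is not the repository's verbatim quickstart:

```python
# Format the prompt with the chat template, generate, then strip the thought.
text = "Women were weeping and children crying."

input_ids = tokenizer.apply_chat_template(
    build_messages(text),
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
print(extract_translation(response))
```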
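
The models can also be queried through an OpenAI-compatible endpoint. The sketch below assumes a server launched separately, e.g. with vLLM via `vllm serve Krystalan/DRT-7B --port 8000`; the port, API key and temperature are illustrative:

```python
# Query a DRT model through an OpenAI-compatible endpoint (e.g., vLLM).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

chat_response = client.chat.completions.create(
    model="Krystalan/DRT-7B",
    messages=build_messages("The moon was a ghostly galleon tossed upon cloudy seas."),
    temperature=0.1,
)
print("Chat response:", chat_response.choices[0].message.content)
```
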
## Translation Cases
| Source Sentence | Qwen2.5-14B-Instruct | QwQ-32B-preview | DRT-14B |
| :--: | :--: | :--: | :--: |
|Women were weeping and children crying, and all were going as fast as seemingly lay in their power, looking behind now and then as if pursued by some deadly enemy. | 妇女在哭泣,孩子们在哀号,所有人都似乎在尽全力快速前行,不时回头张望,仿佛背后有某个致命的敌人在追赶。 | 妇女们哭泣着,孩子们哭喊着,每个人都在尽力加快脚步,不时回头张望,仿佛背后有致命的敌人在追击。 | 妇女悲泣,孩童哀号,众人疾奔,不时回首,仿佛有致命之敌在后紧追。 |
|Wait through this season of the rain Till suns of autumn dry the plain, Then shall thy giant foe, and all His host and realm, before thee fall. | 等待这雨季过去,直到秋日的阳光晒干原野,那时你的巨敌及其部众和王国都将倒在你面前。 | 忍过雨季漫长夜,待秋阳升空,晒干大地湿润色,那时,巨敌垮台,他的军队和王国,全在你脚下。 | 历尽这雨季,待秋日阳光驱散平原上的最后一丝湿润,那时,你那巨敌,连同他的军队与王国,都将匍匐在你脚下。 |