Update README.md
README.md CHANGED
````diff
@@ -23,7 +23,7 @@ pipeline_tag: text-generation
   <a href="https://github.com/agentica-project/rllm" style="margin: 2px;">
     <img alt="Code" src="https://img.shields.io/badge/RLLM-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
-  <a href="https://
+  <a href="https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51" target="_blank" style="margin: 2px;">
     <img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="https://x.com/Agentica_" style="margin: 2px;">
@@ -39,7 +39,7 @@ pipeline_tag: text-generation
 ## DeepCoder Overview
 DeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving similar performance to OpenAI's o3-mini with just 14B parameters.
 
-<div style="
+<div style="margin: 0 auto;">
   <img src="https://cdn-uploads.huggingface.co/production/uploads/654037be97949fd2304aab7f/r3-vzkItOCrMf1qldW0Mj.png" style="width: 100%;" />
 </div>
 
@@ -111,4 +111,11 @@ This permissive license ensures that researchers, developers, and enthusiasts wo
 
 ## Citation
 ```bibtex
+@misc{deepcoder2025,
+  title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},
+  author={Michael Luo and Sijun Tan and Roy Huang and Xiaoxiang Shi and Rachel Xin and Colin Cai and Ameen Patel and Alpay Ariyak and Qingyang Wu and Ce Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica and Tianjun Zhang},
+  howpublished={\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},
+  note={Notion Blog},
+  year={2025}
+}
 ```
````
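The overview above reports a 60.6% Pass@1 score on LiveCodeBench. For context, Pass@k is the standard functional-correctness metric for code models; the commit does not include the evaluation code, but the widely used unbiased estimator (from the HumanEval paper) can be sketched as follows — `n` generations per problem, `c` of which pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes the test suite."""
    if n - c < k:
        # Fewer incorrect samples than k: every draw of k contains a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the expected fraction of correct samples:
print(pass_at_k(10, 3, 1))  # 0.3
print(pass_at_k(1, 1, 1))   # 1.0
```

A benchmark-level Pass@1 like the 60.6% figure is then the mean of this per-problem estimate over all problems; the exact sampling settings (temperature, `n`) used for DeepCoder are not stated in this diff.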