Update README.md
README.md CHANGED
@@ -17,8 +17,6 @@ model-index:
 license: llama3
 language:
 - en
-datasets:
-- berkeley-nest/Nectar
 widget:
 - example_title: OpenBioLLM-8B
   messages:
@@ -86,13 +84,13 @@ OpenBioLLM-8B is an advanced open source language model designed specifically fo
 
 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open-source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5, Gemini, and Meditron-70B on biomedical benchmarks.
 
-🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the
+🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
 
 <div align="center">
 <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
 </div>
 
-- **Policy Optimization**: [
+- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
 - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
 - **Fine-tuning dataset**: Custom Medical Instruct dataset (we plan to release a sample training dataset in our upcoming paper; please stay updated)
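For readers unfamiliar with the DPO objective cited in the added "Policy Optimization" line, the sketch below shows the core preference loss in PyTorch. It assumes summed per-response log-probabilities have already been computed for each (chosen, rejected) pair, e.g. from the Nectar ranking data; the function name and signature are illustrative, and this is not OpenBioLLM's actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO preference loss (Rafailov et al., arXiv:2305.18290).

    Each tensor holds the summed log-probability of a chosen or
    rejected response under either the trainable policy or the
    frozen reference model (here that would be the base Llama-3
    checkpoint). `beta` controls how far the policy may drift
    from the reference.
    """
    # Implicit reward margins: how much the policy has moved
    # relative to the reference on each response
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin): push chosen above rejected
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()
```

In practice this loss is rarely hand-rolled; libraries such as Hugging Face TRL provide a `DPOTrainer` implementing the same objective, though the card does not state which implementation was used here.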