Commit 37beb66 (parent 6b17915) by Pankaj Mathur: Update README.md
---
language:
- en
library_name: adapter-transformers
---

# Wizardlm Alpaca Dolly Orca Open_LLaMa_3b

An Open_LLaMA-3B model trained on custom explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca, and Dolly-V2 datasets and applying the dataset construction approaches of the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
# Dataset

We trained the [OpenLLaMa-3B model](https://github.com/openlm-research/open_llama) on a custom explain-tuned [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html) (~52K examples) created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate this custom Alpaca dataset, in contrast to the vanilla instruction-tuning approach used by the original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).
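The explain-tuning idea above — prepending an Orca-style system instruction to each Alpaca instruction/input record — can be sketched as follows. Note this is a minimal illustration: the system instruction wording and the `### System:` prompt template are assumptions for the example, not the exact text or format used in training.

```python
import random

# Hypothetical examples in the spirit of the 15 Orca system instructions;
# the real training data uses the instruction set from the Orca paper.
SYSTEM_INSTRUCTIONS = [
    "You are an AI assistant. Provide a detailed answer so the user "
    "doesn't need to search outside to understand the answer.",
    "You are an AI assistant that helps people find information. "
    "Explain your reasoning step by step.",
]

def build_prompt(system: str, instruction: str, input_text: str = "") -> str:
    """Combine a system instruction with an Alpaca instruction/input pair
    into a single training prompt (template is an assumption)."""
    parts = [f"### System:\n{system}", f"### Instruction:\n{instruction}"]
    if input_text:
        parts.append(f"### Input:\n{input_text}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)

# One explain-tuned record: pick a system instruction for an Alpaca example.
example = build_prompt(
    random.choice(SYSTEM_INSTRUCTIONS),
    "Explain why the sky is blue.",
)
print(example)
```

Applied over the ~52K WizardLM/Alpaca/Dolly-V2 records, this yields prompts that ask the model to explain rather than merely answer.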