Pankaj Mathur committed on
Commit b639e19 · 1 Parent(s): 37beb66

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -5,14 +5,14 @@ language:
 library_name: adapter-transformers
 ---
 # Wizardlm Alpaca Dolly Orca Open_LLaMa_3b
-An Open_LLaMA-3B model trained on custom explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying [Orca Research Paper](https://arxiv.org/abs/2306.02707) dataset construction approaches.
+An Open_LLaMA-3B model trained on custom explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets and applying Orca Research Paper dataset construction approaches.
 
 
 # Dataset
 
 We trained [OpenLLaMa-3B model](https://github.com/openlm-research/open_llama) on custom explain tuned [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html) (~52K) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707).
 
-We leverage all of the 15 system instructions provided in [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate custom Alpaca dataset, in contrast to vanilla instruction tuning approaches used by original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).
+We leverage all of the 15 system instructions provided in Orca Research Paper. to generate custom Alpaca dataset, in contrast to vanilla instruction tuning approaches used by original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).
 
 This helps student model aka [alpaca_orca_open_llama_3b](psmathur/alpaca_orca_open_llama_3b) to learn ***thought*** process from teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).