nakashi committed
Update README.md
README.md
CHANGED
@@ -24,7 +24,8 @@ This llama model was trained 2x faster with [Unsloth](https://github.com/unsloth

# how to use
Inference code that uses this adapter to produce outputs for ELYZA-tasks-100-TV. A Jupyter Notebook environment is assumed.
-
+
+## Install the required libraries
```bash
!pip install -U bitsandbytes
!pip install -U transformers

@@ -33,7 +34,7 @@ This llama model was trained 2x faster with [Unsloth](https://github.com/unsloth
!pip install -U peft
```

-Setup
+## Setup
```python
from transformers import (
    AutoModelForCausalLM,

@@ -51,7 +52,7 @@ adapter_id = "shiki07/llm-jp-3-13b-it_lora"
eval_data_path = "./elyza-tasks-100-TV_0.jsonl"  # specify the path to elyza-tasks-100-TV
```

-This takes a while.
+## This takes a while.
```python
# QLoRA config
bnb_config = BitsAndBytesConfig(

@@ -72,7 +73,7 @@ tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True,
model = PeftModel.from_pretrained(model, adapter_id, token = HF_TOKEN)
```

-Data loading and inference
+## Data loading and inference
```python
# Load the dataset.
datasets = []
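
The hunks above truncate each notebook cell after its first few lines. A minimal sketch of the preparation cell, assuming the base model is `llm-jp/llm-jp-3-13b` (the `base_model_id` value is not visible in the diff) and that `HF_TOKEN` holds a Hugging Face access token; only `adapter_id` and `eval_data_path` appear verbatim in the diff:

```python
# Preparation cell (sketch). Only adapter_id and eval_data_path appear in the diff;
# base_model_id and HF_TOKEN are assumptions.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from peft import PeftModel

HF_TOKEN = "hf_..."  # your Hugging Face access token

base_model_id = "llm-jp/llm-jp-3-13b"  # assumed base model for the llm-jp-3-13b-it_lora adapter
adapter_id = "shiki07/llm-jp-3-13b-it_lora"
eval_data_path = "./elyza-tasks-100-TV_0.jsonl"  # path to elyza-tasks-100-TV
```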
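A sketch of the model-loading cell, continuing from the preparation cell above. The diff only shows `# QLoRA config`, the opening of `BitsAndBytesConfig(`, the tokenizer call, and the `PeftModel.from_pretrained` line, so the concrete 4-bit settings below are assumptions:

```python
# Model-loading cell (sketch), continuing from the preparation cell above.

# QLoRA config: load the 13B base model in 4-bit so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantized base model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True, token=HF_TOKEN)

# Attach the LoRA adapter (this line appears verbatim in the diff).
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
```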
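A sketch of the data-loading and inference cell, continuing from the cells above. The diff stops at `datasets = []`, so the JSONL field names (`task_id`, `input`), the prompt template, the generation parameters, and the output file name are all assumptions:

```python
# Data-loading and inference cell (sketch), continuing from the cells above.
import json

from tqdm import tqdm

# Read elyza-tasks-100-TV: one JSON object per line.
datasets = []
with open(eval_data_path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            datasets.append(json.loads(line))

# Generate one answer per task.
results = []
model.eval()
for data in tqdm(datasets):
    prompt = f"### 指示\n{data['input']}\n### 回答\n"  # assumed prompt template
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    answer = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    results.append({"task_id": data["task_id"], "input": data["input"], "output": answer})

# Save the predictions as JSONL (output file name is an assumption).
with open("./outputs.jsonl", "w", encoding="utf-8") as f:
    for r in results:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")
```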