RealDragonMA committed on
Commit 2069eb0 · verified · 1 Parent(s): 0ca8a3e

Model save
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 base_model: HuggingFaceTB/SmolLM2-135M-Instruct
-datasets: wykonos/movies
 library_name: transformers
 model_name: Pelliculum-Chatbot
 tags:
@@ -12,7 +11,7 @@ licence: license
 
 # Model Card for Pelliculum-Chatbot
 
-This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) on the [wykonos/movies](https://huggingface.co/datasets/wykonos/movies) dataset.
+This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -28,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/realdragonma-pelliculum/huggingface/runs/7ck14yco)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/realdragonma-pelliculum/huggingface/runs/gucz3wwp)
 
 
 This model was trained with SFT.
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3142d4bf44d6f36e6df0f2b228e1114f62327ca55de9d7a19a02e39d248deb52
+oid sha256:d7c0d86cfce92d8c1604bab7d4c9cc8c90cf852c6a21ec3bf0e5b294ed2ff9a2
 size 29523136
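The safetensors diff above touches only the Git LFS pointer file, not the weights themselves: a pointer records the spec version, a `sha256` oid, and the byte size, and here only the oid changes (the size stays 29523136). A minimal sketch of reading such a pointer — the `parse_lfs_pointer` helper is hypothetical, for illustration only, not part of git-lfs or Hugging Face tooling:

```python
# Parse a Git LFS pointer file into its key/value fields.
# Hypothetical helper for illustration; real parsing lives in git-lfs itself.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d7c0d86cfce92d8c1604bab7d4c9cc8c90cf852c6a21ec3bf0e5b294ed2ff9a2
size 29523136"""

parsed = parse_lfs_pointer(pointer)
print(parsed["oid"])   # content hash of the new adapter weights
print(parsed["size"])  # file size in bytes, unchanged across the commit
```

Because only the hash changed while the size is identical, the commit replaced the adapter weights with a retrained tensor file of the same shape.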
final_checkpoint/adapter_config.json CHANGED
@@ -23,9 +23,9 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
     "q_proj",
     "o_proj",
+    "v_proj",
     "k_proj"
   ],
   "task_type": "CAUSAL_LM",
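The only change to adapter_config.json is the position of "v_proj" inside target_modules. PEFT matches target modules by name, so the list order should not affect which attention projections get LoRA-wrapped; a quick stdlib check of the two variants (JSON abridged to the affected key):

```python
import json

# Abridged before/after versions of the "target_modules" entry from the diff.
before = json.loads('{"target_modules": ["v_proj", "q_proj", "o_proj", "k_proj"]}')
after = json.loads('{"target_modules": ["q_proj", "o_proj", "v_proj", "k_proj"]}')

# The lists differ in order, but the set of targeted projections is identical.
print(before["target_modules"] == after["target_modules"])            # False
print(set(before["target_modules"]) == set(after["target_modules"]))  # True
```

In other words, this part of the commit is cosmetic: the adapter still targets the q/k/v/o projections of the attention blocks.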
final_checkpoint/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:72ddc24bafaf651572e814a2b557a54545e68c2882ae5db26da28df50b090164
+oid sha256:d7c0d86cfce92d8c1604bab7d4c9cc8c90cf852c6a21ec3bf0e5b294ed2ff9a2
 size 29523136
final_checkpoint/tokenizer.json CHANGED
@@ -1,6 +1,11 @@
 {
   "version": "1.0",
-  "truncation": null,
+  "truncation": {
+    "direction": "Right",
+    "max_length": 1024,
+    "strategy": "LongestFirst",
+    "stride": 0
+  },
   "padding": null,
   "added_tokens": [
     {
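The tokenizer now carries an explicit truncation policy: truncate from the right, cap the total at 1024 tokens, use the LongestFirst strategy, with no stride. A rough sketch of what LongestFirst means for a sequence pair — trim one token at a time from whichever sequence is currently longer until the pair fits (illustrative only; the real logic lives in the `tokenizers` Rust crate, and tie-breaking details may differ):

```python
# Illustrative re-implementation of the "LongestFirst" truncation strategy:
# while the combined length exceeds max_length, drop the last token
# (direction "Right") from the currently longer sequence.
def truncate_longest_first(seq_a: list, seq_b: list, max_length: int):
    a, b = list(seq_a), list(seq_b)
    while len(a) + len(b) > max_length:
        if len(a) >= len(b):
            a.pop()  # direction "Right": remove from the end
        else:
            b.pop()
    return a, b

a, b = truncate_longest_first(list(range(10)), list(range(4)), 8)
print(len(a), len(b))  # the longer sequence absorbs all the trimming: 4 4
```

With a single sequence (the chatbot's usual case), this reduces to simply dropping tokens from the right past position 1024.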
final_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:874fdbee4ea5bee9cacd9347514fbe9d320cab1b61dd5e029a7c41b7f7db36d3
+oid sha256:468d6144abe817c443d42dc9f38248e1b7dacc79a09c721a19ae706c7c779e73
 size 5688