
Add link to GitHub and improve description

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +20 -4
README.md CHANGED
@@ -1,11 +1,11 @@
 ---
-license: apache-2.0
 datasets:
 - togethercomputer/RedPajama-Data-1T
 language:
 - en
-pipeline_tag: text-generation
 library_name: transformers
+license: apache-2.0
+pipeline_tag: text-generation
 ---
 
 ## PDS-1.7B
@@ -14,7 +14,8 @@ library_name: transformers
 
 **PDS-1.7B** is a 1.7B model with the [Mistral](https://arxiv.org/abs/2310.06825) architecture, pre-trained from scratch on data selected from the CC split of [RedPajama](https://github.com/togethercomputer/RedPajama-Data) using the PDS framework.
 
-The PDS framework is based on [Pontryagin's maximum principle](https://en.wikipedia.org/wiki/Pontryagin%27s_maximum_principle#:~:text=Pontryagin's%20maximum%20principle%20is%20used,the%20state%20or%20input%20controls.) for optimal pre-training data selection, which not only enjoys strong theoretical support but is also scalable for training large language models.
+This work investigates the selection of high-quality pre-training data from massive corpora to enhance LMs' capabilities for downstream usage.
+We formulate data selection as a generalized Optimal Control problem, which can be solved theoretically by Pontryagin's Maximum Principle (PMP), yielding a set of necessary conditions that characterize the relationship between optimal data selection and LM training dynamics. Based on these theoretical results, we introduce PMP-based Data Selection (PDS), a framework that approximates optimal data selection by solving the PMP conditions.
 
 Please refer to our [paper](https://arxiv.org/abs/2410.07064) for more details.
 
@@ -42,6 +43,21 @@ PDS-selected data improves the performance of language models pre-trained from s
 
 [Conventional Pre-training](https://huggingface.co/Data-Selection/BSL-1.7B)
 
+### Sample Usage
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "Data-Selection/PDS-1.7B"
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id)
+
+inputs = tokenizer("Hello, my name is", return_tensors="pt")
+outputs = model.generate(**inputs)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
+
 ### Citation
 
 ```bibtex
@@ -51,4 +67,4 @@ PDS-selected data improves the performance of language models pre-trained from s
   journal={arXiv preprint arXiv:2410.07064},
   year={2024}
 }
-```
+```
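
For context on the description added above: PDS approximates optimal data selection by scoring candidate examples and keeping only the highest-value fraction of the corpus for pre-training. The snippet below is a minimal, hypothetical sketch of that final selection step only; the function name, the `keep_ratio` value, and the toy scores are illustrative assumptions, not the paper's implementation, which derives its scores from the PMP conditions.

```python
# Hypothetical sketch (not the official PDS code): once each candidate example
# has a data-quality score, selection reduces to keeping the top fraction.
from typing import List, Tuple


def select_top_fraction(scored_corpus: List[Tuple[str, float]], keep_ratio: float = 0.5) -> List[str]:
    """Return the texts of the `keep_ratio` highest-scoring examples."""
    ranked = sorted(scored_corpus, key=lambda pair: pair[1], reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return [text for text, _ in ranked[:n_keep]]


# Toy corpus with made-up scores, purely for illustration.
corpus = [
    ("a well-written physics explainer ...", 0.92),
    ("low-quality boilerplate ...", 0.11),
    ("a clear code tutorial ...", 0.78),
    ("near-duplicate spam ...", 0.05),
]
print(select_top_fraction(corpus, keep_ratio=0.5))
```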