Update README.md
Browse files

README.md CHANGED
@@ -11,62 +11,15 @@ tags:
  - efficient-nlp
  - distilled-models
---
-
-# SimpleStories
-
-SimpleStories is a large synthetic story dataset comprising 2 million stories designed for efficient NLP research. Created to improve upon TinyStories, it offers greater syntactic and semantic diversity through parameterized prompt generation while maintaining simple language. The dataset features stories annotated with high-level concepts like theme, topic, style, and narrative features, making it ideal for training small language models and studying language understanding.
-
-# SimpleStories 35M
-
-SimpleStories-35M is a 35 million parameter language model trained on the SimpleStories dataset. This model is the largest in the SimpleStories model family, offering the best performance across all evaluation metrics. It is part of the family of small language models trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories).
-The models range from 1.25M to 35M parameters, offering a spectrum of capabilities while maintaining efficiency. The model training and evaluation code can be found here: https://github.com/danbraunai/simple_stories_train/tree/main/simple_stories_train
-
-## Model Variants
-
-| Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab |
-|------------|----------|----------|---------|---------|-------|---------|
-| SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 |
-| SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 |
-| SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 |
-| SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 |
-| SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 |
-
-## Performance Comparison
-
-Our models demonstrate strong performance across various evaluation metrics, as shown in the chart below. The trained models are scored using a model-as-a-judge evaluation framework.
-
-<p align="center">
-  <img width="80%" src="figures/simplestories_comparison.png">
-</p>
-
-- **Originality**: Measures the uniqueness and creativity of generated content
-- **Coherence**: Evaluates the logical flow and consistency of generated stories
-- **Grammar**: Assesses grammatical correctness and linguistic quality
-- **Quality**: Holistic evaluation of overall text generation quality
-
-The larger models (35M, 30M) achieve the best performance, particularly in coherence and grammar, while even our smallest 1.25M parameter model produces readable and coherent content. As shown in the visualization, our SimpleStories-35M model achieves scores of 90.8 in Grammar, 85.7 in Coherence, 81.5 in Quality, and 72.5 in Originality.
-
-## Dataset
-
-The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features:
-
-- Story annotation with high-level concepts: theme, topic, style, etc.
-- Higher semantic and syntactic diversity through seeded story generation
-- Generated by 2024 models
-- Several NLP metrics pre-computed to aid filtering
-- ASCII-only guarantee for the English dataset
-
-## Tokenizer
-
-We have trained a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset to build a small tokenizer without compromising on the quality of generation.
-
-## Installation
-
-Follow the steps at https://github.com/danbraunai/simple_stories_train to install the simple_stories_train package.


## Usage

-

```python
from transformers import AutoTokenizer
@@ -84,7 +37,8 @@ model_config = MODEL_CONFIGS[model_size]
# Load appropriate model
model_path = f"SimpleStories/SimpleStories-{model_size}"
model = Llama.from_pretrained(model_path, model_config)
-
model.eval()

# Load tokenizer
@@ -94,15 +48,14 @@ tokenizer = AutoTokenizer.from_pretrained(model_path)
prompt = "The curious cat looked at the"

inputs = tokenizer(prompt, return_tensors="pt")
-input_ids = inputs.input_ids.to(
-

# Generate text
with torch.no_grad():
    output_ids = model.generate(
        idx=input_ids,
-        max_new_tokens=
-        temperature=0.
        top_k=40,
        eos_token_id=tokenizer.eos_token_id
    )
@@ -113,6 +66,36 @@ print(f"Generated text:\n{output_text}")

```

-##

-
  - efficient-nlp
  - distilled-models
---

+# SimpleStories Model Family
+The SimpleStories models are a tiny model family created for interpretability research, trained on the [SimpleStories dataset](https://huggingface.co/datasets/lennart-finke/SimpleStories).

## Usage

+```bash
+pip install simple_stories_train
+```

```python
from transformers import AutoTokenizer
# Load appropriate model
model_path = f"SimpleStories/SimpleStories-{model_size}"
model = Llama.from_pretrained(model_path, model_config)
+device = torch.device("cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu")
+model.to(device)
model.eval()

# Load tokenizer

prompt = "The curious cat looked at the"

inputs = tokenizer(prompt, return_tensors="pt")
+input_ids = inputs.input_ids.to(device)

# Generate text
with torch.no_grad():
    output_ids = model.generate(
        idx=input_ids,
+        max_new_tokens=50,
+        temperature=0.0,
        top_k=40,
        eos_token_id=tokenizer.eos_token_id
    )

```

+## Model Variants
+
+| Model Name | n_params | n_layers | d_model | n_heads | n_ctx | d_vocab |
+|------------|----------|----------|---------|---------|-------|---------|
+| SimpleStories-35M | 35 million | 12 | 512 | 8 | 512 | 4096 |
+| SimpleStories-30M | 30 million | 10 | 512 | 8 | 512 | 4096 |
+| SimpleStories-11M | 11 million | 6 | 384 | 6 | 512 | 4096 |
+| SimpleStories-5M | 5 million | 6 | 256 | 4 | 512 | 4096 |
+| SimpleStories-1.25M | 1.25 million | 4 | 128 | 4 | 512 | 4096 |
+
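+To switch variants, change the size suffix used in the usage snippet above. A minimal sketch continuing that snippet (it assumes `MODEL_CONFIGS` is keyed by the same size suffixes the table uses):
+
+```python
+# Hypothetical continuation of the usage example: pick a variant by suffix.
+model_size = "1.25M"
+model_path = f"SimpleStories/SimpleStories-{model_size}"
+model = Llama.from_pretrained(model_path, MODEL_CONFIGS[model_size])
+```
+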
+## Performance Comparison
+Model-evaluated generation quality metrics:
+<p align="center">
+  <img width="80%" src="figures/simplestories_comparison.png">
+</p>
+
+## Tokenizer
+
+We use a custom WordPiece tokenizer with a small vocabulary size of 4096. We conducted morphological analysis and coverage gain analysis on the dataset to build a small tokenizer without compromising on the quality of generation.
+
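+The tokenizer loads through the standard `transformers` API; a quick inspection sketch (the printed vocabulary size should match the documented 4096):
+
+```python
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("SimpleStories/SimpleStories-35M")
+print(tokenizer.vocab_size)  # expected: 4096
+
+# WordPiece marks non-initial subword pieces with "##".
+print(tokenizer.tokenize("The curious cat looked at the moon."))
+```
+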
+## Dataset
+
+The SimpleStories dataset is a collection of short stories generated by state-of-the-art language models. It features:
+
+- Story annotation with high-level concepts: theme, topic, style, etc.
+- Higher semantic and syntactic diversity through seeded story generation
+- Generated by 2024 models
+- Several NLP metrics pre-computed to aid filtering
+- ASCII-only guarantee for the English dataset

+Read the dataset paper on [arXiv](https://arxiv.org/abs/2504.09184).
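+
+A minimal loading sketch with the `datasets` library (the exact annotation schema is not spelled out here; check the dataset card for the field names):
+
+```python
+from datasets import load_dataset
+
+stories = load_dataset("lennart-finke/SimpleStories", split="train")
+print(stories[0])  # one story record with its annotations
+```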