Upload folder using huggingface_hub
- README.md +207 -0
- cache-3ca313463a4454ec.arrow +3 -0
- cache-72480cd92fefd657.arrow +3 -0
- cache-9d53fb84569235e5.arrow +3 -0
- cache-aa6f16efdc1b0629.arrow +3 -0
- data-00000-of-00001.arrow +3 -0
- dataset_info.json +41 -0
- state.json +13 -0
README.md
ADDED
@@ -0,0 +1,207 @@
---
license: mit
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- literature
- dostoyevsky
- russian-literature
- author-style
- causal-lm
- fine-tuning
size_categories:
- 1K<n<10K
---

# Dostoyevsky Chunks Dataset

This dataset contains preprocessed text chunks from four major works by Fyodor Dostoyevsky, prepared for fine-tuning language models on the author's distinctive writing style.

## Dataset Description

### Dataset Summary

A curated collection of text segments extracted from public domain English translations of Dostoyevsky's novels, processed into consistent chunks suitable for causal language modeling tasks.

### Supported Tasks

- **Causal Language Modeling**: Primary use case for training autoregressive language models
- **Author Style Transfer**: Learning Dostoyevsky's literary style and narrative techniques
- **Text Generation**: Generating text in the style of Dostoyevsky

### Languages

- **English**: All texts are English translations of original Russian works

## Dataset Structure

### Data Instances

Each instance contains a single text chunk:

```json
{
  "chunks": "But what can be expected of a man who has such a desperate character that he will hang himself for being always late for dinner..."
}
```

### Data Fields

- `chunks` (string): A preprocessed text segment of approximately 512 tokens

### Data Splits

The dataset contains **6,217 text chunks** in a single `train` split, suitable for:

- Training: Use 90% for fine-tuning
- Validation: Reserve 10% for evaluation (see the sketch below)

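A minimal sketch of that split, using the `train_test_split` helper from the `datasets` library (the seed is an arbitrary choice, included only for reproducibility):

```python
from datasets import load_dataset

# Load the single train split, then carve out 10% for validation.
dataset = load_dataset("satyapratheek/dostoyevsky_chunks")
split = dataset["train"].train_test_split(test_size=0.1, seed=42)

train_data = split["train"]  # ~90% of chunks, for fine-tuning
val_data = split["test"]     # ~10% of chunks, for evaluation
```
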
## Source Data

### Initial Data Collection

The source texts were downloaded from Project Gutenberg, ensuring all content is in the public domain.

#### Source Works

1. **Crime and Punishment** (Project Gutenberg #2554)
2. **The Brothers Karamazov** (Project Gutenberg #28054)
3. **The Idiot**
4. **Notes from the Underground** (Project Gutenberg #600)

### Data Processing Pipeline

1. **Header/Footer Removal**: Used `gutenberg-cleaner` to remove Project Gutenberg boilerplate
2. **Text Normalization**: Applied `ftfy` for encoding fixes and `unidecode` for ASCII conversion
3. **Content Filtering**:
   - Removed paragraphs shorter than 200 characters
   - Filtered out chapter titles and section headers
   - Excluded epigraphs and other metadata
4. **Chunking**: Segmented text into approximately 512-token chunks using the SmolLM tokenizer (see the sketch after this list)
5. **Quality Control**: Manual review of sample chunks for content quality

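The original processing script is not published with the dataset, so the sketch below is an assumption-laden reconstruction of steps 1-4: `simple_cleaner`, `fix_text`, and `unidecode` come from the libraries named above, while the `clean_and_chunk` helper and the blank-line paragraph heuristic are illustrative choices, not the original code.

```python
from gutenberg_cleaner import simple_cleaner
from ftfy import fix_text
from unidecode import unidecode
from transformers import AutoTokenizer

# Same tokenizer family named in the chunking step.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M")

def clean_and_chunk(raw_text, chunk_tokens=512, min_paragraph_chars=200):
    # 1. Strip Project Gutenberg headers/footers.
    text = simple_cleaner(raw_text)
    # 2. Repair encoding issues, then transliterate to ASCII.
    text = unidecode(fix_text(text))
    # 3. Length filter: drops chapter titles, headers, epigraphs.
    paragraphs = [p.strip() for p in text.split("\n\n")
                  if len(p.strip()) >= min_paragraph_chars]
    # 4. Tokenize and segment into ~512-token chunks.
    ids = tokenizer("\n\n".join(paragraphs))["input_ids"]
    return [tokenizer.decode(ids[i:i + chunk_tokens])
            for i in range(0, len(ids), chunk_tokens)]
```
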
## Dataset Creation

### Curation Rationale

This dataset was created to enable fine-tuning of small language models on Dostoyevsky's writing style, characterized by:

- Psychological depth and introspection
- Philosophical themes and existential questions
- Complex character development
- Rich narrative voice and distinctive prose rhythm

### Source Data Analysis

- **Total chunks**: 6,217
- **Average chunk length**: ~512 tokens
- **Content diversity**: Narrative prose, dialogue, internal monologue, philosophical passages
- **Temporal coverage**: Spans Dostoyevsky's major creative periods

## Considerations for Using the Data

### Social Impact of Dataset

This dataset preserves and makes accessible the literary heritage of one of world literature's greatest authors, enabling:

- **Educational applications**: Literary analysis and style studies
- **Creative writing assistance**: Learning narrative techniques
- **Cultural preservation**: Maintaining access to classic literature

### Discussion of Biases

- **Historical context**: Reflects 19th-century social norms and perspectives
- **Translation artifacts**: Based on English translations, may not capture original Russian nuances
- **Selection bias**: Limited to four major works, may not represent complete stylistic evolution
- **Cultural specificity**: Reflects Russian cultural context and Orthodox Christian themes

### Other Known Limitations

- **Temporal scope**: Limited to works from 1864-1880
- **Genre limitation**: Primarily novels; lacks short stories and journalism
- **Translation dependency**: Quality dependent on translator choices
- **Processing artifacts**: Some text segmentation may split related passages

## Additional Information

### Dataset Curators

- **Created by**: [satyapratheek](https://huggingface.co/satyapratheek)
- **Processing date**: July 2025
- **Contact**: Available through Hugging Face profile

### Licensing Information

All source texts are in the **public domain** (published before 1928). The dataset processing and compilation are released under the **MIT license**, per the `license` field in the metadata above.

### Citation Information

```bibtex
@dataset{dostoyevsky_chunks_2025,
  title={Dostoyevsky Chunks: Preprocessed Text Dataset for Author Style Fine-tuning},
  author={satyapratheek},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/satyapratheek/dostoyevsky_chunks}
}
```

### Contributions

If you use this dataset, please consider citing both the dataset and the original Project Gutenberg sources.

## Usage Example

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("satyapratheek/dostoyevsky_chunks")

# Access the chunks
chunks = dataset["train"]["chunks"]
print(f"Total chunks: {len(chunks)}")
print(f"Sample chunk: {chunks[0][:200]}...")
```

Use it for fine-tuning:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M")

def tokenize_function(examples):
    return tokenizer(examples["chunks"], truncation=True, padding="max_length", max_length=512)

tokenized_dataset = dataset.map(tokenize_function, batched=True)
```

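Because the chunks were segmented at roughly 512 tokens with this same tokenizer family, `max_length=512` should truncate very little; padding everything to a fixed length is the simple default here, though dynamic padding via a data collator is a common memory-saving alternative.
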
## Related Models

This dataset was used to fine-tune:

- [satyapratheek/smollm-dostoyevsky](https://huggingface.co/satyapratheek/smollm-dostoyevsky) - LoRA fine-tuned SmolLM-135M

---

**Processing Stats from Fine-tuning:**

- Training duration: 4 hours, 56 minutes, 35 seconds
- Final training loss: 3.254
- Training samples/second: 1.048
- Hardware: Apple M1 MacBook Air (8GB RAM)
cache-3ca313463a4454ec.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf3db8f7789b382f73f56bd1e9e4c6d7106c6976f2ee41e10850f2aee7e8a487
size 3818880
cache-72480cd92fefd657.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76726d743394f8651bb344681012f2093c19a5924c5da83636a4d15c7f9c77fa
size 15967968
cache-9d53fb84569235e5.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04bc7fbbdf9ad8ef62e62ad45ba4388d02301352fa6aea5291842942d05ccb80
size 3818880
cache-aa6f16efdc1b0629.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cedf1effa337ecb5c0aef9aebd75ad1418715622c8c59fd206a731b65af3c15b
size 15967968
data-00000-of-00001.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ffa18efc935413b2dc27b620659b96e01311696ab7232a22f682eae9c6d62383
size 3842144
dataset_info.json
ADDED
@@ -0,0 +1,41 @@
{
  "builder_name": "json",
  "citation": "",
  "config_name": "default",
  "dataset_name": "json",
  "dataset_size": 3815358,
  "description": "",
  "download_checksums": {
    "/Users/vi/Documents/dost/dostoyevsky.jsonl": {
      "num_bytes": 3934721,
      "checksum": null
    }
  },
  "download_size": 3934721,
  "features": {
    "chunks": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    }
  },
  "homepage": "",
  "license": "",
  "size_in_bytes": 7750079,
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 3815358,
      "num_examples": 6006,
      "dataset_name": "json"
    }
  },
  "version": {
    "version_str": "0.0.0",
    "major": 0,
    "minor": 0,
    "patch": 0
  }
}
state.json
ADDED
@@ -0,0 +1,13 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "5156f4ac1d631c66",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": "train"
}