# Wikipedia High-Quality Articles (2000+ Tokens)

## Dataset Description
This dataset contains high-quality Wikipedia articles filtered to include only substantial, comprehensive content. Each article contains at least 2,000 tokens, ensuring rich, interconnected knowledge rather than simple definitions or stubs.
### Key Features
- Quality over Quantity: 239K comprehensive articles instead of 1.26M mixed-quality articles
- Rich Content: Average article length of 4,032 tokens
- Interconnected Knowledge: Articles contain multiple concepts with context and relationships
- Clean Format: Structured with article/section/paragraph markers for easy parsing
## Dataset Statistics

| Metric | Value |
|---|---|
| Total Articles | 238,758 |
| Total Tokens | 0.96B |
| Average Tokens/Article | 4,032 |
| Min Tokens/Article | 2,000 |
| Original Dataset Size | 1,259,957 articles |
| Filtering Rate | 81.1% removed |
| Token Retention | 49.5% |
## Why This Filtering?
Research shows that training on fewer, higher-quality examples for more epochs produces better results than training on more, lower-quality examples:
- Conceptual Density: Long articles teach multiple interconnected concepts
- Context: Facts are presented with relationships and background
- Language Quality: Complex syntax, varied vocabulary, sophisticated discourse
- Efficiency: 20 epochs on this dataset outperform 10 epochs on the full dataset
### What Was Removed
- Simple definitions ("An anode is...")
- Lists without context ("List of villages in...")
- Stubs and incomplete articles
- Single-fact biographical entries
### What Was Kept
- Comprehensive topic overviews
- Historical events with context
- Scientific concepts with explanations
- Technical topics with examples
## Dataset Structure

Each line in the JSONL files contains:

```json
{
  "text": "Full article text with _START_ARTICLE_, _START_SECTION_, _START_PARAGRAPH_ markers",
  "token_count": 4032,
  "wikidata_id": "Wikidata identifier of the article",
  "version_id": "Wikipedia revision identifier",
  "extracted_title": "Article title"
}
```
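The markers make it straightforward to recover an article's structure. Below is a minimal parsing sketch; the `parse_article` helper is illustrative, not shipped with the dataset, and the `_NEWLINE_` handling assumes the upstream Wiki40b convention:

```python
# Illustrative helper: split a Wiki40b-style "text" field into
# (heading, [paragraphs]) pairs. The first pair holds the article
# title plus any lead paragraphs.
def parse_article(text: str):
    body = text.split("_START_ARTICLE_", 1)[-1]
    sections = []
    for chunk in body.split("_START_SECTION_"):
        heading, *paragraphs = chunk.split("_START_PARAGRAPH_")
        # Wiki40b encodes line breaks inside paragraphs as _NEWLINE_.
        paragraphs = [p.replace("_NEWLINE_", "\n").strip() for p in paragraphs]
        sections.append((heading.strip(), paragraphs))
    return sections
```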
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Yxanul/wikipedia-2k-high-quality")

# Example: iterate over articles for training
for article in dataset["train"]:
    text = article["text"]
    tokens = article["token_count"]
    # Your training code here
```
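For large-scale training you can also stream records instead of downloading the whole dataset first, using the standard `datasets` streaming mode:

```python
from datasets import load_dataset

# Iterate lazily over the split without materializing it on disk.
streamed = load_dataset("Yxanul/wikipedia-2k-high-quality", streaming=True)
for article in streamed["train"]:
    ...  # same fields as above: text, token_count, etc.
```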
## Training Recommendations
Based on our research, we recommend:
- 10-20 epochs for base model pretraining
- Sequence length: 2048-4096 tokens (articles are long)
- Batch size: Can be smaller due to longer sequences
- Curriculum: Start with shorter articles (2K tokens) and progress to longer ones (10K+); see the sketch below
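A minimal curriculum sketch, sorting by the dataset's `token_count` field; the 4,096-token stage boundary is an arbitrary choice for illustration:

```python
from datasets import load_dataset

dataset = load_dataset("Yxanul/wikipedia-2k-high-quality")

# Order articles from shortest to longest so early training sees ~2K-token
# articles and later stages see the 10K+ ones.
curriculum = dataset["train"].sort("token_count")

# Illustrative two-stage split; the 4,096-token boundary is arbitrary.
short_stage = curriculum.filter(lambda ex: ex["token_count"] <= 4096)
long_stage = curriculum.filter(lambda ex: ex["token_count"] > 4096)
```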
## Source and Processing

This dataset is derived from the Wiki40b English dataset and filtered for quality (a code sketch of the final filter follows the list):
- Original: Wiki40b English (2.93M articles)
- First filter: Removed articles <500 tokens
- This version: Kept only articles ≥2000 tokens
- Result: Top 19% of articles containing 50% of tokens
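A minimal sketch of the final filtering step. The upstream repo id (`google/wiki40b`) and the GPT-2 tokenizer are assumptions; the card does not state which tokenizer produced the token counts:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: GPT-2 tokenizer as a stand-in; the original tokenizer is unspecified.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Assumption: upstream corpus id; the card names Wiki40b English as the source.
wiki = load_dataset("google/wiki40b", "en", split="train")

def long_enough(example):
    # Keep only articles with at least 2,000 tokens.
    return len(tokenizer(example["text"]).input_ids) >= 2000

high_quality = wiki.filter(long_enough)
```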
## License
This dataset inherits Wikipedia's CC BY-SA 3.0 license.
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{wikipedia_2k_hq_2025,
  title={Wikipedia High-Quality Articles (2000+ Tokens)},
  author={Yxanul},
  year={2025},
  publisher={HuggingFace}
}
```
## Philosophy
"It's better to deeply understand 239K comprehensive articles than to superficially memorize 1.26M fragments."
This dataset embodies the principle that in machine learning, especially for smaller models, quality dramatically outweighs quantity.