Lunaris Ultra-FineWeb 20B Tokenized Dataset

A pre-processed, tokenized version of the Ultra-FineWeb dataset, optimized for efficient training of large language models.

πŸ“‹ Dataset Overview

This dataset contains 20 billion tokens from the Ultra-FineWeb English corpus, pre-tokenized and stored in efficient NumPy format for fast data loading during training. The dataset is split into 20 shards of 1 billion tokens each, making it ideal for distributed training scenarios.

🎯 Key Features

  • 20B tokens from the high-quality Ultra-FineWeb English corpus
  • Pre-tokenized with a custom BPE tokenizer (65,536-token vocabulary)
  • Efficient storage in NumPy .npy format using uint32 dtype
  • Optimized for training with 1B-token shards
  • Ready to use with the included tokenizer and processing scripts

πŸ“ Repository Structure

meryyllebr543/lunaris-ultrafineweb-20b-tokenized/
β”œβ”€β”€ shard_0000.npy          # 1B tokens (first shard)
β”œβ”€β”€ shard_0001.npy          # 1B tokens (second shard)
β”œβ”€β”€ ...
β”œβ”€β”€ shard_0019.npy          # 1B tokens (final shard)
β”œβ”€β”€ lunaris-tokenizer.json  # Trained BPE tokenizer
β”œβ”€β”€ scripts/
β”‚   β”œβ”€β”€ prepare.py                      # Dataset preparation script
β”‚   β”œβ”€β”€ train_ultrafineweb_tokenizer.py # Tokenizer training script
β”‚   └── README.md                       # Scripts documentation
└── README.md              # This file
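
To fetch the repository (about 80 GB) or just a subset of its files, the huggingface_hub client can be used (install it with pip install huggingface_hub if needed). A minimal sketch; the local_dir path is only an example:

from huggingface_hub import snapshot_download

# Download the whole dataset repository (~80 GB of shards plus the tokenizer).
# local_dir is an example path; adjust it to your storage layout.
snapshot_download(
    repo_id="meryyllebr543/lunaris-ultrafineweb-20b-tokenized",
    repo_type="dataset",
    local_dir="./lunaris-ultrafineweb-20b-tokenized",
)

# To grab only the tokenizer and the first shard, restrict the file patterns:
snapshot_download(
    repo_id="meryyllebr543/lunaris-ultrafineweb-20b-tokenized",
    repo_type="dataset",
    local_dir="./lunaris-ultrafineweb-20b-tokenized",
    allow_patterns=["lunaris-tokenizer.json", "shard_0000.npy"],
)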

πŸš€ Quick Start

Loading the Dataset

import numpy as np
from tokenizers import Tokenizer

# Load the tokenizer
tokenizer = Tokenizer.from_file("lunaris-tokenizer.json")

# Load a single shard
shard_0 = np.load("shard_0000.npy")
print(f"Shard 0 shape: {shard_0.shape}")  # Should be (1000000000,)

# Decode some tokens to verify
sample_text = tokenizer.decode(shard_0[:100].tolist())
print(f"Sample text: {sample_text}")

Training Loop Example

import numpy as np
from tokenizers import Tokenizer

def load_tokenized_dataset(shard_dir, num_shards=20):
    """Load all shards into memory or create a generator"""
    shards = []
    for i in range(num_shards):
        shard_path = f"{shard_dir}/shard_{i:04d}.npy"
        shard = np.load(shard_path)
        shards.append(shard)
    return np.concatenate(shards)

# Load tokenizer
tokenizer = Tokenizer.from_file("lunaris-tokenizer.json")

# Load dataset
tokens = load_tokenized_dataset(".")

# Your training loop here
# tokens is now a numpy array with 20B tokens ready for training
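
Note that concatenating every shard materializes roughly 80 GB in RAM. A lighter-weight alternative is to memory-map one shard at a time and yield fixed-length batches; the sketch below does that, with seq_len and batch_size chosen purely for illustration:

import numpy as np

def iter_batches(shard_dir, num_shards=20, seq_len=2048, batch_size=8):
    """Yield (batch_size, seq_len) uint32 batches, memory-mapping one shard at a time."""
    for i in range(num_shards):
        # mmap_mode="r" keeps the shard on disk; pages are read lazily as they are touched.
        shard = np.load(f"{shard_dir}/shard_{i:04d}.npy", mmap_mode="r")
        tokens_per_batch = seq_len * batch_size
        for b in range(len(shard) // tokens_per_batch):
            chunk = shard[b * tokens_per_batch:(b + 1) * tokens_per_batch]
            yield np.array(chunk).reshape(batch_size, seq_len)  # copy the slice into RAM

# Example: inspect the first batch without loading 80 GB
first_batch = next(iter_batches("."))
print(first_batch.shape)  # (8, 2048)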

πŸ”§ Technical Specifications

Attribute          Value
-----------------  ----------------------------------------
Source Dataset     openbmb/Ultra-FineWeb (English)
Total Tokens       20,000,000,000 (20B)
Tokenizer          Custom BPE with 65,536-token vocabulary
Data Format        NumPy .npy files, uint32 dtype
Shard Size         1,000,000,000 tokens per shard
Number of Shards   20
Storage Size       ~80 GB total (~4 GB per shard)
Special Tokens     <unk>, <pad>, <bos>, <eos>
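
A downloaded shard can be checked against these numbers without loading it fully; a small verification sketch, assuming shard_0000.npy sits in the working directory:

import numpy as np

# Memory-map the shard so the check does not pull 4 GB into RAM.
shard = np.load("shard_0000.npy", mmap_mode="r")
assert shard.dtype == np.uint32
assert shard.shape == (1_000_000_000,)
# Spot-check that token IDs fall inside the 65,536-entry vocabulary.
assert int(shard[:1_000_000].max()) < 65_536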

πŸ› οΈ Tokenizer Details

The included tokenizer (lunaris-tokenizer.json) is a Byte-Pair Encoding (BPE) tokenizer trained specifically on Ultra-FineWeb data:

  • Vocabulary Size: 65,536 tokens
  • Algorithm: BPE (Byte-Pair Encoding)
  • Training Data: 10M samples from Ultra-FineWeb English corpus
  • Special Tokens: <unk> (ID: 0), <pad> (ID: 1), <bos> (ID: 2), <eos> (ID: 3)

Loading the Tokenizer

from tokenizers import Tokenizer

# Load the tokenizer
tokenizer = Tokenizer.from_file("lunaris-tokenizer.json")

# Get vocabulary size
vocab_size = tokenizer.get_vocab_size()
print(f"Vocabulary size: {vocab_size}")  # 65536

# Test encoding/decoding
text = "Hello, world!"
tokens = tokenizer.encode(text)
decoded = tokenizer.decode(tokens.ids)
print(f"Original: {text}")
print(f"Tokens: {tokens.ids}")
print(f"Decoded: {decoded}")

πŸ“Š Dataset Statistics

  • Source: Ultra-FineWeb English corpus (high-quality web text)
  • Processing: Filtered and tokenized with quality controls
  • Token Distribution: tokens split evenly across the 20 shards (1B each)
  • Memory Usage: ~3.7 GiB per shard when fully loaded (1B tokens × 4 bytes = 4 GB)
  • Recommended RAM: 32GB+ for single-shard training, 64GB+ for multi-shard

πŸ”„ Processing Pipeline

The dataset was created using the following pipeline (a simplified sketch follows the list):

  1. Data Loading: Streamed from Ultra-FineWeb English corpus
  2. Tokenizer Training: BPE tokenizer trained on 10M samples
  3. Tokenization: Parallel processing with 56 workers
  4. Sharding: Split into 1B token shards for efficient loading
  5. Optimization: Saved in uint32 format for memory efficiency
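
The full implementation lives in scripts/prepare.py; the sketch below only illustrates the shape of such a streaming tokenize-and-shard loop. The Ultra-FineWeb config name ("en") and text field ("content") are assumptions made for illustration, and the 56-worker parallelism is omitted for brevity:

import numpy as np
from datasets import load_dataset
from tokenizers import Tokenizer

SHARD_TOKENS = 1_000_000_000  # 1B tokens per shard, as in this repository

tokenizer = Tokenizer.from_file("lunaris-tokenizer.json")
# Config name "en" and text field "content" are assumptions; check the
# openbmb/Ultra-FineWeb dataset card for the exact names.
stream = load_dataset("openbmb/Ultra-FineWeb", "en", split="train", streaming=True)

chunks, count, shard_idx = [], 0, 0
for example in stream:
    ids = np.asarray(tokenizer.encode(example["content"]).ids, dtype=np.uint32)
    chunks.append(ids)
    count += len(ids)
    if count >= SHARD_TOKENS:
        flat = np.concatenate(chunks)
        np.save(f"shard_{shard_idx:04d}.npy", flat[:SHARD_TOKENS])
        chunks, count = [flat[SHARD_TOKENS:]], len(flat) - SHARD_TOKENS
        shard_idx += 1
        if shard_idx == 20:  # stop once 20B tokens have been written
            break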

πŸ’» System Requirements

For Dataset Usage

  • RAM: 32GB minimum (64GB+ recommended)
  • Storage: 80GB available space
  • CPU: Multi-core processor recommended
  • Python: 3.8+

Dependencies

pip install numpy tokenizers datasets tqdm

🎯 Use Cases

This dataset is ideal for:

  • Pre-training large language models
  • Continued training of existing models
  • Fine-tuning with high-quality web text
  • Research in language modeling
  • Benchmarking tokenization approaches

πŸ“ˆ Performance Benefits

  • Fast Loading: Pre-tokenized data eliminates tokenization overhead
  • Memory Efficient: uint32 format optimizes memory usage
  • Parallel Training: 20 independent shards make distributed training straightforward (see the sketch below)
  • High Quality: Based on Ultra-FineWeb's filtered web text
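
Because the shards are independent files, data-parallel workers can simply read disjoint shard subsets. A minimal sketch, assuming rank and world size come from your launcher (the RANK/WORLD_SIZE environment variables are a common convention, e.g. set by torchrun):

import os
import numpy as np

# Rank and world size normally come from the distributed launcher.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Round-robin assignment: each worker reads a disjoint subset of the 20 shards.
my_shards = [f"shard_{i:04d}.npy" for i in range(20) if i % world_size == rank]

for path in my_shards:
    tokens = np.load(path, mmap_mode="r")
    # ... feed `tokens` into this worker's data pipeline ...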

🀝 Contributing

The processing scripts are included in the scripts/ directory. Feel free to:

  • Report issues with the dataset
  • Suggest improvements to the processing pipeline
  • Share your training results using this dataset

πŸ“„ License

This dataset inherits the license from the original Ultra-FineWeb dataset. Please refer to the Ultra-FineWeb license for detailed terms.

Since Ultra-FineWeb is built using multiple datasets, users should check the LICENSE of each underlying dataset to ensure proper usage and compliance.

🌟 Citation

If you use this dataset in your research, please cite both this work and the original Ultra-FineWeb paper:

@misc{wang2025ultrafineweb,
  title={{Ultra-FineWeb}: Efficient Data Filtering and Verification for High-Quality LLM Training Data},
  author={Yudong Wang and Zixuan Fu and Jie Cai and Peijun Tang and Hongya Lyu and Yewei Fang and Zhi Zheng and Jie Zhou and Guoyang Zeng and Chaojun Xiao and Xu Han and Zhiyuan Liu},
  year={2025},
  eprint={2505.05427},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
}

πŸ™ Acknowledgements

  • Ultra-FineWeb Team for creating the high-quality source dataset
  • Hugging Face for hosting and infrastructure
  • OpenBMB for the original dataset and classifier

Dataset prepared by: meryyllebr543
Last updated: July 2025
