---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: lang
    dtype: string
  - name: type
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: eval
    num_bytes: 76408631
    num_examples: 10000
  download_size: 39911840
  dataset_size: 76408631
configs:
- config_name: default
  data_files:
  - split: eval
    path: data/eval-*
license: odc-by
language:
- fr
- en
- es
tags:
- Python
- Java
- JavaScript
- C/C++
---

# Dataset Card for `dataset-eval`

## Description

The `dataset-eval` dataset is a multilingual, multi-domain dataset designed for evaluating language model performance during training. It can be used for performance tracking, generalization diagnostics across languages or domains, and for implementing early stopping mechanisms. The examples included were automatically selected as **high quality** by the [`EuroBERT-210m-Quality`](https://huggingface.co/TempestTeam/EuroBERT-210m-Quality) model, trained to estimate web text quality in multiple languages.

## Dataset Composition

- **Natural Languages**:
  - English: 2,640 examples (from [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb))
  - French: 2,720 examples (from [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2))
  - Spanish: 2,640 examples (from [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2))
- **Programming Languages** (from [The-Stack-v2-dedup](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup)):
  - Python: 500 examples
  - Java: 500 examples
  - JavaScript: 500 examples
  - C: 250 examples
  - C++: 250 examples
- **Total**: 10,000 high-quality examples

## Data Structure

Each example includes the following fields:

- **`text`** (*string*): the textual content or source code.
- **`lang`** (*string*): the language of the content (e.g., `English`, `French`, `Spanish`, `Python`, `C++`).
- **`type`** (*string*): the type of content:
  - `"NL"` for natural language
  - `"CL"` for code language
- **`id`** (*string*): a unique identifier generated by hashing the `text` field.

## Use Cases

This dataset is intended for **periodic evaluation** during language model training:

- Tracking performance on high-quality data
- Evaluation per batch or epoch
- Validation metric computation for early stopping
- Performance comparison by language or domain

It is **not intended for direct training**, due to its limited size and its purpose as a filtered evaluation sample.

## Licenses

The dataset is built from sources under the following licenses:

| Source                | License    |
|:---------------------:|:----------:|
| FineWeb               | ODC-BY 1.0 |
| FineWeb-2             | ODC-BY 1.0 |
| The Stack v2          | Other      |
| EuroBERT-210m-Quality | Apache-2.0 |

Users must ensure they comply with each source's specific license conditions when reusing or redistributing this data.

## Risks and Limitations

### Sensitive Data

The original sources come from the public web and were cleaned automatically. Despite filtering, some data may still contain sensitive, personal, or confidential information. It is strongly recommended **not to use this dataset in production or user-facing systems without manual review**.

### Bias

- Quality annotations were produced by an automatic classifier and may reflect its training biases.
- The dataset covers only three natural languages and five programming languages.
- Cultural, thematic, or syntactic biases may be present depending on the source corpora.
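## Example Usage

As a sketch of the early-stopping and per-language-comparison use cases above: the snippet below tracks an evaluation loss across rounds, stopping after it fails to improve for a fixed number of rounds, and averages a per-example loss over the `lang` field of rows shaped like this dataset's. The loss values are hypothetical stand-ins for your model's output, and the SHA-256 choice in `make_id` is an assumption — the card only states that `id` is derived by hashing `text`, without naming the hash function.

```python
import hashlib
from collections import defaultdict


def make_id(text: str) -> str:
    # Assumption: SHA-256 over the UTF-8 text; the card does not
    # specify which hash function produced the `id` field.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


class EarlyStopper:
    """Signal a stop when the eval loss has not improved for `patience` rounds."""

    def __init__(self, patience: int = 3):
        self.patience = patience
        self.best = float("inf")
        self.bad_rounds = 0

    def step(self, loss: float) -> bool:
        if loss < self.best:
            self.best = loss
            self.bad_rounds = 0
        else:
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience  # True => stop training


def loss_by_lang(rows):
    """Average a per-example loss over the `lang` field of dataset rows."""
    sums, counts = defaultdict(float), defaultdict(int)
    for row in rows:
        sums[row["lang"]] += row["loss"]
        counts[row["lang"]] += 1
    return {lang: sums[lang] / counts[lang] for lang in sums}


# Rows shaped like this dataset's examples, with hypothetical losses attached.
rows = [
    {"lang": "English", "type": "NL", "loss": 2.1},
    {"lang": "English", "type": "NL", "loss": 1.9},
    {"lang": "Python", "type": "CL", "loss": 1.2},
]
print(loss_by_lang(rows))  # average loss per language
```

In practice the rows would come from iterating over the `eval` split and the losses from your model's forward pass; the per-language averages can then feed both dashboards and the `EarlyStopper`.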