Tasks: Text Generation
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 1M - 10M
License:
Add test to prevent 1 token documents (#65)

- Added tests to ensure that one-token documents don't appear in the data. This filtered out 0 documents in total. (1e004b0fedcceeac40ff8e21ea22337d1600a403)
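For context, the new check reduces to a filter over the dataset's precomputed `token_count` column (the column added in v1.1.0). A minimal sketch of the equivalent cleanup step, not the repository's actual pipeline code:

```python
from datasets import Dataset


def remove_one_token_documents(ds: Dataset) -> Dataset:
    # Hypothetical helper: keep only documents with more than one token,
    # mirroring the `token_count` check in test_short_texts.py below.
    return ds.filter(lambda x: x["token_count"] > 1)
```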
CHANGELOG.md CHANGED
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.1.1] - 2025-06-16
+
+### Added
+
+- Added tests to ensure that one-token documents don't appear in the data. This filtered out 0 documents in total.
+
 ## [v1.1.0] - 2025-04-29
 
 ### Added
@@ -12,7 +18,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added multiple quality controls
   - Removed all empty strings
   - Removed duplicates within datasets
-  - Removed
 - Restructured datasets
   - Removed columns from the dataset to make the structure more lightweight; these include domain, metadata, and license, which have been moved to the individual datasheets. It is still possible to filter for license by using the dataset name.
   - Added column for number of tokens
@@ -35,7 +40,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Norwegian Colossal Corpus (newspapers) (~191.08K tokens)
 - Norwegian Colossal Corpus (books) (~531.97M tokens)
 - Norwegian Colossal Corpus (maalfrid) (~29.26M tokens)
-- Norwegian Colossal Corpus (parliament) (
+- Norwegian Colossal Corpus (parliament) (~338.87M tokens)
 
 ## [v1.0.11] - 2025-03-29
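The v1.1.0 notes above say that the domain, metadata, and license columns moved to the per-dataset datasheets, with license selection done via the dataset name. A hypothetical example with 🤗 Datasets; the Hub repo id and config name are assumptions, not confirmed by this diff:

```python
from datasets import load_dataset

# Assumed repo id; `ncc_newspaper` is a subset that appears elsewhere in
# this commit, and its license information lives in its datasheet.
ds = load_dataset("danish-foundation-models/danish-dynaword", "ncc_newspaper", split="train")
```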
README.md CHANGED
@@ -174,7 +174,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 <!-- START README TABLE -->
 |              |     |
 | ------------ | --- |
-| **Version**
+| **Version**  | 1.1.1 ([Changelog](/CHANGELOG.md)) |
 | **Language** | dan, dansk, Danish |
 | **License**  | Openly Licensed, see the respective dataset |
 | **Models**   | For models trained using this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
data/ncc_newspaper/ncc_newspaper.md CHANGED
@@ -67,7 +67,7 @@ An entry in the dataset consists of the following fields:
 </p>
 <!-- END-DATASET PLOTS -->
 
-
+# Additional Information
 
 ## License Information
pyproject.toml CHANGED
@@ -1,6 +1,6 @@
 [project]
 name = "dynaword"
-version = "1.1.0"
+version = "1.1.1"
 description = "project code for the danish dynaword project"
 readme = "README.md"
 requires-python = ">=3.12,<3.13" # 3.13 has issues with spacy and pytorch
src/tests/test_quality/__init__.py ADDED
(new empty file)
src/tests/{test_duplicates.py → test_quality/test_duplicates.py} RENAMED
@@ -4,8 +4,7 @@ import pytest
 from datasets import Dataset, load_dataset
 
 from dynaword.paths import repo_path
-from
-
+from ..conftest import DATASET_NAMES
 
 @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
 def test_no_within_data_duplicates(dataset_name: str):
src/tests/test_quality/test_short_texts.py ADDED
@@ -0,0 +1,21 @@
+from typing import cast
+
+import pytest
+from datasets import Dataset, load_dataset
+
+from dynaword.paths import repo_path
+
+from ..conftest import DATASET_NAMES
+
+
+@pytest.mark.parametrize("dataset_name", DATASET_NAMES)
+# @pytest.mark.skip("This test currently fails")
+def test_no_one_word_documents(dataset_name: str):
+    ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
+    ds = cast(Dataset, ds)
+
+    one_word_docs = ds.filter(lambda x: x["token_count"] <= 1)
+
+    assert (
+        len(one_word_docs) == 0
+    ), f"Found {len(one_word_docs)} one-word documents in dataset '{dataset_name}'"
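For a quick ad-hoc run of the same check against a single subset from a local checkout (a sketch mirroring the test body above; `ncc_newspaper` is just an example subset):

```python
from typing import cast

from datasets import Dataset, load_dataset

from dynaword.paths import repo_path

# Same predicate as test_no_one_word_documents, applied to one subset.
ds = cast(Dataset, load_dataset(str(repo_path.resolve()), "ncc_newspaper", split="train"))
print(len(ds.filter(lambda x: x["token_count"] <= 1)))  # expected: 0
```

Running the full suite is presumably just `pytest src/tests/test_quality` from the repository root.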