Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
Creating a `create.py` script for retsinformation and updating the data, adding 300M tokens.
Files changed:
- README.md +5 -5
- data/retsinformationdk/create.py +166 -0
- data/retsinformationdk/descriptive_stats.json +4 -4
- data/retsinformationdk/images/dist_document_length.png +2 -2
- data/retsinformationdk/retsinformationdk.md +8 -8
- data/retsinformationdk/retsinformationdk.parquet +2 -2
- descriptive_stats.json +4 -4
- images/dist_document_length.png +2 -2
- images/domain_distribution.png +2 -2
- test_results.log +2 -2
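The "300M tokens" in the commit message can be sanity-checked against the source table in the README diff below, which moves retsinformationdk from 516.35M to 818.25M tokens. A quick check (values copied from the table):

```python
# Token counts for retsinformationdk before and after this commit,
# as listed in the README source table.
old_tokens = 516.35e6
new_tokens = 818.25e6

added = (new_tokens - old_tokens) / 1e6
print(f"{added:.1f}M")  # → 301.9M, i.e. roughly the 300M the commit message claims
```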
README.md
CHANGED
@@ -211,9 +211,9 @@
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
-- **Number of samples**:
-- **Number of tokens (Llama 3)**: 4.
-- **Average document length (characters)**:
+- **Number of samples**: 927.89K
+- **Number of tokens (Llama 3)**: 4.67B
+- **Average document length (characters)**: 15475.06
 <!-- END-DESC-STATS -->
 
 
@@ -314,8 +314,8 @@
 | Source | Description | Domain | N. Tokens | License |
 |:--------------------|:-------------|:-------------|:------------|:-----------------------|
 | [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
+| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 818.25M | [Danish Copyright Law] |
 | [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC), derived from OCR | Books | 531.97M | [CC-0] |
-| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 516.35M | [Danish Copyright Law] |
 | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
 | [ncc_parliament] | Collections from the Norwegian parliament in Danish, extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC), derived from OCR | Other | 338.87M | [NLOD 2.0] |
 | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
@@ -347,7 +347,7 @@
 | [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
 | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
 | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
-| **Total** | | | 4. | |
+| **Total** | | | 4.67B | |
 
 [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
 [cellar]: data/cellar/cellar.md
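The human-readable counts in the README stats block (927.89K samples, 4.67B tokens) are rounded renderings of the raw integers recorded in `descriptive_stats.json` further down. A hypothetical formatter (not the one dynaword actually uses) that reproduces the rounding:

```python
def human_count(n: float) -> str:
    # Hypothetical helper mirroring the K/M/B formatting seen in the README stats.
    for suffix, scale in (("B", 1e9), ("M", 1e6), ("K", 1e3)):
        if n >= scale:
            return f"{n / scale:.2f}{suffix}"
    return str(n)

print(human_count(4671403830))  # total tokens   → 4.67B
print(human_count(927893))      # total samples  → 927.89K
print(human_count(818252220))   # retsinformationdk tokens → 818.25M
```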
data/retsinformationdk/create.py
ADDED
@@ -0,0 +1,166 @@

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets==3.2.0",
#     "pandas",
#     "requests",
#     "trafilatura",
#     "dynaword"
# ]
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
# ///

import logging
from datetime import date, datetime
from io import StringIO
from pathlib import Path

import pandas as pd
import requests
from requests.adapters import HTTPAdapter
from urllib3 import Retry
from trafilatura import extract
from datasets import Dataset
from tqdm import tqdm

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

TMP_DIR = Path(__file__).parent / "tmp"

BASE_URL = "https://www.retsinformation.dk/api/document/eli"

logger = logging.getLogger(__name__)
today = date.today()


def create_session_with_retries(retries=2, backoff_factor=0.5):
    """Create a session that retries failed GETs on 5xx with exponential backoff."""
    session = requests.Session()
    retry_strategy = Retry(
        total=retries,
        backoff_factor=backoff_factor,
        status_forcelist=[500, 502, 503, 504],
        allowed_methods=["GET"],
        respect_retry_after_header=True,
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


def fetch_document_list() -> pd.DataFrame:
    """Fetch the document index, reusing a cached CSV if it is under 180 days old."""
    download = True
    df: pd.DataFrame = pd.DataFrame()

    files = sorted(TMP_DIR.glob("*.csv"), reverse=True) if TMP_DIR.exists() else []
    if files:
        file = files[0]
        file_date = datetime.strptime(file.stem, "%Y-%m-%d").date()

        if (today - file_date).days < 180:
            download = False
            df = pd.read_csv(file)

    if download:
        logger.info("Downloading list of files from Retsinformation.dk")
        response = requests.get(
            "https://www.retsinformation.dk/api/documentsearch/csv?dt=10&dt=1480&dt=20&dt=30&dt=40&dt=50&dt=90&dt=120&dt=270&dt=60&dt=100&dt=80&dt=110&dt=130&dt=140&dt=150&dt=160&dt=170&dt=180&dt=200&dt=210&dt=220&dt=1510&dt=1490&dt=-10&dt=230&dt=240&dt=250&dt=260&dt=980&dt=360&dt=400&dt=380&dt=420&dt=1530&dt=440&dt=450&dt=430&dt=1540&dt=460&dt=410&dt=370&dt=480&dt=390&dt=500&dt=510&dt=520&dt=490&dt=300&dt=310&dt=320&dt=330&dt=340&dt=350&o=40"
        )
        response.raise_for_status()  # Raise error for bad responses

        # The response is a UTF-16 encoded CSV in plain text
        csv_content = response.content.decode("utf-16", errors="replace")
        logger.info("Downloaded list of documents")

        df = pd.read_csv(StringIO(csv_content), sep=";")  # Semicolon-separated

        TMP_DIR.mkdir(parents=True, exist_ok=True)
        df.to_csv(TMP_DIR / (today.strftime("%Y-%m-%d") + ".csv"), index=False)

    return df[
        [
            "DokumentType",
            "DokumentId",
            "Titel",
            "Ressort",
            "Historisk",
            "PubliceretTidspunkt",
            "EliUrl",
        ]
    ]


def fetch_document(doc_info: pd.Series, session: requests.Session) -> dict:
    """Fetch a single document as JSON from the ELI API."""
    url = BASE_URL + doc_info["EliUrl"].strip().split("eli")[1]

    response = session.post(
        url,
        headers={
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json={},
    )
    response.raise_for_status()

    return response.json()[0]


def main():
    save_path = Path(__file__).parent / "retsinformationdk.parquet"
    documents = fetch_document_list()

    logger.info(f"Found {len(documents)} documents from retsinformationdk")

    session = create_session_with_retries()
    docs = []
    for _, doc_info in tqdm(documents.iterrows(), total=len(documents)):
        if doc_info["Historisk"]:  # skip historical (superseded) documents
            continue
        try:
            doc = fetch_document(doc_info, session)
            text = extract(doc["documentHtml"], output_format="markdown")
            published = date.fromisoformat(
                str(doc_info["PubliceretTidspunkt"])
            ).strftime("%Y-%m-%d")
            docs.append(
                {
                    "id": doc_info["DokumentId"],
                    "text": text if text else "",
                    "source": "retsinformationdk",
                    "added": today.strftime("%Y-%m-%d"),
                    "created": f"{published}, {published}",
                }
            )
        except Exception as e:
            logger.error(f"Ran into error: {e}")
            logger.error(f"Skipping doc {doc_info['DokumentId']}")

    ds = Dataset.from_list(docs)

    # quality checks and processing
    ds = remove_empty_texts(ds)
    ds = remove_duplicate_text(ds)
    ds = add_token_count(ds)
    ds = ensure_column_order(ds)

    ds.to_parquet(save_path)


if __name__ == "__main__":
    log_path = Path(__file__).parent / "retsinformationdk.log"
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_path),
        ],
    )
    main()
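In the script above, `fetch_document` derives the API endpoint by splitting the document's public `EliUrl` at the first occurrence of "eli" and appending the remainder to `BASE_URL`. A minimal sketch of that one line, using a hypothetical ELI URL for illustration:

```python
BASE_URL = "https://www.retsinformation.dk/api/document/eli"

def build_api_url(eli_url: str) -> str:
    # Everything after the first "eli" in the public URL becomes the API path suffix,
    # mirroring the URL construction in fetch_document.
    return BASE_URL + eli_url.strip().split("eli")[1]

# Hypothetical example value; real EliUrl entries come from the document-list CSV.
print(build_api_url("https://www.retsinformation.dk/eli/lta/2023/1234"))
# → https://www.retsinformation.dk/api/document/eli/lta/2023/1234
```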
data/retsinformationdk/descriptive_stats.json
CHANGED
@@ -1,6 +1,6 @@
 {
-    "number_of_samples":
-    "average_document_length":
-    "number_of_tokens":
-    "revision": "
+    "number_of_samples": 100524,
+    "average_document_length": 23265.030191794995,
+    "number_of_tokens": 818252220,
+    "revision": "2c91001b440e33497c34fbfa9b40dfffffa25620"
 }
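The three stats above are mutually consistent; dividing them out gives roughly 8.1K Llama 3 tokens per document at about 2.9 characters per token, plausible for long legal texts. A quick check with the values copied from the JSON:

```python
# Values copied from data/retsinformationdk/descriptive_stats.json.
number_of_samples = 100524
number_of_tokens = 818252220
average_document_length = 23265.030191794995  # characters

tokens_per_doc = number_of_tokens / number_of_samples
chars_per_token = average_document_length / tokens_per_doc

print(round(tokens_per_doc))       # ≈ 8140 tokens per document
print(round(chars_per_token, 2))   # ≈ 2.86 characters per Llama 3 token
```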
data/retsinformationdk/images/dist_document_length.png
CHANGED
Binary file updated (stored with Git LFS).
data/retsinformationdk/retsinformationdk.md
CHANGED
@@ -39,9 +39,9 @@
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
 - **Domains**: Legal
-- **Number of samples**:
-- **Number of tokens (Llama 3)**:
-- **Average document length (characters)**:
+- **Number of samples**: 100.52K
+- **Number of tokens (Llama 3)**: 818.25M
+- **Average document length (characters)**: 23265.03
 <!-- END-DESC-STATS -->
 
 
@@ -52,12 +52,12 @@
 <!-- START-SAMPLE -->
 ```py
 {
-    "id": "
-    "text": "
+    "id": "AA014851",
+    "text": "Indsamlingsnævnets afgørelse i sag nr. 22-730-00015\n\nIndsamlingsnævnet fandt det kritisabelt, at Gad[...]",
     "source": "retsinformationdk",
-    "added": "
-    "created": "
-    "token_count":
+    "added": "2025-06-26",
+    "created": "2025-06-25, 2025-06-25",
+    "token_count": 4062
 }
 ```
 
data/retsinformationdk/retsinformationdk.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:191bab8a3e7ae419394a622b74ae0fe64e9b5033066eeab4a3b3d2960153d48a
+size 1017748370
descriptive_stats.json
CHANGED
@@ -1,6 +1,6 @@
 {
-    "number_of_samples":
-    "average_document_length":
-    "number_of_tokens":
-    "revision": "
+    "number_of_samples": 927893,
+    "average_document_length": 15475.058174811104,
+    "number_of_tokens": 4671403830,
+    "revision": "2c91001b440e33497c34fbfa9b40dfffffa25620"
 }
images/dist_document_length.png
CHANGED
Binary file updated (stored with Git LFS).

images/domain_distribution.png
CHANGED
Binary file updated (stored with Git LFS).
test_results.log
CHANGED
@@ -1,5 +1,5 @@
 ============================= test session starts ==============================
-platform darwin -- Python 3.12.
+platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
 rootdir: /Users/kristianjensen/Documents/danish-dynaword
 configfile: pyproject.toml
 collected 310 items
@@ -21,4 +21,4 @@ src/tests/test_quality/test_short_texts.py: 34 warnings
   /Users/kristianjensen/Documents/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
 
 -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
-================= 309 passed, 1 skipped, 34 warnings in
+================= 309 passed, 1 skipped, 34 warnings in 44.54s =================