Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
License: Openly licensed; see the respective source datasets
Fix memo (#68)

- delete memo (9e941ac4e56b7b77ec77b84a3a40130ba78a9511)
- Updated memo (42a5122528ae84ba3bed9679869a4386ea004b8f)
- add test logs, format and bumped version (d423cf4c52af795084bde064f4db385aa33d8267)
- CHANGELOG.md +6 -0
- README.md +41 -42
- data/memo/create.py +164 -88
- data/memo/descriptive_stats.json +4 -4
- data/memo/images/dist_document_length.png +2 -2
- data/memo/memo.log +20 -0
- data/memo/memo.md +18 -16
- data/memo/memo.parquet +2 -2
- descriptive_stats.json +4 -4
- images/dist_document_length.png +2 -2
- images/domain_distribution.png +2 -2
- pyproject.toml +1 -1
- src/tests/test_quality/test_duplicates.py +1 -0
- test_results.log +18 -9
- uv.lock +0 -0
CHANGELOG.md
CHANGED

@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.2.0] - 2025-06-23
+
+### Fixed
+
+- Updated the memo dataset. This second version fixes a previous [issue](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) where the download and processing of the Danish MeMo corpus cut off the text, leading to notably smaller documents.
+
 ## [v1.1.1] - 2025-06-16
 
 ### Added
README.md
CHANGED

@@ -174,7 +174,7 @@
 <!-- START README TABLE -->
 |              |   |
 | ------------ | - |
-| **Version**  | 1.1.1 ([Changelog](/CHANGELOG.md)) |
+| **Version**  | 1.2.0 ([Changelog](/CHANGELOG.md)) |
 | **Language** | dan, dansk, Danish |
 | **License**  | Openly Licensed, see the respective dataset |
 | **Models**   | For models trained using this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |

@@ -206,15 +206,14 @@
 - [License information](#license-information)
 - [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Notice and takedown policy](#notice-and-takedown-policy)
--  [We will comply with legitimate requests by removing the affected sources from the next release of the corpus](#we-will-comply-with-legitimate-requests-by-removing-the-affected-sources-from-the-next-release-of-the-corpus)
 
 ## Dataset Description
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
-- **Number of samples**: 891.
-- **Number of tokens (Llama 3)**: 4.
-- **Average document length (characters)**:
+- **Number of samples**: 891.09K
+- **Number of tokens (Llama 3)**: 4.37B
+- **Average document length (characters)**: 15086.31
 <!-- END-DESC-STATS -->

@@ -312,43 +311,43 @@
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
 <!-- START-MAIN TABLE -->
+| Source | Description | Domain | N. Tokens | License |
+|:-------|:------------|:-------|:----------|:--------|
+| [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
+| [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Books | 531.97M | [CC-0] |
+| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 516.35M | [Danish Copyright Law] |
+| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
+| [ncc_parliament] | Collections from the Norwegian parliament in Danish, extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Other | 338.87M | [NLOD 2.0] |
+| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
+| [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
+| [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
+| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
+| [wiki] | The Danish subsection of [wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 122.00M | [CC-0] |
+| [ft] | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
+| [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 113.74M | [CC-BY-SA 4.0] |
+| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
+| [adl] | Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL) | Books | 58.49M | [CC-0] |
+| [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
+| [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
+| [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
+| [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
+| [ncc_maalfrid] | Danish content from Norwegian institutions' websites | Web | 29.26M | [NLOD 2.0] |
+| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
+| [danske-taler] | Danish speeches from [dansketaler.dk](https://www.dansketaler.dk) | Conversation | 8.23M | [CC-0] |
+| [nota] | The text-only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
+| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
+| [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | Books | 6.24M | [CC-0] |
+| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 5.34M | [CC-0] |
+| [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
+| [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversation | 1.56M | [CC-0] |
+| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
+| [relig] | Danish religious texts from 1700-2022 | Books | 1.24M | [CC-0] |
+| [ncc_newspaper] | OCR'd newspapers derived from [NCC](https://huggingface.co/datasets/NbAiLab/NCC) | News | 1.05M | [CC-0] |
+| [botxt] | The Bornholmsk Ordbog Dictionary Project | Dialect | 847.97K | [CC-0] |
+| [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
+| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
+| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
+| **Total** | | | 4.37B | |
 
 [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
 [cellar]: data/cellar/cellar.md
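The updated statistics in the README can be sanity-checked by loading the subset directly. Below is a minimal sketch using the `datasets` library; the config name `"memo"` is an assumption based on the per-source configs exercised in the test suite further down.

```python
# Minimal sketch: load the updated memo subset and check the new statistics.
# The config name "memo" is assumed from the per-source test parametrization.
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", "memo", split="train")

print(len(ds))  # expected: 858 samples after the fix
avg_chars = sum(len(t) for t in ds["text"]) / len(ds)
print(f"average document length: {avg_chars:.2f}")  # expected: ~375749.09
```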
data/memo/create.py
CHANGED (the script was rewritten; the updated file follows)

# /// script
# requires-python = "==3.12"
# dependencies = [
#     "datasets==3.2.0",
#     "dynaword"
# ]
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
# ///
"""
Script for downloading and processing the Danish MeMo repository.

Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv run data/memo/create.py
```

This second version fixes previous issues with the download and processing of the Danish MeMo repository:
https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67
"""

import logging
import subprocess
from datetime import datetime
from pathlib import Path
from typing import Any

import pandas as pd
from datasets import Dataset

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)

logger = logging.getLogger(__name__)

download_path = Path(__file__).parent / "tmp"


def download_repo(
    download_path: Path = download_path,
    repo_url: str = "https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1",
    revision: str = "7205897f1f3ee65e296072f3e96d49488e54e8ce",
) -> Path:
    """Downloads the repository from the given URL to the specified path."""
    logger.info(f"Downloading repository to {download_path}")
    if not download_path.exists():
        download_path.mkdir(parents=True, exist_ok=True)

    repo_path = download_path / repo_url.split("/")[-1]
    if repo_path.exists():
        logger.info(f"Repository already exists at {repo_path}, skipping download.")
        return repo_path

    # Use git to clone the repository, running it from the download path
    subprocess.run(["git", "clone", repo_url], check=True, cwd=download_path)
    # Check out the specific revision
    subprocess.run(["git", "checkout", revision], check=True, cwd=repo_path)
    logger.info("Download complete.")
    return repo_path


def load_texts(repo_path: Path) -> list[dict[str, str]]:
    """Loads texts from the downloaded repository."""
    text_files_path = repo_path / "texts"
    text_files = list(text_files_path.glob("*.txt"))
    texts = []
    for file in text_files:
        name = file.stem
        with file.open("r") as f:
            content = f.read()
        texts.append({"name": name, "text": content})
    logger.info(f"Loaded {len(texts)} texts from the repository.")
    return texts


def load_memo(repo_path: Path) -> pd.DataFrame:
    texts = load_texts(repo_path)

    metadata_csv = repo_path / "MeMo-corpus-metadata-v1.1-2023-06-20.csv"
    metadata = pd.read_csv(metadata_csv)
    # remove .pdf from "filename"
    metadata["filename"] = metadata["filename"].str.replace(".pdf", "", regex=False)
    texts_df = pd.DataFrame(texts)

    text_df_filenames = set(texts_df["name"])
    metadata_filenames = set(metadata["filename"])

    text_without_metadata = [t for t in text_df_filenames if t not in metadata_filenames]

    assert (
        len(text_without_metadata) == 0
    ), f"Some texts in the repository do not have metadata: {text_without_metadata}"

    # merge texts with metadata
    merged_df = pd.merge(
        texts_df, metadata, left_on="name", right_on="filename", how="inner"
    )

    logger.info(f"Loaded {len(merged_df)} rows from the MeMo dataset.")
    return merged_df


def convert_to_dynaword_format(memo_df: pd.DataFrame) -> Dataset:
    # convert to dynaword samples
    samples: list[dict[str, Any]] = []
    for _, row in memo_df.iterrows():
        text = row["text"]
        assert isinstance(text, str), f"Text is not a string: {text}"

        # if there is a title then add it to the text
        title = row["title"] if pd.notna(row["title"]) else "Ukendt titel"
        subtitle = row["subtitle"] if pd.notna(row["subtitle"]) else ""
        title = f"{title} {subtitle}".strip()

        first_name = row["firstname"]
        last_name = row["surname"]
        pseudonym = row["pseudonym"]

        full_name = f"{first_name} {last_name}".strip()
        if not full_name:
            full_name = pseudonym if pd.notna(pseudonym) else "Ukendt forfatter"
        else:
            # add the pseudonym if it exists
            if pd.notna(pseudonym) and pseudonym != full_name:
                full_name += f" (Pseudonym: {pseudonym})"

        # create a new text with the title and author
        text_new = f"{title}\n\nSkrevet af {full_name}\nPubliceret {row['year']} af {row['publisher']}\n ------- \n\n{text}"

        today = datetime.now().date()
        sample = {
            "id": row["filename"],
            "text": text_new,
            "source": "memo",
            "added": today.isoformat(),
            "created": f"{row['year']}-01-01, {row['year']}-12-31",
        }

        samples.append(sample)

    ds = Dataset.from_list(samples)
    logger.info(f"Converted to dynaword format with {len(ds)} samples.")
    return ds


def main():
    save_path = Path(__file__).parent / "memo.parquet"

    repo_path = download_repo(download_path)
    memo_df = load_memo(repo_path)
    ds = convert_to_dynaword_format(memo_df)

    # quality checks and processing
    ds = remove_empty_texts(ds)
    ds = remove_duplicate_text(ds)
    ds = add_token_count(ds)
    ds = ensure_column_order(ds)

    # save to parquet
    ds.to_parquet(save_path)


if __name__ == "__main__":
    log_path = Path(__file__).parent / "memo.log"
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_path),
        ],
    )
    main()
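As a quick cross-check of the regenerated output, the parquet file can be compared against `data/memo/descriptive_stats.json`. A sketch, assuming it is run from the repository root with the LFS file pulled:

```python
# Sketch: verify memo.parquet against the values in descriptive_stats.json.
import pandas as pd

df = pd.read_parquet("data/memo/memo.parquet")

assert len(df) == 858, "number_of_samples mismatch"
assert int(df["token_count"].sum()) == 113_742_425, "number_of_tokens mismatch"
print(df["text"].str.len().mean())  # expected: ~375749.09 characters
```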
data/memo/descriptive_stats.json
CHANGED

@@ -1,6 +1,6 @@
 {
-    "number_of_samples":
-    "average_document_length":
-    "number_of_tokens":
-    "revision": "
+    "number_of_samples": 858,
+    "average_document_length": 375749.0874125874,
+    "number_of_tokens": 113742425,
+    "revision": "9e941ac4e56b7b77ec77b84a3a40130ba78a9511"
 }
data/memo/images/dist_document_length.png
CHANGED (binary image, tracked with Git LFS)
data/memo/memo.log
ADDED

@@ -0,0 +1,20 @@
+2025-06-23 15:14:07,867 - INFO - Downloading repository to /Users/au561649/Github/danish-dynaword/data/memo/tmp
+2025-06-23 15:14:07,867 - INFO - Repository already exists at /Users/au561649/Github/danish-dynaword/data/memo/tmp/Corpus-v1.1, skipping download.
+2025-06-23 15:14:19,489 - INFO - Loaded 858 texts from the repository.
+2025-06-23 15:14:19,512 - INFO - Loaded 858 rows from the MeMo dataset.
+2025-06-23 15:14:20,848 - INFO - Converted to dynaword format with 858 samples.
+2025-06-23 15:14:20,903 - INFO - Removing empty texts
+2025-06-23 15:14:25,977 - INFO - Filtered 0 empty examples
+2025-06-23 15:14:25,977 - INFO - Removing duplicate texts
+2025-06-23 15:14:26,434 - INFO - Filtered 0 duplicate examples
+2025-06-23 15:15:40,637 - INFO - Ensuring columns are in the correct order and are present
+2025-06-23 15:33:08,880 - INFO - Downloading repository to /Users/au561649/Github/danish-dynaword/data/memo/tmp
+2025-06-23 15:33:08,880 - INFO - Repository already exists at /Users/au561649/Github/danish-dynaword/data/memo/tmp/Corpus-v1.1, skipping download.
+2025-06-23 15:33:19,998 - INFO - Loaded 858 texts from the repository.
+2025-06-23 15:33:20,025 - INFO - Loaded 858 rows from the MeMo dataset.
+2025-06-23 15:33:21,332 - INFO - Converted to dynaword format with 858 samples.
+2025-06-23 15:33:21,373 - INFO - Removing empty texts
+2025-06-23 15:33:25,745 - INFO - Filtered 0 empty examples
+2025-06-23 15:33:25,746 - INFO - Removing duplicate texts
+2025-06-23 15:33:26,174 - INFO - Filtered 0 duplicate examples
+2025-06-23 15:34:37,788 - INFO - Ensuring columns are in the correct order and are present
data/memo/memo.md
CHANGED

@@ -23,23 +23,16 @@
 
 The MeMo corpus is established to investigate literary and cultural change in a seminal epoch of Scandinavian cultural and social history (known as 'the modern breakthrough') using natural language processing and other computational methods. The corpus consists of original novels by Norwegian and Danish authors printed in Denmark in the period 1870-99. It includes 858 volumes, totaling 4.5 million sentences and 65 million words.
 
-Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
-
-Additional information about this dataset can be found on their [project page](https://nors.ku.dk/english/research/projects/measuring-modernity/) or on their huggingface [dataset](https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1).
+Additional information about this dataset can be found on their [project page](https://nors.ku.dk/english/research/projects/measuring-modernity/) or on their huggingface [dataset](https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1). The dataset can be inspected online using [the Korp platform](https://alf.hum.ku.dk/korp/?mode=memo_all#?cqp=%5B%5D&corpus=memo_all).
 
 ## Dataset Description
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
 - **Domains**: Books
-- **Number of samples**:
-- **Number of tokens (Llama 3)**:
-- **Average document length (characters)**:
+- **Number of samples**: 858
+- **Number of tokens (Llama 3)**: 113.74M
+- **Average document length (characters)**: 375749.09
 <!-- END-DESC-STATS -->

@@ -49,12 +42,12 @@
 <!-- START-SAMPLE -->
 ```py
 {
-    "id": "
-    "text": "
+    "id": "1887_Paulsen_EnFremtidskvinde",
+    "text": "En fremtidskvinde?\n\nSkrevet af John Paulsen\nPubliceret 1887 af Schubothe\n ------- \n\nDen skandinavisk[...]",
     "source": "memo",
-    "added": "2025-
-    "created": "
-    "token_count":
+    "added": "2025-06-23",
+    "created": "1887-01-01, 1887-12-31",
+    "token_count": 98454
 }
 ```

@@ -79,6 +72,15 @@
 </p>
 <!-- END-DATASET PLOTS -->
 
+### Processing
+
+In addition to the text itself, we prefix each document with the title, year, author name, pseudonym, and publisher. This allows the model to learn the relation between the document and the relevant metadata.
+
+### Updates and Corrections
+
+This version fixes a previous [issue](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) in MeMo where the documents were incorrectly truncated and normalized. Removing this truncation led to a >10x increase in the number of tokens.
+
 ## Additional Information
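The metadata prefix described under "Processing" mirrors the f-string in `create.py`. For the sample entry above, it renders as follows (values taken from that sample; the novel body is elided):

```python
# Illustration: the metadata header prepended to each MeMo document.
# Values are from the sample entry above; the novel text itself is elided.
title, author, year, publisher = "En fremtidskvinde?", "John Paulsen", 1887, "Schubothe"
header = f"{title}\n\nSkrevet af {author}\nPubliceret {year} af {publisher}\n ------- \n\n"
print(header + "Den skandinavisk[...]")
```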
data/memo/memo.parquet
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:44002e00b3e876bb6ebd70949723a08310bb022e4e91502c5ec7a64efb6d4706
+size 202092223
descriptive_stats.json
CHANGED

@@ -1,6 +1,6 @@
 {
-    "number_of_samples":
-    "average_document_length":
-    "number_of_tokens":
-    "revision": "
+    "number_of_samples": 891094,
+    "average_document_length": 15086.31267857263,
+    "number_of_tokens": 4369008328,
+    "revision": "9e941ac4e56b7b77ec77b84a3a40130ba78a9511"
 }
images/dist_document_length.png
CHANGED (binary image, tracked with Git LFS)

images/domain_distribution.png
CHANGED (binary image, tracked with Git LFS)
pyproject.toml
CHANGED

@@ -1,6 +1,6 @@
 [project]
 name = "dynaword"
-version = "1.1.1"
+version = "1.2.0"
 description = "project code for the danish dynaword project"
 readme = "README.md"
 requires-python = ">=3.12,<3.13" # 3.13 has issues with spacy and pytorch
src/tests/test_quality/test_duplicates.py
CHANGED

@@ -6,6 +6,7 @@ from datasets import Dataset, load_dataset
 from dynaword.paths import repo_path
 from ..conftest import DATASET_NAMES
 
+
 @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
 def test_no_within_data_duplicates(dataset_name: str):
     ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
test_results.log
CHANGED

@@ -2,15 +2,24 @@
 platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
 rootdir: /Users/au561649/Github/danish-dynaword
 configfile: pyproject.toml
+plugins: anyio-4.9.0
+collected 310 items
 
+src/tests/test_dataset_schema.py ....................................... [ 12%]
+............................. [ 21%]
+src/tests/test_datasheets.py ........................................... [ 35%]
+........................................................................ [ 59%]
+....................................................... [ 76%]
+src/tests/test_load.py .. [ 77%]
+src/tests/test_quality/test_duplicates.py .............................. [ 87%]
+....s [ 88%]
+src/tests/test_quality/test_short_texts.py ............................. [ 98%]
+..... [ 99%]
 src/tests/test_unique_ids.py . [100%]
 
+=============================== warnings summary ===============================
+src/tests/test_quality/test_short_texts.py: 34 warnings
+  /Users/au561649/Github/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
+
+-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
+============ 309 passed, 1 skipped, 34 warnings in 77.84s (0:01:17) ============
uv.lock
CHANGED
The diff for this file is too large to render. See raw diff.