KennethEnevoldsen committed
Commit 16931a4 · verified · 1 Parent(s): 00e7f2a
CHANGELOG.md CHANGED
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.2.0] - 2025-06-23
+
+### Fixed
+
+- Updated the memo dataset. This second version fixes a previous [issue](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) with the download and processing of the Danish MeMo corpus, which cut off the text and led to notably smaller documents.
+
 ## [v1.1.1] - 2025-06-16
 
 ### Added
README.md CHANGED
@@ -174,7 +174,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 <!-- START README TABLE -->
 | | |
 | ------------ | ------------- |
-| **Version** | 1.1.1 ([Changelog](/CHANGELOG.md)) |
+| **Version** | 1.2.0 ([Changelog](/CHANGELOG.md)) |
 | **Language** | dan, dansk, Danish |
 | **License** | Openly licensed; see the respective dataset |
 | **Models** | For models trained using this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
@@ -206,15 +206,14 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 - [License information](#license-information)
 - [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Notice and takedown policy](#notice-and-takedown-policy)
-  - [We will comply with legitimate requests by removing the affected sources from the next release of the corpus](#we-will-comply-with-legitimate-requests-by-removing-the-affected-sources-from-the-next-release-of-the-corpus)
 
 ## Dataset Description
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
-- **Number of samples**: 891.08K
-- **Number of tokens (Llama 3)**: 4.26B
-- **Average document length (characters)**: 14755.73
+- **Number of samples**: 891.09K
+- **Number of tokens (Llama 3)**: 4.37B
+- **Average document length (characters)**: 15086.31
 <!-- END-DESC-STATS -->
 
@@ -312,43 +311,43 @@ This data generally contains no annotation besides the metadata attached to each
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
 <!-- START-MAIN TABLE -->
 | Source | Description | Domain | N. Tokens | License |
 |:--------------------|:------------|:-------------|:------------|:-----------------------|
 | [cellar] | The official digital repository for European Union legal documents and open data | Legal | 1.15B | [CC-BY-SA 4.0] |
 | [ncc_books] | Danish books extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Books | 531.97M | [CC-0] |
 | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | Legal | 516.35M | [Danish Copyright Law] |
 | [hest] | Samples from the Danish debate forum www.heste-nettet.dk | Social Media | 389.32M | [CC-0] |
 | [ncc_parliament] | Collections from the Norwegian parliament in Danish, extracted from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC) derived from OCR | Other | 338.87M | [NLOD 2.0] |
 | [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | Conversation | 271.60M | [CC-0] |
 | [ai-aktindsigt] | Multiple web scrapes from municipality websites collected as a part of the [AI-aktindsigt](https://ai-aktindsigt.dk) project | Web | 139.23M | [Apache 2.0] |
 | [miljoeportalen] | Data from [Danmarks Miljøportalen](https://www.miljoeportal.dk/om-danmarks-miljoeportal/) (Denmark's Environment Portal) | Web | 127.38M | [CC-0] |
 | [skat] | Skat is the Danish tax authority; this dataset contains content from its website skat.dk | Legal | 122.11M | [CC-0] |
 | [wiki] | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) | Encyclopedic | 122.00M | [CC-0] |
 | [ft] | Records from all meetings of the Danish parliament (Folketinget) in the parliament hall | Conversation | 114.09M | [CC-0] |
+| [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 113.74M | [CC-BY-SA 4.0] |
 | [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | Conversation | 100.84M | [CC-0] |
-| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | Books | 58.49M | [CC-0] |
+| [adl] | Danish literature from 1700-2023 from the [Archive for Danish Literature](https://tekster.kb.dk/text?editorial=no&f%5Bsubcollection_ssi%5D%5B%5D=adl&match=one&search_field=Alt) (ADL) | Books | 58.49M | [CC-0] |
 | [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | Legal | 56.26M | [CC-0] |
 | [fm-udgivelser] | The official publication series of the Danish Ministry of Finance containing economic analyses, budget proposals, and fiscal policy documents | Legal | 50.34M | [CC-BY-SA 4.0] |
 | [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | News | 37.90M | [CC-0] |
 | [eur-lex-sum-da] | The Danish subsection of EUR-lex SUM, consisting of EU legislation paired with professionally written summaries | Legal | 31.37M | [CC-BY-SA 4.0] |
 | [ncc_maalfrid] | Danish content from Norwegian institutions' websites | Web | 29.26M | [NLOD 2.0] |
 | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | News | 21.67M | [CC-BY-SA 4.0] |
-| [memo] | The MeMo corpus comprising almost all Danish novels from the period 1870-1899, known as the Modern Breakthrough | Books | 9.28M | [CC-BY-SA 4.0] |
 | [danske-taler] | Danish speeches from [dansketaler.dk](https://www.dansketaler.dk) | Conversation | 8.23M | [CC-0] |
 | [nota] | The text-only part of the [Nota lyd- og tekstdata](https://sprogteknologi.dk/dataset/nota-lyd-og-tekstdata) dataset | Readaloud | 7.30M | [CC-0] |
 | [gutenberg] | The Danish subsection of Project [Gutenberg](https://www.gutenberg.org) | Books | 6.76M | [Gutenberg] |
 | [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | Books | 6.24M | [CC-0] |
 | [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | Encyclopedic | 5.34M | [CC-0] |
 | [jvj] | The works of the Danish author and poet [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | Books | 3.55M | [CC-BY-SA 4.0] |
 | [spont] | Conversational samples collected as a part of research projects at Aarhus University | Conversation | 1.56M | [CC-0] |
 | [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | Other | 1.48M | [DanNet 1.0] |
 | [relig] | Danish religious texts from 1700-2022 | Books | 1.24M | [CC-0] |
 | [ncc_newspaper] | OCR'd newspapers derived from [NCC](https://huggingface.co/datasets/NbAiLab/NCC) | News | 1.05M | [CC-0] |
 | [botxt] | The Bornholmsk Ordbog dictionary project | Dialect | 847.97K | [CC-0] |
 | [naat] | Danish speeches from 1930-2022 | Conversation | 286.68K | [CC-0] |
 | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | Other | 185.45K | [CC-BY-SA 4.0] |
 | [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | Other | 52.02K | [CC-0] |
-| **Total** | | | 4.26B | |
+| **Total** | | | 4.37B | |
 
 [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
 [cellar]: data/cellar/cellar.md
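
Since the corpus exposes each source as a named configuration (as exercised by the test suite further below), the updated MeMo subset can be loaded on its own. A minimal sketch, assuming the per-source config name `memo` from the source table above:

```python
# Minimal sketch: load only the updated MeMo subset from the dataset repo.
# Assumes the per-source config name ("memo") matches the source table above.
from datasets import load_dataset

memo = load_dataset(
    "danish-foundation-models/danish-dynaword", "memo", split="train"
)
print(memo[0]["id"], memo[0]["token_count"])
```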
data/memo/create.py CHANGED
@@ -2,107 +2,183 @@
 # requires-python = "==3.12"
 # dependencies = [
 #     "datasets==3.2.0",
+#     "dynaword"
 # ]
+# [tool.uv.sources]
+# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "00e7f2aee7f7ad2da423419f77ecbb9c0536de0d" }
 # ///
-from datetime import datetime, timedelta
+"""
+Script for downloading and processing the Danish MeMo repository.
+
+Note: To run this script, you need to set `GIT_LFS_SKIP_SMUDGE=1` to be able to install dynaword:
+
+```bash
+GIT_LFS_SKIP_SMUDGE=1 uv run data/memo/create.py
+```
+
+This second version fixes previous issues with the download and processing of the Danish MeMo repository:
+https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67
+"""
+
+import logging
+import subprocess
+from datetime import datetime
 from pathlib import Path
-from typing import cast
-
-from datasets import Dataset, load_dataset
-
-column_order = [
-    "text",
-    "source",
-    "id",
-    "added",
-    "created",
-    "license",
-    "domain",
-    "metadata",
-]
-
-
-def convert_sample(example: dict) -> dict:
-    # from sample:
-    # {
-    #     "filename": "1894_Aagaard_UnderligeFyre",
-    #     "full_firstnames": "Oscar",
-    #     "auth_first": "Oscar",
-    #     "auth_last_modern": "Aagaard",
-    #     "pseudonym": None,
-    #     "publ_date": 1894,
-    #     "title_modern": "Underlige Fyre",
-    #     "published_under_gender": "male",
-    #     "real_gender": "male",
-    #     "nationality": "no",
-    #     "auth_id": 1,
-    #     "auth_last": "Aagaard",
-    #     "title": "Underlige Fyre",
-    #     "surname": "Aagaard",
-    #     "title.1": "Underlige Fyre",
-    #     "subtitle": "Fortælling",
-    #     "volume": None,
-    #     "year": 1894,
-    #     "pages": 263.0,
-    #     "illustrations": "n",
-    #     "typeface": "roman",
-    #     "publisher": "Gyldendal",
-    #     "price": 3.0,
-    #     "source": "KB",
-    #     "notes": None,
-    #     "filepath": None,
-    #     "historical": None,
-    #     "period": "nan",
-    #     "period_notes": "nan",
-    #     "novel_start": 13.0,
-    #     "novel_end": 275.0,
-    #     "serialno": 854.0,
-    #     "category": "O",
-    #     "e_canon": 0,
-    #     "ce_canon": 0,
-    #     "lex_canon": 0,
-    #     "text": "Første kapitel. Argus & co. Waterclerker — hvormange er der vel, ...",
-    # }
-
-    min_date = datetime.fromisoformat(f"{example['year']}-01-01")
-    max_date = datetime.fromisoformat(f"{example['year']}-12-31")
-    text = f"{example['title_modern']}\n\nSkrevet af {example['full_firstnames']} {example['auth_last_modern']}\nPubliceret {example['year']} af {example['publisher']}\n\n{example['text']}"
-
-    new_example = dict(
-        text_new=text,
-        id=example["filename"],
-        source="memo",
-        domain="Wiki & Books",
-        license="cc-by-sa-4.0",
-        added="2025-03-08",
-        created=f"{min_date.date()}, {max_date.date()}",
-        metadata={"source-pretty": "MeMo Canonical Novels"},
-    )
-
-    return new_example
-
-
-def main():
-    ds = load_dataset("chcaa/memo-canonical-novels", split="train")
-    ds = cast(Dataset, ds)
-    dates = [datetime.fromisoformat(f"{date}-01-01").date() for date in ds["year"]]
-
-    max_date = max(dates) + timedelta(days=364)
-    print(str(min(dates)), ",", str(max_date))  # 1870-01-01 , 1899-12-31
-
-    assert len(set(ds["filename"])) == len(ds), "IDs are not unique"
-    assert len(set(ds["text"])) == len(ds), "Texts are not unique"
-
-    ds = ds.map(convert_sample, num_proc=4)
-    ds = ds.select_columns(column_order[1:] + ["text_new"])
-    ds = ds.rename_columns({"text_new": "text"})
-    # ensure order
-    ds = ds.select_columns(column_order)
-
-    dir = Path(__file__).parent
-    save_path = dir / f"{dir.name}.parquet"
+from typing import Any
+
+import pandas as pd
+from datasets import Dataset
+
+from dynaword.process_dataset import (
+    add_token_count,
+    ensure_column_order,
+    remove_duplicate_text,
+    remove_empty_texts,
+)
+
+logger = logging.getLogger(__name__)
+
+download_path = Path(__file__).parent / "tmp"
+
+
+def download_repo(
+    download_path: Path = download_path,
+    repo_url: str = "https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1",
+    revision: str = "7205897f1f3ee65e296072f3e96d49488e54e8ce",
+) -> Path:
+    """Downloads the repository from the given URL to the specified path."""
+    logger.info(f"Downloading repository to {download_path}")
+    if not download_path.exists():
+        download_path.mkdir(parents=True, exist_ok=True)
+
+    repo_path = download_path / repo_url.split("/")[-1]
+    if repo_path.exists():
+        logger.info(f"Repository already exists at {repo_path}, skipping download.")
+        return repo_path
+
+    # Clone the repository into the download path, then pin the specific revision
+    subprocess.run(["git", "clone", repo_url], check=True, cwd=download_path)
+    subprocess.run(["git", "checkout", revision], check=True, cwd=repo_path)
+    logger.info("Download complete.")
+    return repo_path
+
+
+def load_texts(repo_path: Path) -> list[dict[str, str]]:
+    """Loads texts from the downloaded repository."""
+    text_files_path = repo_path / "texts"
+    text_files = list(text_files_path.glob("*.txt"))
+    texts = []
+    for file in text_files:
+        name = file.stem
+        with file.open("r") as f:
+            content = f.read()
+        texts.append({"name": name, "text": content})
+    logger.info(f"Loaded {len(texts)} texts from the repository.")
+    return texts
+
+
+def load_memo(repo_path: Path) -> pd.DataFrame:
+    texts = load_texts(repo_path)
+
+    metadata_csv = repo_path / "MeMo-corpus-metadata-v1.1-2023-06-20.csv"
+    metadata = pd.read_csv(metadata_csv)
+    # remove .pdf from "filename"
+    metadata["filename"] = metadata["filename"].str.replace(".pdf", "", regex=False)
+    texts_df = pd.DataFrame(texts)
+
+    texts_df_filenames = set(texts_df["name"])
+    metadata_filenames = set(metadata["filename"])
+
+    texts_without_metadata = [
+        t for t in texts_df_filenames if t not in metadata_filenames
+    ]
+
+    assert (
+        len(texts_without_metadata) == 0
+    ), f"Some texts in the repository do not have metadata: {texts_without_metadata}"
+
+    # merge texts with metadata
+    merged_df = pd.merge(
+        texts_df, metadata, left_on="name", right_on="filename", how="inner"
+    )
+
+    logger.info(f"Loaded {len(merged_df)} rows from the MeMo dataset.")
+    return merged_df
+
+
+def convert_to_dynaword_format(memo_df: pd.DataFrame) -> Dataset:
+    # convert to dynaword samples
+    samples: list[dict[str, Any]] = []
+    for _, row in memo_df.iterrows():
+        text = row["text"]
+        assert isinstance(text, str), f"Text is not a string: {text}"
+
+        # if there is a title then add it to the text
+        title = row["title"] if pd.notna(row["title"]) else "Ukendt titel"
+        subtitle = row["subtitle"] if pd.notna(row["subtitle"]) else ""
+        title = f"{title} {subtitle}".strip()
+
+        first_name = row["firstname"]
+        last_name = row["surname"]
+        pseudonym = row["pseudonym"]
+
+        full_name = f"{first_name} {last_name}".strip()
+        if not full_name:
+            full_name = pseudonym if pd.notna(pseudonym) else "Ukendt forfatter"
+        else:
+            # add pseudonym if it exists
+            if pd.notna(pseudonym) and pseudonym != full_name:
+                full_name += f" (Pseudonym: {pseudonym})"
+
+        # create a new text with the title and author
+        text_new = f"{title}\n\nSkrevet af {full_name}\nPubliceret {row['year']} af {row['publisher']}\n ------- \n\n{text}"
+
+        today = datetime.now().date()
+        sample = {
+            "id": row["filename"],
+            "text": text_new,
+            "source": "memo",
+            "added": today.isoformat(),
+            "created": f"{row['year']}-01-01, {row['year']}-12-31",
+        }
+
+        samples.append(sample)
+
+    ds = Dataset.from_list(samples)
+    logger.info(f"Converted to dynaword format with {len(ds)} samples.")
+    return ds
+
+
+def main():
+    save_path = Path(__file__).parent / "memo.parquet"
+
+    repo_path = download_repo(download_path)
+    memo_df = load_memo(repo_path)
+    ds = convert_to_dynaword_format(memo_df)
+
+    # quality checks and processing
+    ds = remove_empty_texts(ds)
+    ds = remove_duplicate_text(ds)
+    ds = add_token_count(ds)
+    ds = ensure_column_order(ds)
+
+    # save to parquet
     ds.to_parquet(save_path)
 
 
 if __name__ == "__main__":
+    log_path = Path(__file__).parent / "memo.log"
+    logging.basicConfig(
+        level=logging.INFO,
+        format="%(asctime)s - %(levelname)s - %(message)s",
+        handlers=[
+            logging.StreamHandler(),
+            logging.FileHandler(log_path),
+        ],
+    )
    main()
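
The quality-check helpers imported from `dynaword.process_dataset` are used here as black boxes. For intuition, a hypothetical sketch of what counting tokens against the Llama 3 tokenizer (the tokenizer behind the README statistics) could look like — the actual `add_token_count` implementation may differ:

```python
# Hypothetical sketch only; the real dynaword add_token_count may differ.
# Assumes the Llama 3 tokenizer referenced by the README statistics.
from datasets import Dataset
from transformers import AutoTokenizer


def add_token_count_sketch(ds: Dataset, model: str = "meta-llama/Meta-Llama-3-8B") -> Dataset:
    tokenizer = AutoTokenizer.from_pretrained(model)

    def count(example: dict) -> dict:
        # Store the length of the tokenized text as a new column
        return {"token_count": len(tokenizer(example["text"])["input_ids"])}

    return ds.map(count)
```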
data/memo/descriptive_stats.json CHANGED
@@ -1,6 +1,6 @@
 {
-    "number_of_samples": 839,
-    "average_document_length": 32813.50774731824,
-    "number_of_tokens": 9283194,
-    "revision": "1546256ca9562ecef403e433276c36770859089e"
+    "number_of_samples": 858,
+    "average_document_length": 375749.0874125874,
+    "number_of_tokens": 113742425,
+    "revision": "9e941ac4e56b7b77ec77b84a3a40130ba78a9511"
 }
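
As a sanity check, the three new statistics above are internally consistent: total characters divided by total tokens gives a plausible characters-per-token ratio for Danish under the Llama 3 tokenizer.

```python
# Back-of-the-envelope check of the updated MeMo stats above.
n_samples = 858
avg_doc_len = 375749.0874125874
n_tokens = 113_742_425

total_chars = n_samples * avg_doc_len    # ≈ 322.4M characters
print(round(total_chars / n_tokens, 2))  # ≈ 2.83 characters per token
```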
data/memo/images/dist_document_length.png CHANGED
Git LFS Details (before)
  • SHA256: c0d134e8760070996542c5a4c3cbcd633762b7e75f333355b81976af60195785
  • Pointer size: 131 Bytes
  • Size of remote file: 546 kB
Git LFS Details (after)
  • SHA256: 3ddef7f93590da187c143a5a1e45fbb29eb2d16cf37f3372c823a1cb282c5f73
  • Pointer size: 131 Bytes
  • Size of remote file: 541 kB
data/memo/memo.log ADDED
@@ -0,0 +1,20 @@
+2025-06-23 15:14:07,867 - INFO - Downloading repository to /Users/au561649/Github/danish-dynaword/data/memo/tmp
+2025-06-23 15:14:07,867 - INFO - Repository already exists at /Users/au561649/Github/danish-dynaword/data/memo/tmp/Corpus-v1.1, skipping download.
+2025-06-23 15:14:19,489 - INFO - Loaded 858 texts from the repository.
+2025-06-23 15:14:19,512 - INFO - Loaded 858 rows from the MeMo dataset.
+2025-06-23 15:14:20,848 - INFO - Converted to dynaword format with 858 samples.
+2025-06-23 15:14:20,903 - INFO - Removing empty texts
+2025-06-23 15:14:25,977 - INFO - Filtered 0 empty examples
+2025-06-23 15:14:25,977 - INFO - Removing duplicate texts
+2025-06-23 15:14:26,434 - INFO - Filtered 0 duplicate examples
+2025-06-23 15:15:40,637 - INFO - Ensuring columns are in the correct order and are present
+2025-06-23 15:33:08,880 - INFO - Downloading repository to /Users/au561649/Github/danish-dynaword/data/memo/tmp
+2025-06-23 15:33:08,880 - INFO - Repository already exists at /Users/au561649/Github/danish-dynaword/data/memo/tmp/Corpus-v1.1, skipping download.
+2025-06-23 15:33:19,998 - INFO - Loaded 858 texts from the repository.
+2025-06-23 15:33:20,025 - INFO - Loaded 858 rows from the MeMo dataset.
+2025-06-23 15:33:21,332 - INFO - Converted to dynaword format with 858 samples.
+2025-06-23 15:33:21,373 - INFO - Removing empty texts
+2025-06-23 15:33:25,745 - INFO - Filtered 0 empty examples
+2025-06-23 15:33:25,746 - INFO - Removing duplicate texts
+2025-06-23 15:33:26,174 - INFO - Filtered 0 duplicate examples
+2025-06-23 15:34:37,788 - INFO - Ensuring columns are in the correct order and are present
data/memo/memo.md CHANGED
@@ -23,23 +23,16 @@ The MeMo corpus comprising almost all Danish novels from the period 1870-1899, k
 
 The MeMo corpus is established to investigate literary and cultural change in a seminal epoch of Scandinavian cultural and social history (known as 'the modern breakthrough') using natural language processing and other computational methods. The corpus consists of original novels by Norwegian and Danish authors printed in Denmark in the period 1870-99. It includes 858 volumes, totaling 4.5 million sentences and 65 million words.
 
-
-Lex.dk is a Danish online encyclopedia platform providing access to reliable and authoritative knowledge on a wide range of topics. It is created and curated by experts, ensuring high-quality, accurate content. The platform serves as a central hub for general and specialized information in Danish, making it a valuable resource for education, research, and general learning.
-
-
-Additional information about this dataset can be found on their [project page](https://nors.ku.dk/english/research/projects/measuring-modernity/) or on their huggingface [dataset](https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1).
-
-
-
+Additional information about this dataset can be found on the [project page](https://nors.ku.dk/english/research/projects/measuring-modernity/) or on the Hugging Face [dataset](https://huggingface.co/datasets/MiMe-MeMo/Corpus-v1.1). The dataset can be inspected online using [the Korp platform](https://alf.hum.ku.dk/korp/?mode=memo_all#?cqp=%5B%5D&corpus=memo_all).
 
 ## Dataset Description
 
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
 - **Domains**: Books
-- **Number of samples**: 839
-- **Number of tokens (Llama 3)**: 9.28M
-- **Average document length (characters)**: 32813.51
+- **Number of samples**: 858
+- **Number of tokens (Llama 3)**: 113.74M
+- **Average document length (characters)**: 375749.09
 <!-- END-DESC-STATS -->
 
@@ -49,12 +42,12 @@ An example from the dataset looks as follows.
 <!-- START-SAMPLE -->
 ```py
 {
-    "id": "1894_Aagaard_UnderligeFyre",
-    "text": "Underlige Fyre\n\nSkrevet af Oscar Aagaard\nPubliceret 1894 af Gyldendal\n\nFørste kapitel. Argus & co. W[...]",
+    "id": "1887_Paulsen_EnFremtidskvinde",
+    "text": "En fremtidskvinde?\n\nSkrevet af John Paulsen\nPubliceret 1887 af Schubothe\n ------- \n\nDen skandinavisk[...]",
     "source": "memo",
-    "added": "2025-03-08",
-    "created": "1894-01-01, 1894-12-31",
-    "token_count": 11058
+    "added": "2025-06-23",
+    "created": "1887-01-01, 1887-12-31",
+    "token_count": 98454
 }
 ```
 
@@ -79,6 +72,15 @@ An entry in the dataset consists of the following fields:
 </p>
 <!-- END-DATASET PLOTS -->
 
+### Processing
+
+In addition to the text itself, we prefix each document with its title, year, author name, pseudonym, and publisher. This allows the model to learn the relation between the document and its relevant metadata.
+
+### Updates and Corrections
+
+This version fixes a previous [issue](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/67) in MeMo where the documents were incorrectly truncated and normalized. Removing this truncation led to a more than tenfold increase in the number of tokens.
+
 
 ## Additional Information
 
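The "Processing" section above corresponds to the prefix construction in `data/memo/create.py`. A minimal illustration, using the values from the `1887_Paulsen_EnFremtidskvinde` sample shown earlier:

```python
# Mirrors the f-string in data/memo/create.py; values taken from the
# 1887_Paulsen_EnFremtidskvinde sample above (body truncated as in the sample).
title, author, year, publisher = "En fremtidskvinde?", "John Paulsen", 1887, "Schubothe"
body = "Den skandinavisk[...]"  # full novel text in the real data
text = f"{title}\n\nSkrevet af {author}\nPubliceret {year} af {publisher}\n ------- \n\n{body}"
```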
data/memo/memo.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f818bfe699c81cdb61ffdf260d8d3407118d295e61f596b1e28d100b6c5067d
-size 17799996
+oid sha256:44002e00b3e876bb6ebd70949723a08310bb022e4e91502c5ec7a64efb6d4706
+size 202092223
descriptive_stats.json CHANGED
@@ -1,6 +1,6 @@
 {
-    "number_of_samples": 891075,
-    "average_document_length": 14755.728222652415,
-    "number_of_tokens": 4264549097,
-    "revision": "1b6610f644fd8ab48414ddb77c5e1cbb53e64b04"
+    "number_of_samples": 891094,
+    "average_document_length": 15086.31267857263,
+    "number_of_tokens": 4369008328,
+    "revision": "9e941ac4e56b7b77ec77b84a3a40130ba78a9511"
 }
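
Since MeMo is the only source that changed in this commit, the corpus-level deltas should be fully explained by the MeMo deltas in `data/memo/descriptive_stats.json`. A quick check confirms they match exactly:

```python
# Corpus-level deltas (this file) vs. MeMo deltas (data/memo/descriptive_stats.json).
assert 891_094 - 891_075 == 858 - 839  # +19 samples
assert 4_369_008_328 - 4_264_549_097 == 113_742_425 - 9_283_194  # +104,459,231 tokens
```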
images/dist_document_length.png CHANGED
Git LFS Details (before)
  • SHA256: 1f3eab570f183f092c1bbcb52292b149c8042cb3ef2b49c234c388da43b5dcd1
  • Pointer size: 132 Bytes
  • Size of remote file: 1.89 MB
Git LFS Details (after)
  • SHA256: 2dbdb1d263165561e626a37174aa91e94507d2679dc8e59bf4656b11666bb7df
  • Pointer size: 132 Bytes
  • Size of remote file: 1.88 MB
images/domain_distribution.png CHANGED
Git LFS Details (before)
  • SHA256: ade8d00943fd9b123bdcf45095bafbf27c35976448d5d374b5bf8a5aae4e92cb
  • Pointer size: 131 Bytes
  • Size of remote file: 326 kB
Git LFS Details (after)
  • SHA256: 5e3bb3991b3ce3f55b60af0fc6bfde5a45017e453e2dd1bfb382711e44596ab1
  • Pointer size: 131 Bytes
  • Size of remote file: 331 kB
pyproject.toml CHANGED
@@ -1,6 +1,6 @@
 [project]
 name = "dynaword"
-version = "1.1.1"
+version = "1.2.0"
 description = "project code for the danish dynaword project"
 readme = "README.md"
 requires-python = ">=3.12,<3.13" # 3.13 has issues with spacy and pytorch
src/tests/test_quality/test_duplicates.py CHANGED
@@ -6,6 +6,7 @@ from datasets import Dataset, load_dataset
 from dynaword.paths import repo_path
 from ..conftest import DATASET_NAMES
 
+
 @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
 def test_no_within_data_duplicates(dataset_name: str):
     ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
test_results.log CHANGED
@@ -2,15 +2,24 @@
 platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
 rootdir: /Users/au561649/Github/danish-dynaword
 configfile: pyproject.toml
-collected 276 items
+plugins: anyio-4.9.0
+collected 310 items
 
-src/tests/test_dataset_schema.py ....................................... [ 14%]
-............................. [ 24%]
-src/tests/test_datasheets.py ........................................... [ 40%]
-........................................................................ [ 66%]
-....................................................... [ 86%]
-src/tests/test_duplicates.py ..................................s [ 98%]
-src/tests/test_load.py .. [ 99%]
+src/tests/test_dataset_schema.py ....................................... [ 12%]
+............................. [ 21%]
+src/tests/test_datasheets.py ........................................... [ 35%]
+........................................................................ [ 59%]
+....................................................... [ 76%]
+src/tests/test_load.py .. [ 77%]
+src/tests/test_quality/test_duplicates.py .............................. [ 87%]
+....s [ 88%]
+src/tests/test_quality/test_short_texts.py ............................. [ 98%]
+..... [ 99%]
 src/tests/test_unique_ids.py . [100%]
 
-======================= 275 passed, 1 skipped in 54.24s ========================
+=============================== warnings summary ===============================
+src/tests/test_quality/test_short_texts.py: 34 warnings
+  /Users/au561649/Github/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
+
+-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
+============ 309 passed, 1 skipped, 34 warnings in 77.84s (0:01:17) ============
uv.lock CHANGED
The diff for this file is too large to render. See raw diff