Alaamer committed
Commit 1941e45 · verified · 1 Parent(s): 219a2a4

Upload .huggingface/README.md with huggingface_hub

Files changed (1): .huggingface/README.md (+98 -0)

.huggingface/README.md ADDED

---
language:
- en
license: mit
annotations_creators:
- no-annotation
language_creators:
- found
pretty_name: Medium Articles Dataset
size_categories:
- n>1K
source_datasets:
- original
task_categories:
- text-classification
- text-generation
task_ids:
- topic-classification
- language-modeling
tags:
- medium
- articles
- blog-posts
dataset_info:
  features:
  - name: text
    dtype: string
  - name: title
    dtype: string
  - name: url
    dtype: string
---

# Medium Articles Dataset

## Dataset Description

### Dataset Summary

This dataset is a comprehensive collection of Medium articles, combining and normalizing data from multiple sources on both Kaggle and Hugging Face. A key feature is that all entries in the `text` column are unique; there are no duplicate articles in the final dataset.

### Languages

The dataset primarily contains articles in English.

### Dataset Structure

The dataset is provided in Parquet format, with unique entries in the `text` column.
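
For illustration, the Parquet files can be loaded directly with the `datasets` library. This is a minimal sketch: the repository id is taken from the citation URL below, and the `train` split name is an assumption.

```python
from datasets import load_dataset

# Load the combined Medium articles dataset from the Hugging Face Hub.
# Repository id from the citation URL; the "train" split name is assumed.
ds = load_dataset("Alaamer/medium-articles-posts-with-content", split="train")

print(ds)              # expected features: text, title, url
print(ds[0]["title"])  # title of the first article (may be empty if the source lacked one)
```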

### Data Fields

- `text`: The main content of the article (unique across the dataset)
- `title`: The title of the article (if available in the source dataset)
- `url`: URL of the original article (if available in the source dataset)
- Additional fields may vary depending on the source dataset
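
Since `title` and `url` are only populated when the source dataset provided them, it can help to filter on them explicitly. A small illustrative sketch, assuming missing values surface as `None` or empty strings:

```python
from datasets import load_dataset

ds = load_dataset("Alaamer/medium-articles-posts-with-content", split="train")

# Keep only rows that carry both a usable title and a url.
# Assumption: missing values appear as None or empty strings.
def has_title_and_url(example):
    return bool(example.get("title")) and bool(example.get("url"))

titled = ds.filter(has_title_and_url)
print(f"{len(titled)} of {len(ds)} articles have both a title and a url")
```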

### Dataset Creation

This dataset was created by combining and normalizing multiple existing datasets from Kaggle and Hugging Face. The process includes (a minimal sketch follows the list):
1. Downloading source datasets
2. Normalizing data format
3. Removing duplicate articles based on text content
4. Handling missing values
5. Converting to Parquet format
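
A minimal pandas sketch of steps 2-5, for illustration only; the in-memory frames stand in for the downloaded sources, and the real build script, column names, and file formats may differ:

```python
import pandas as pd

# Illustrative stand-ins for two downloaded source tables (the real inputs are
# the Kaggle / Hugging Face datasets listed under "Source Data").
source_a = pd.DataFrame({"Text": ["article one", "article two"],
                         "Title": ["One", "Two"]})
source_b = pd.DataFrame({"text": ["article two", "article three"],
                         "url": ["https://medium.com/some-article", None]})

# Step 2: normalize every source to the shared schema (text, title, url).
normalized = [
    df.rename(columns=str.lower).reindex(columns=["text", "title", "url"])
    for df in (source_a, source_b)
]
combined = pd.concat(normalized, ignore_index=True)

# Step 3: remove duplicate articles based on the text content (keep first occurrence).
combined = combined.drop_duplicates(subset="text")

# Step 4: handle missing values -- require text, keep optional title/url as-is.
combined = combined.dropna(subset=["text"])

# Step 5: convert to Parquet (requires pyarrow or fastparquet).
combined.to_parquet("medium_articles.parquet", index=False)
print(combined)
```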

### Source Data

#### Kaggle Sources:
- aiswaryaramachandran/medium-articles-with-content
- hsankesara/medium-articles
- meruvulikith/1300-towards-datascience-medium-articles-dataset

#### Hugging Face Sources:
- fabiochiu/medium-articles
- Falah/medium_articles_posts
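
For reference, a hedged sketch of how these sources could be pulled programmatically. The `kagglehub` call downloads the dataset files to a local cache path; the exact `load_dataset` arguments (configs, `data_files`, split names) vary per source and are assumptions here:

```python
import kagglehub
from datasets import load_dataset

# Kaggle source: kagglehub downloads the dataset files and returns the local path.
kaggle_path = kagglehub.dataset_download("hsankesara/medium-articles")
print("Kaggle files downloaded to:", kaggle_path)

# Hugging Face source: loaded straight from the Hub.
# Some repos may need an explicit config or data_files argument.
falah = load_dataset("Falah/medium_articles_posts", split="train")
print(falah)
```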

### Licensing Information

This dataset is released under the MIT License.

### Citation Information

If you use this dataset in your research, please cite:

```bibtex
@dataset{medium_articles_2025,
  author       = {Alaamer},
  title        = {Medium Articles Dataset},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Data Repository},
  howpublished = {\url{https://huggingface.co/datasets/Alaamer/medium-articles-posts-with-content}}
}
```

### Contributions

Thanks to all the original dataset creators and contributors. Contributions are welcome via pull requests.