---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: mit
multilinguality:
- monolingual
pretty_name: Medium Articles Dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- medium
- articles
- blog-posts
task_categories:
- text-classification
- text-generation
task_ids:
- topic-classification
- language-modeling
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: audioVersionDurationSec
    dtype: float64
  - name: codeBlock
    dtype: string
  - name: codeBlockCount
    dtype: float64
  - name: collectionId
    dtype: string
  - name: createdDate
    dtype: string
  - name: createdDatetime
    dtype: string
  - name: firstPublishedDate
    dtype: string
  - name: firstPublishedDatetime
    dtype: string
  - name: imageCount
    dtype: float64
  - name: isSubscriptionLocked
    dtype: bool
  - name: language
    dtype: string
  - name: latestPublishedDate
    dtype: string
  - name: latestPublishedDatetime
    dtype: string
  - name: linksCount
    dtype: float64
  - name: postId
    dtype: string
  - name: readingTime
    dtype: float64
  - name: recommends
    dtype: float64
  - name: responsesCreatedCount
    dtype: float64
  - name: socialRecommendsCount
    dtype: float64
  - name: subTitle
    dtype: string
  - name: tagsCount
    dtype: float64
  - name: text
    dtype: string
  - name: title
    dtype: string
  - name: totalClapCount
    dtype: float64
  - name: uniqueSlug
    dtype: string
  - name: updatedDate
    dtype: string
  - name: updatedDatetime
    dtype: string
  - name: url
    dtype: string
  - name: vote
    dtype: bool
  - name: wordCount
    dtype: float64
  - name: publicationdescription
    dtype: string
  - name: publicationdomain
    dtype: string
  - name: publicationfacebookPageName
    dtype: string
  - name: publicationfollowerCount
    dtype: float64
  - name: publicationname
    dtype: string
  - name: publicationpublicEmail
    dtype: string
  - name: publicationslug
    dtype: string
  - name: publicationtags
    dtype: string
  - name: publicationtwitterUsername
    dtype: string
  - name: tag_name
    dtype: string
  - name: slug
    dtype: string
  - name: name
    dtype: string
  - name: postCount
    dtype: float64
  - name: author
    dtype: string
  - name: bio
    dtype: string
  - name: userId
    dtype: string
  - name: userName
    dtype: string
  - name: usersFollowedByCount
    dtype: float64
  - name: usersFollowedCount
    dtype: float64
  - name: scrappedDate
    dtype: float64
  - name: claps
    dtype: string
  - name: reading_time
    dtype: float64
  - name: link
    dtype: string
  - name: authors
    dtype: string
  - name: timestamp
    dtype: string
  - name: tags
    dtype: string
  splits:
  - name: train
    num_bytes: 2654611084
    num_examples: 444593
  download_size: 1482558340
  dataset_size: 2654611084
---

# Medium Articles Dataset Generator

This project combines multiple datasets from Kaggle and Hugging Face to create a comprehensive collection of Medium articles. The combined dataset is available on the Hugging Face Hub.
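
To use the published dataset directly, it can be loaded with the `datasets` library. A minimal sketch; the repository id below is a placeholder, substitute the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-username/medium-articles-combined", split="train")

print(ds)              # column names and number of rows
print(ds[0]["title"])  # title of the first article
```
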
## Dataset Description

This dataset is a unique compilation that not only combines multiple sources but also ensures data quality through normalization and deduplication. A key feature is that all entries in the `text` column are unique: there are no duplicate articles in the final dataset.
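
The uniqueness claim can be checked directly. A small sketch with a placeholder repository id; note that it pulls every article text into memory.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-username/medium-articles-combined", split="train")

# Equal counts of texts and distinct texts means there are no duplicate articles.
texts = ds["text"]
assert len(texts) == len(set(texts)), "duplicate article texts found"
print(f"{len(texts):,} articles, all unique")
```
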
### Data Sources

**Kaggle sources:**

- aiswaryaramachandran/medium-articles-with-content
- hsankesara/medium-articles
- meruvulikith/1300-towards-datascience-medium-articles-dataset

**Hugging Face sources:**

- fabiochiu/medium-articles
- Falah/medium_articles_posts
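
As a rough sketch of how these sources can be pulled, using one example handle from each list; the Hugging Face split name and the availability of configured Kaggle API credentials are assumptions.

```python
import kagglehub
from datasets import load_dataset

# Kaggle sources: kagglehub downloads the dataset files and returns a local directory
# (requires Kaggle API credentials to be configured).
kaggle_dir = kagglehub.dataset_download("aiswaryaramachandran/medium-articles-with-content")
print("Kaggle files in:", kaggle_dir)

# Hugging Face sources are loaded directly from the Hub.
hf_ds = load_dataset("Falah/medium_articles_posts", split="train")
print(hf_ds)
```
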
## Features

- Combines multiple data sources into a single, unified dataset
- Ensures uniqueness: each article appears only once in the dataset
- Quality control (see the sketch after this list):
  - Removes duplicate entries based on article text
  - Handles missing values
  - Normalizes the data format
- Saves the final dataset in the efficient Parquet format
- Publishes the dataset to the Hugging Face Hub
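
A minimal sketch of the quality-control step using pandas; the column-name mapping is an assumed example, since the actual script has to reconcile a different set of column names per source.

```python
import pandas as pd

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize one source DataFrame (assumed column mapping, for illustration only)."""
    df = df.rename(columns={"content": "text"})  # standardize column names
    df = df.dropna(subset=["text"])              # handle missing values
    return df.drop_duplicates(subset=["text"])   # remove duplicates based on article text

# Tiny illustrative input: the row with missing text and the duplicate row are dropped.
raw = pd.DataFrame({"content": ["hello", "hello", None], "title": ["a", "b", "c"]})
print(normalize(raw))
```
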
## Requirements

```bash
pip install datasets
pip install kagglehub huggingface_hub tqdm
```
## Usage

- Set up your Hugging Face authentication token (see the example below)
- Run the script:

  ```bash
  python combined_medium_ds_generator.py
  ```
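
For the authentication step, either run `huggingface-cli login` once, or log in programmatically before running the script:

```python
from huggingface_hub import login

# A write-enabled access token is needed to publish the dataset;
# create one at https://huggingface.co/settings/tokens.
login(token="hf_...")  # token value elided -- substitute your own
```
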
## Data Processing Steps

- Downloads datasets from Kaggle and Hugging Face
- Normalizes each dataset by:
  - Removing null values
  - Eliminating duplicates
  - Standardizing column names
- Combines all datasets into a single DataFrame
- Saves the result as a Parquet file
- Uploads the final dataset to the Hugging Face Hub (see the sketch below)
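
A hedged sketch of the final combine-and-publish steps; the input DataFrames, file name, and repository id are placeholders, not the script's actual values.

```python
import pandas as pd
from huggingface_hub import HfApi

# Placeholder inputs: in the real script these are the normalized per-source DataFrames.
frames = [
    pd.DataFrame({"title": ["A"], "text": ["first article"]}),
    pd.DataFrame({"title": ["B"], "text": ["second article"]}),
]

# Combine all sources and apply a final cross-source deduplication on the text column.
combined = pd.concat(frames, ignore_index=True)
combined = combined.drop_duplicates(subset=["text"])

# Save the result in Parquet format.
combined.to_parquet("train.parquet", index=False)

# Upload the Parquet file into the dataset repository on the Hub (placeholder repo id).
api = HfApi()
api.upload_file(
    path_or_fileobj="train.parquet",
    path_in_repo="data/train-00000-of-00001.parquet",
    repo_id="your-username/medium-articles-combined",
    repo_type="dataset",
)
```
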
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Author

## Acknowledgments

Special thanks to the original dataset creators:
- aiswaryaramachandran
- hsankesara
- meruvulikith
- fabiochiu
- Falah