dipteshkanojia committed on commit 74376e5 · 1 Parent(s): 53c371d
Files changed (1):
  1. README.md +123 -0

README.md CHANGED
@@ -12,6 +12,129 @@ We provide two variants of our dataset - Filtered and Unfiltered. They are descr
 
 2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered), and a [CoNLL-format version is available here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
 
+ annotations_creators:
+ - other
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: acronym-identification
+ pretty_name: 'PLOD: An Abbreviation Detection Dataset'
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+
+ # Dataset Card for PLOD-filtered
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Needs More Information]
+ - **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
+ - **Paper:** XX
+ - **Leaderboard:** YY
+ - **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)
+
+ ### Dataset Summary
+
+ The PLOD dataset is an English-language dataset of abbreviations and their long forms tagged in text. It was collected for research from PLOS journal articles, which index abbreviations and their long forms in the text, and was created to support the natural language processing task of abbreviation detection in the scientific domain.
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset primarily supports the abbreviation detection task. It has also been evaluated on the train+dev split provided by the Acronym Detection Shared Task, organized as part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
+
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained with the spaCy POS tagger, and a set of `ner_tags`, which are limited to `AC` for acronyms and `LF` for long forms.
+
+ An example from the dataset:
+ {'id': '1',
+ 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
+ 'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
+ 'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
+ }
+
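+ Loading the data with Huggingface Datasets is straightforward. A minimal sketch, assuming the Hub id `surrey-nlp/PLOD-filtered` for this variant (the unfiltered variant named above is `surrey-nlp/PLOD-unfiltered`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Hub id assumed from this card's title.
+ dataset = load_dataset("surrey-nlp/PLOD-filtered")
+
+ # Each example holds parallel lists of tokens and integer-encoded tag ids.
+ example = dataset["train"][0]
+ print(example["tokens"][:10])
+ print(example["ner_tags"][:10])
+ ```
+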
+ ### Data Fields
+
+ - id: the row identifier for the data point.
+ - tokens: the tokens contained in the text.
+ - pos_tags: the part-of-speech tag for each corresponding token, obtained with the spaCy POS tagger.
+ - ner_tags: the tags for abbreviations and long forms, stored as integer ids (see the sketch after this list for decoding them to label names).
+
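+ A minimal sketch for decoding the integer tag ids back to label strings, assuming both tag fields are `ClassLabel`-encoded (the usual convention for Huggingface token-classification datasets):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("surrey-nlp/PLOD-filtered")
+
+ # Sequence(ClassLabel) features expose the label vocabulary via .feature.names.
+ ner_labels = dataset["train"].features["ner_tags"].feature.names
+
+ # Decode the integer ner_tags of the first example into label strings.
+ example = dataset["train"][0]
+ print([ner_labels[tag] for tag in example["ner_tags"]])
+ ```
+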
+
+ ### Data Splits
+
+ | Variant    | Train  | Valid | Test  |
+ | ---------- | ------ | ----- | ----- |
+ | Filtered   | 112652 | 24140 | 24140 |
+ | Unfiltered | 113860 | 24399 | 24399 |
+
+
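+ Individual splits can also be requested directly at load time; a small sketch, again assuming the `surrey-nlp/PLOD-filtered` Hub id:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load only the training split; the validation and test splits work the same way.
+ train = load_dataset("surrey-nlp/PLOD-filtered", split="train")
+ print(train.num_rows)  # expected to match the Train column in the table above
+ ```
+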
+ ## Dataset Creation
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data were extracted from PLOS journals online, then tokenized and normalized.
+
+ #### Who are the source language producers?
+
+ PLOS journals
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,
+ Diptesh Kanojia, and Constantin Orasan.
+
+ ### Licensing Information
+
+ CC-BY-SA 4.0
+
+ ### Citation Information
+
+ [Needs More Information]
+
 ### Installation
 
 We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training with any of the pre-trained language models available in the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
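
A minimal sketch of applying such a trained spaCy transformers NER pipeline for abbreviation detection; the model path below is a placeholder for illustration, not a released artifact:

```python
import spacy  # requires the spacy and spacy-transformers packages

# Placeholder path; substitute the actual trained abbreviation-detection pipeline.
nlp = spacy.load("path/to/plod_abbreviation_ner")

doc = nlp("Risk ratios (RRs) were estimated using inverse probability of treatment weights (IPTW).")
for ent in doc.ents:
    # The PLOD tag set distinguishes acronyms (AC) from long forms (LF).
    print(ent.text, "->", ent.label_)
```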