<p align="center"><img src="imgs/plod.png" alt="logo" width="50" height="84"/></p>

# PLOD: An Abbreviation Detection Dataset  

This is the repository for the PLOD dataset, submitted to LREC 2022. The dataset can be used to build sequence-labelling models for the task of Abbreviation Detection.

### Dataset

We provide two variants of our dataset: Filtered and Unfiltered. Both are described in our paper. A minimal loading example follows the list below.

1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered), and a [CoNLL-format version is available here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>

2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered), and a [CoNLL-format version is available here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
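
Here is a minimal sketch of loading the Filtered variant with the `datasets` library; the split name is an assumption, so check the dataset card for the exact schema:

```python
from datasets import load_dataset

# Load the Filtered variant from the HuggingFace Hub
# (use "surrey-nlp/PLOD-unfiltered" for the Unfiltered variant).
plod = load_dataset("surrey-nlp/PLOD-filtered")

print(plod)              # available splits and columns
print(plod["train"][0])  # first example; a "train" split is assumed here
```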

### Installation

We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training with any pre-trained language model available on the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions on these websites to set up your own custom training with our dataset; a data-conversion sketch is given below.
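
As a starting point, the sketch below converts the dataset into spaCy's binary training format. It assumes the dataset exposes `tokens` and `ner_tags` columns holding IOB-style class labels; verify the column and label names against the dataset card before relying on it.

```python
import spacy
from spacy.tokens import Doc, DocBin
from spacy.training import iob_to_biluo, biluo_tags_to_spans
from datasets import load_dataset

# Assumed schema: "tokens" (list of strings) and "ner_tags" (class-label ids).
ds = load_dataset("surrey-nlp/PLOD-filtered", split="train")
label_names = ds.features["ner_tags"].feature.names

nlp = spacy.blank("en")
doc_bin = DocBin()
for example in ds:
    doc = Doc(nlp.vocab, words=example["tokens"])
    iob_tags = [label_names[i] for i in example["ner_tags"]]
    # Convert IOB tags into entity spans that spaCy can train on.
    doc.ents = biluo_tags_to_spans(doc, iob_to_biluo(iob_tags))
    doc_bin.add(doc)

doc_bin.to_disk("./train.spacy")  # reference this file from the training config
```

The resulting `.spacy` file can then be referenced from a transformer-based training config and passed to the standard `spacy train` CLI.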

### Model(s)

The working model is available [here](https://huggingface.co/surrey-nlp/en_abbreviation_detection_roberta_lar).<br/>
On the page linked above, the model can be tried with the Inference API directly in the web browser; we have placed some examples there for testing. The API can also be called programmatically, as sketched below.<br/>
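
A minimal sketch of calling the hosted model over HTTP, assuming the standard Inference API endpoint applies to this model (the token is a placeholder, and the exact response format may differ):

```python
import requests

# Standard HuggingFace Inference API endpoint pattern (an assumption for this model).
API_URL = "https://api-inference.huggingface.co/models/surrey-nlp/en_abbreviation_detection_roberta_lar"
headers = {"Authorization": "Bearer <YOUR_HF_API_TOKEN>"}  # placeholder token

payload = {"inputs": "We trained a convolutional neural network (CNN) on the corpus."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # predicted abbreviation / long-form spans
```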

#### Usage (in Python)

The HuggingFace model page linked above contains the instructions for using this model locally in Python; a rough sketch is shown below.
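
Assuming the pipeline has been installed locally per those instructions (the `en_` prefix suggests a packaged spaCy pipeline), usage would look roughly like this:

```python
import spacy

# Assumes the pipeline package was installed as described on the model card.
nlp = spacy.load("en_abbreviation_detection_roberta_lar")

doc = nlp("We used a convolutional neural network (CNN) for classification.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # detected spans and their labels
```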