---
license: mit
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 1M<n<10M
annotations_creators:
  - no-annotation
multilinguality:
  - monolingual
pretty_name: UTCD
---

# Universal Text Classification Dataset (UTCD)

## Load dataset

```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain')
```
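
The returned `DatasetDict` holds the train and test splits, which can be inspected directly; a minimal sketch:

```python
from datasets import load_dataset

# Load the in-domain configuration; it provides 'train' and 'test' splits.
dataset = load_dataset('claritylab/utcd', name='in-domain')

# Inspect split sizes and a single example.
print({split: len(ds) for split, ds in dataset.items()})
print(dataset['train'][0])
```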

## Description

UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.

UTCD was introduced in the Findings of ACL'23 paper *Label Agnostic Pre-training for Zero-shot Text Classification* by Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars. Project Homepage.

## UTCD Datasets & Principles

In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:

- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present in the datasets across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, for each of the compiled datasets, labels are standardized such that they are descriptive of the text in natural language.
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, Legal, etc., each comprising sequences of varied length (long and short). The datasets are listed below.

## Structure

### Data Samples

Each dataset sample contains the text, the label encoded as an integer, and the dataset name encoded as an integer.

```python
{
    'text': "My favourite food is anything I didn't have to cook myself.",
    'labels': [215],
    'dataset_name': 0
}
```
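
The integer values can be mapped back to their textual names via the dataset's feature metadata. A minimal sketch, assuming `labels` is a sequence of `ClassLabel` features and `dataset_name` is a `ClassLabel` (check `dataset['train'].features` to confirm):

```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain')
features = dataset['train'].features
sample = dataset['train'][0]

# Assumption: 'labels' is a Sequence of ClassLabel, so .feature.int2str maps ids back to label text.
label_names = [features['labels'].feature.int2str(i) for i in sample['labels']]

# Assumption: 'dataset_name' is a ClassLabel, so .int2str recovers the source dataset's name.
source = features['dataset_name'].int2str(sample['dataset_name'])

print(sample['text'], label_names, source)
```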

### Datasets Contained

The UTCD dataset contains 18 datasets, 9 in-domain and 9 out-of-domain, spanning 3 aspects: sentiment, intent and topic.

Below are statistics on the datasets.

#### In-Domain Datasets

| Dataset | Aspect | #Samples in Train/Test | #Labels | Average #Tokens in Text (Train/Test) |
| --- | --- | --- | --- | --- |
| GoEmotions | sentiment | 43K/5.4K | 28 | 12/12 |
| TweetEval | sentiment | 45K/12K | 3 | 19/14 |
| Emotion | sentiment | 16K/2K | 6 | 17/17 |
| SGD | intent | 16K/4.2K | 26 | 8/9 |
| Clinc-150 | intent | 15K/4.5K | 150 | 8/8 |
| SLURP | intent | 12K/2.6K | 75 | 7/7 |
| AG News | topic | 120K/7.6K | 4 | 38/37 |
| DBpedia | topic | 560K/70K | 14 | 45/45 |
| Yahoo | topic | 1.4M/60K | 10 | 10/10 |

#### Out-of-Domain Datasets

| Dataset | Aspect | #Samples in Train/Test | #Labels | Average #Tokens in Text (Train/Test) |
| --- | --- | --- | --- | --- |
| Amazon Polarity | sentiment | 3.6M/400K | 2 | 71/71 |
| Financial Phrase Bank | sentiment | 1.8K/453 | 3 | 19/19 |
| Yelp | sentiment | 650K/50K | 3 | 128/128 |
| Banking77 | intent | 10K/3.1K | 77 | 11/10 |
| SNIPS | intent | 14K/697 | 7 | 8/8 |
| NLU Eval | intent | 21K/5.2K | 68 | 7/7 |
| MultiEURLEX | topic | 55K/5K | 21 | 1198/1853 |
| Big Patent | topic | 25K/5K | 9 | 2872/2892 |
| Consumer Finance | topic | 630K/160K | 18 | 190/189 |

## Configurations

The in-domain and out-of-domain configurations have 2 splits: train and test.

The aspect-normalized configurations (aspect-normalized-in-domain, aspect-normalized-out-of-domain) have 3 splits: train, validation and test.
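
Each configuration is selected through the `name` argument of `load_dataset`; a minimal sketch using the configuration names above (the out-of-domain name is assumed to mirror the `in-domain` pattern):

```python
from datasets import load_dataset

# Aspect-normalized in-domain configuration: 'train', 'validation' and 'test' splits.
norm_in = load_dataset('claritylab/utcd', name='aspect-normalized-in-domain')
print(norm_in)

# Assumption: the out-of-domain configuration is named 'out-of-domain', mirroring 'in-domain'.
ood = load_dataset('claritylab/utcd', name='out-of-domain')
print(ood)  # expected: 'train' and 'test' splits
```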

Below are statistics on the configuration splits.

### In-Domain Configuration

| Split | #Samples |
| --- | --- |
| Train | 2,192,703 |
| Test | 168,365 |

### Out-of-Domain Configuration

| Split | #Samples |
| --- | --- |
| Train | 4,996,673 |
| Test | 625,911 |

### Aspect-Normalized In-Domain Configuration

| Split | #Samples |
| --- | --- |
| Train | 115,127 |
| Validation | 12,806 |
| Test | 168,365 |

### Aspect-Normalized Out-of-Domain Configuration

| Split | #Samples |
| --- | --- |
| Train | 119,167 |
| Validation | 13,263 |
| Test | 625,911 |