---
license: cc
multilinguality: multilingual
task_categories:
  - multiple-choice
pretty_name: Tokenization Robustness
tags:
  - multilingual
  - tokenization
configs:
  - config_name: social_media_informal_text
    data_files:
      - split: tokenizer_robustness_social_media_informal_text
        path: >-
          social_media_informal_text/tokenizer_robustness_social_media_informal_text-*
      - split: dev
        path: social_media_informal_text/dev-*
  - config_name: temporal_expressions
    data_files:
      - split: tokenizer_robustness_temporal_expressions
        path: temporal_expressions/tokenizer_robustness_temporal_expressions-*
      - split: dev
        path: temporal_expressions/dev-*
dataset_info:
  - config_name: social_media_informal_text
    features:
      - name: question
        dtype: string
      - name: choices
        sequence: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: coding_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
      - name: __index_level_0__
        dtype: int64
    splits:
      - name: tokenizer_robustness_social_media_informal_text
        num_bytes: 13202
        num_examples: 59
      - name: dev
        num_bytes: 10168
        num_examples: 47
    download_size: 22875
    dataset_size: 23370
  - config_name: temporal_expressions
    features:
      - name: question
        dtype: string
      - name: choices
        sequence: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: coding_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
      - name: __index_level_0__
        dtype: int64
    splits:
      - name: tokenizer_robustness_temporal_expressions
        num_bytes: 4603
        num_examples: 21
      - name: dev
        num_bytes: 1306
        num_examples: 6
    download_size: 15229
    dataset_size: 5909
---

# Dataset Card for Tokenization Robustness

A comprehensive evaluation dataset for testing the robustness of language models to different tokenization strategies.

## Dataset Details

### Dataset Description

This dataset evaluates how robust language models are to different tokenization strategies and edge cases. It includes multiple-choice questions designed to test various aspects of tokenization handling.
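One class of edge case such evaluations probe is Unicode normalization: two strings that render identically can differ at the code-point level and therefore tokenize differently. A minimal illustration in plain Python (not drawn from the dataset itself):

```python
import unicodedata

# "café" in composed (NFC) and decomposed (NFD) form: visually identical,
# but the accented "é" is one code point in NFC and two in NFD.
nfc = unicodedata.normalize("NFC", "café")
nfd = unicodedata.normalize("NFD", "café")

print(nfc == nfd)          # False: different code-point sequences
print(len(nfc), len(nfd))  # 4 5

# Any tokenizer operating on raw code points (or bytes) may therefore
# emit different token sequences for these two renderings.
```

A model that answers correctly for one rendering but not the other is exhibiting exactly the kind of tokenization fragility this dataset targets.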

- **Curated by:** R3
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** Multilingual (see the per-item `lang`, `second_lang`, and `coding_lang` fields)
- **License:** cc

### Dataset Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

The dataset contains multiple-choice questions with associated metadata about tokenization types and categories. Two configurations are available, `social_media_informal_text` and `temporal_expressions`; each provides an evaluation split (`tokenizer_robustness_<config>`) and a small `dev` split.
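A hypothetical record sketching how the fields fit together (the values, and the assumption that `answer` is an index into `choices` with `answer_label` as its letter form, are illustrative inferences from the schema, not an actual dataset row):

```python
# Hypothetical example row following the features listed above; values invented.
record = {
    "question": "If today is Monday, 2024-01-01, which date is 'next Friday'?",
    "choices": ["2024-01-05", "2024-01-12", "2024-01-08", "2024-01-19"],
    "answer": 0,          # assumed: integer index into `choices`
    "answer_label": "A",  # assumed: letter form of the same index
    "lang": "en",
}

# Under these assumptions, the answer text and its letter label line up:
answer_text = record["choices"][record["answer"]]
assert answer_text == "2024-01-05"
assert chr(ord("A") + record["answer"]) == record["answer_label"]
```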

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

#### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

The dataset focuses primarily on English text and may not generalize to other languages or tokenization schemes not covered in the evaluation.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]