---
task_categories:
- question-answering
task_ids:
- extractive-qa
language:
- cs
size_categories:
- 1K<n<10K
---

# Dataset Card for Czech Simple Question Answering Dataset 2.0

This is a processed and filtered adaptation of an existing dataset. For the raw, larger dataset, see the `Dataset Source` section.

## Dataset Description
The data contains questions and answers based on Czech Wikipedia articles.
Each question has an answer (or several) and a selected part of the context as supporting evidence.
The majority of the answers are extractive, i.e. they are present in the context in exactly that form. The remaining cases are:

- yes/no questions
- answers that are present in the text in almost exact form, but with word forms changed to suit the question (declension, ...)
- answers given in the annotator's own words (this should be rare, but is not)

All questions in the dataset are answerable from the context. A small minority of questions have multiple answers.
Sometimes this means that any one of them is correct (e.g. both "Pacifik" and "Tichý oceán" are correct terms for the Pacific Ocean),
and sometimes it means that all of them together form the correct answer (e.g., Who was Leonardo da Vinci? ["painter", "engineer"]).

The total number of examples is approximately:

- 6,250 in train
- 570 in validation
- 850 in test
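
A minimal loading sketch using the 🤗 `datasets` library, assuming the splits are distributed as JSON files named `train.json`, `validation.json`, and `test.json` (the file names are illustrative assumptions, not confirmed by this card):

```python
from datasets import load_dataset

# Split file names below are illustrative assumptions; check the
# repository for the actual data files.
dataset = load_dataset(
    "json",
    data_files={
        "train": "train.json",            # ~6,250 examples
        "validation": "validation.json",  # ~570 examples
        "test": "test.json",              # ~850 examples
    },
)

example = dataset["train"][0]
print(example["question"], example["answers"])
```
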
## Dataset Features
Each example contains:
- `item_id`: string ID of the example
- `context`: a "reasonably" large chunk (string) of a Wikipedia article that contains the answer
- `question`: string
- `answers`: list of all answers (strings); mostly a list of length 1
- `evidence_text`: substring of the context (typically one sentence) that is sufficient to answer the question
- `evidence_start`: index into the context, such that `context[evidence_start:evidence_end] == evidence_text`
- `evidence_end`: index into the context
- `occurences`: list of dictionaries describing occurrences of the answer(s) in the evidence.
  Each answer was searched for in the evidence with word boundaries (`\b` in regex) and case-sensitively.
  If nothing was found, the search was retried case-insensitively;
  if still nothing, case-sensitively without word boundaries;
  and finally case-insensitively without word boundaries.
  This cascade should suppress "false positive" occurrences of the answer in the evidence (see the sketch below this list).
  - `start`: index into the context
  - `end`: index into the context
  - `text`: the answer that was searched for
- `url`: link to the Wikipedia article
- `original_article`: the original parsed Wikipedia article from which the context is taken
- `question_type`: type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']
- `answer_type`: type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']
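
The occurrence-search cascade described above can be sketched in Python roughly as follows. This is an illustrative reconstruction of the described procedure, not the exact script used to build the dataset; the function name `find_answer_occurrences` is ours:

```python
import re

def find_answer_occurrences(answer: str, evidence: str, evidence_start: int) -> list[dict]:
    """Sketch of the described cascade: strictest search first, then relax
    case sensitivity, then word boundaries. Offsets are shifted by
    `evidence_start` so that `start`/`end` index into the full context."""
    attempts = [
        (r"\b" + re.escape(answer) + r"\b", 0),              # word boundaries, case-sensitive
        (r"\b" + re.escape(answer) + r"\b", re.IGNORECASE),  # word boundaries, case-insensitive
        (re.escape(answer), 0),                              # no boundaries, case-sensitive
        (re.escape(answer), re.IGNORECASE),                  # no boundaries, case-insensitive
    ]
    for pattern, flags in attempts:
        matches = [
            {"start": evidence_start + m.start(),
             "end": evidence_start + m.end(),
             "text": answer}
            for m in re.finditer(pattern, evidence, flags)
        ]
        if matches:  # stop at the strictest level that produced a match
            return matches
    return []
```

Since `context[evidence_start:evidence_end] == evidence_text` holds for every example, the returned `start`/`end` offsets index directly into `context`.
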
## Dataset Source

The dataset is a preprocessed adaptation of the existing SQAD 3.0 dataset ([link to data](https://lindat.cz/repository/xmlui/handle/11234/1-3069)).
This adaptation contains (almost) the same data, converted to a more convenient format.
The data was also filtered to remove a statistical bias in which the answer was contained
in the first sentence of the article (around 50% of all data in the original dataset, likely
caused by the data collection process).

## Citation

Please cite the authors of the [original dataset](https://lindat.cz/repository/xmlui/handle/11234/1-3069):

```bibtex
@misc{11234/1-3069,
  title = {sqad 3.0},
  author = {Medve{\v d}, Marek and Hor{\'a}k, Ale{\v s}},
  url = {http://hdl.handle.net/11234/1-3069},
  note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
  copyright = {{GNU} Library or "Lesser" General Public License 3.0 ({LGPL}-3.0)},
  year = {2019}
}
```