---
license: apache-2.0
dataset_info:
- config_name: lcsalign-en
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 305023
    num_examples: 2507
  - name: train
    num_bytes: 455104487
    num_examples: 4200000
  - name: valid
    num_bytes: 21217
    num_examples: 280
  download_size: 318440274
  dataset_size: 455430727
- config_name: lcsalign-hi
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 770118
    num_examples: 2507
  - name: train
    num_bytes: 1084853757
    num_examples: 4200000
  - name: valid
    num_bytes: 45670
    num_examples: 280
  download_size: 470820787
  dataset_size: 1085669545
- config_name: lcsalign-hicm
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 561442
    num_examples: 2507
  - name: train
    num_bytes: 872213032
    num_examples: 4200000
  - name: valid
    num_bytes: 34530
    num_examples: 280
  download_size: 455501891
  dataset_size: 872809004
- config_name: lcsalign-hicmdvg
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 798126
    num_examples: 2507
  - name: train
    num_bytes: 1104443176
    num_examples: 4200000
  - name: valid
    num_bytes: 47513
    num_examples: 280
  download_size: 491775164
  dataset_size: 1105288815
- config_name: lcsalign-hicmrom
  features:
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 338176
    num_examples: 2507
  - name: train
    num_bytes: 467370942
    num_examples: 4200000
  - name: valid
    num_bytes: 20431
    num_examples: 280
  download_size: 337385029
  dataset_size: 467729549
- config_name: lcsalign-noisyhicmrom
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 462418855
    num_examples: 4200000
  - name: test
    num_bytes: 334401
    num_examples: 2507
  - name: valid
    num_bytes: 20246
    num_examples: 280
  download_size: 379419827
  dataset_size: 462773502
configs:
- config_name: lcsalign-en
  data_files:
  - split: test
    path: lcsalign-en/test-*
  - split: train
    path: lcsalign-en/train-*
  - split: valid
    path: lcsalign-en/valid-*
- config_name: lcsalign-hi
  data_files:
  - split: test
    path: lcsalign-hi/test-*
  - split: train
    path: lcsalign-hi/train-*
  - split: valid
    path: lcsalign-hi/valid-*
- config_name: lcsalign-hicm
  data_files:
  - split: test
    path: lcsalign-hicm/test-*
  - split: train
    path: lcsalign-hicm/train-*
  - split: valid
    path: lcsalign-hicm/valid-*
- config_name: lcsalign-hicmdvg
  data_files:
  - split: test
    path: lcsalign-hicmdvg/test-*
  - split: train
    path: lcsalign-hicmdvg/train-*
  - split: valid
    path: lcsalign-hicmdvg/valid-*
- config_name: lcsalign-hicmrom
  data_files:
  - split: test
    path: lcsalign-hicmrom/test-*
  - split: train
    path: lcsalign-hicmrom/train-*
  - split: valid
    path: lcsalign-hicmrom/valid-*
- config_name: lcsalign-noisyhicmrom
  data_files:
  - split: train
    path: lcsalign-noisyhicmrom/train-*
  - split: test
    path: lcsalign-noisyhicmrom/test-*
  - split: valid
    path: lcsalign-noisyhicmrom/valid-*
task_categories:
- translation
language:
- hi
- en
tags:
- codemix
- indicnlp
- hindi
- english
- multilingual
pretty_name: Hindi-English Codemix Datasets
size_categories:
- 1M<n<10M
---
# Dataset Card for Hindi English Codemix Dataset - HINMIX

**HINMIX is a massive parallel codemixed dataset for Hindi-English code switching.**

See the [📚 paper on arXiv](https://arxiv.org/abs/2403.16771) for a deep dive into the synthetic codemix data-generation pipeline.
The dataset contains 4.2M fully parallel sentences in six Hindi-English forms.

We also release a gold-standard codemix dev and test set, manually translated by proficient bilingual annotators.
- The dev set consists of 280 examples
- The test set consists of 2,507 examples

To load the dataset:
```python
# pip install datasets
from datasets import load_dataset

# config options: lcsalign-en, lcsalign-hi, lcsalign-hicm,
# lcsalign-hicmdvg, lcsalign-hicmrom, lcsalign-noisyhicmrom
hinmix_ds = load_dataset("kartikagg98/HINMIX_hi-en", "lcsalign-hicmrom")
print([hinmix_ds[split][10]["text"] for split in ["train", "valid", "test"]])
```
Output:
```
['events hi samay men kahin south malabar men ghati hai.',
 'beherhaal, pulis ne body ko sector-16 ke hospital ki mortuary men rakhva diya hai.',
 'yah hamare country ke liye reality men mandatory thing hai.']
```

## Dataset Details

### Dataset Description

We construct a large synthetic Hinglish-English dataset by leveraging a bilingual Hindi-English corpus.
Splits: train, valid, test
Subsets: 
 - **Hi** - Hindi in Devanagari script (**Example**: *अमेरिकी लोग अब पहले जितनी गैस नहीं खरीदते।*)
 - **Hicm** - Hindi sentences with code-mixed words substituted in English (**Example**: *American people अब पहले जितनी gas नहीं खरीदते।*)
 - **Hicmrom** - Hicm with romanized Hindi words (**Example**: *American people ab pahle jitni gas nahin kharidte.*)
 - **Hicmdvg** - Hicm with English words transliterated into Devanagari (**Example**: *अमेरिकन पेओपल अब पहले जितनी गैस नहीं खरीदते।*)
 - **NoisyHicmrom** - Hicmrom with synthetic noise added to improve model robustness (**Example**: *Aerican people ab phle jtni gas nain khridte.*)
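
Since the six forms are fully parallel, any two configs can be zipped row-by-row into translation pairs. A minimal sketch, using a toy Hicmrom sentence and a hypothetical English counterpart in place of actually loaded splits:

```python
# The configs are row-aligned: example i in one form corresponds to
# example i in every other form, so source-target pairs are a simple zip.
def make_pairs(src_texts, tgt_texts):
    assert len(src_texts) == len(tgt_texts), "configs must be row-aligned"
    return list(zip(src_texts, tgt_texts))

# Toy stand-ins; with real data these would be, e.g.,
# load_dataset("kartikagg98/HINMIX_hi-en", "lcsalign-hicmrom")["test"]["text"]
hicmrom = ["American people ab pahle jitni gas nahin kharidte."]
en = ["American people no longer buy as much gas."]  # hypothetical English side
pairs = make_pairs(hicmrom, en)
```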


### Dataset Sources

- **Repository:** https://github.com/Kartikaggarwal98/Robust_Codemix_MT
- **Paper:** https://arxiv.org/abs/2403.16771

## Uses

The dataset can be used on its own to train machine translation models for code-mixed Hindi translation in any direction.
It can also be combined with other languages from the same language family to transfer code-mixing capabilities in a zero-shot manner. 
Zero-shot Bangla-English translation performed well without ever building a Bangla codemix corpus. 
An Indic multilingual model trained with this data as a subset can improve code-mixed translation by a significant margin.

### Source Data

The [IITB Parallel corpus](https://www.cfilt.iitb.ac.in/iitb_parallel/) is chosen as the base dataset to translate into codemix forms. 
The corpus contains widely diverse content from news articles, the judicial domain, Indian government websites, Wikipedia, book translations, etc.

#### Data Collection and Processing

1. Given a source-target sentence pair S || T, we generate the synthetic code-mixed data by substituting words in the matrix-language sentence with the corresponding words from the embedded-language sentence. 
Here, Hindi is the matrix language, which provides the syntactic and morphological structure of the CM sentence; English is the embedded language from which we borrow words.
1. Create an inclusion list of nouns, adjectives, and quantifiers that are candidates for substitution.
1. POS-tag the corpus using any tagger. We used [LTRC](http://ltrc.iiit.ac.in/analyzer/) for Hindi tagging.
1. Use fast-align to learn an alignment model between the parallel corpora (Hi-En). Once words are aligned, switch words from the English sentence into the Hindi sentence based on the inclusion list.
1. Use heuristics to replace n-gram words and create multiple codemix mappings of the same Hindi sentence.
1. Filter sentences using deterministic and perplexity metrics from a multilingual model such as XLM.
1. Add synthetic noise (omission, switching, typos, random replacement) to account for the noisy nature of code-mixed text.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61565c721b6f2789680793eb/KhhuM9Ze2-UrllHh6vRGL.png)
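
Step 4's substitution can be sketched with toy inputs: a hypothetical word alignment (the released pipeline learns one with fast-align) and a hypothetical inclusion list (the released pipeline derives it from POS tags):

```python
# Minimal sketch of alignment-based substitution. The Hindi (matrix
# language) sentence supplies the frame; aligned English (embedded
# language) words replace Hindi words whose English counterpart is in
# the inclusion list. Alignment and inclusion list here are toy inputs.
def codemix(hi_tokens, en_tokens, alignment, inclusion):
    """alignment maps Hindi token index -> English token index."""
    out = []
    for i, tok in enumerate(hi_tokens):
        en_idx = alignment.get(i)
        if en_idx is not None and en_tokens[en_idx].lower() in inclusion:
            out.append(en_tokens[en_idx])  # borrow the English word
        else:
            out.append(tok)                # keep the Hindi word
    return " ".join(out)

hi = "अमेरिकी लोग अब पहले जितनी गैस नहीं खरीदते।".split()
en = "American people no longer buy as much gas .".split()
alignment = {0: 0, 1: 1, 5: 7}             # अमेरिकी->American, लोग->people, गैस->gas
inclusion = {"american", "people", "gas"}  # hypothetical noun/adjective candidates
print(codemix(hi, en, alignment, inclusion))
# American people अब पहले जितनी gas नहीं खरीदते।
```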


### Recommendations

It's important to recognize that this work, conducted three years ago, utilized the state-of-the-art tools available at the time for each step of the pipeline. 
Consequently, the quality was inherently tied to the performance of these tools. Given the advancements in large language models (LLMs) today, there is potential to enhance the dataset. 
Implementing rigorous filtering processes, such as deduplication of similar sentences and removal of ungrammatical sentences, could significantly improve the training of high-quality models.
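
As a concrete example of such cleanup, exact-duplicate removal via normalized hashing can be sketched as follows (the normalization choice is an illustrative assumption; near-duplicate detection and grammaticality filtering would need heavier tooling):

```python
import hashlib

# Drop exact duplicates, treating sentences that differ only in case or
# whitespace as the same. Keeps the first occurrence of each sentence.
def dedup(sentences):
    seen, kept = set(), []
    for s in sentences:
        key = hashlib.md5(" ".join(s.lower().split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept
```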

## Citation Information

```
@misc{kartik2024synthetic,
      title={Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation}, 
      author={Kartik and Sanjana Soni and Anoop Kunchukuttan and Tanmoy Chakraborty and Md Shad Akhtar},
      year={2024},
      eprint={2403.16771},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Dataset Card Contact

kartik@ucsc.edu