---
dataset_info:
  features:
  - name: query
    dtype: string
  - name: positive
    dtype: string
  - name: negative1
    dtype: string
  - name: negative2
    dtype: string
  - name: negative3
    dtype: string
  - name: negative4
    dtype: string
  splits:
  - name: train
    num_bytes: 64433976
    num_examples: 12373
  download_size: 33216385
  dataset_size: 64433976
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- feature-extraction
- sentence-similarity
language:
- ar
size_categories:
- 10K<n<100K
---
# Arabic With Ranked Hard Negatives

## Dataset Summary

The **Arabic With Ranked Hard Negatives** dataset is derived from the Arabic subset of the [Mr. TyDi dataset](https://huggingface.co/datasets/castorini/mr-tydi). Using the Arabic embedding model [GATE](https://huggingface.co/Omartificial-Intelligence-Space/GATE-AraBert-v1), the original data is restructured so that each example contains a `query`, a `positive` passage, and the **top 4 hard negatives** for that query, ranked by similarity score. These hard negatives are the non-relevant passages most semantically similar to the positive passage, making the dataset a challenging resource for retrieval and re-ranking tasks.
This dataset is tailored for applications in retrieval model training, re-ranking, and contrastive learning where the presence of **hard negatives** can significantly improve the performance of machine learning models.

## Dataset Structure

The dataset contains the following fields:

- **query**: The user query string.
- **positive**: The passage relevant to the query.
- **negative1, negative2, negative3, negative4**: The four non-relevant passages most semantically similar to the positive.

### Example Data

All fields are plain strings, matching the declared schema:

```json
{
  "query": "ما هي نظرية الحقل الكمي؟",
  "positive": "بدأت نظرية الحقل الكمي بشكل طبيعي بدراسة التفاعلات الكهرومغناطيسية ...",
  "negative1": "تم تطوير النهج مؤخرًا ليشمل نسخة جبرية من الحقل الكمي ...",
  "negative2": "نظرية الحقول الكمومية لها تطبيقات واسعة تشمل العديد من العلوم الفيزيائية ...",
  "negative3": "النظرية الكهرومغناطيسية لها دور محوري في نظرية الحقول الكمومية ...",
  "negative4": "الحقل الكمي يستخدم الآن في الفيزياء النظرية وتطبيقات أخرى ..."
}
```

## Dataset Statistics

🔸 Number of rows: 12,373

🔸 Fields: 6 (`query`, `positive`, `negative1`–`negative4`)

Similarity ranges:

🔸 `negative1`: average similarity ≈ 0.70

🔸 `negative4`: average similarity ≈ 0.65

Language: Arabic (Modern Standard Arabic).

## Dataset Analysis and Insights

### 1. Average Similarity Across Negatives

🔸 The average similarity between the positive passage and the negatives decreases as the rank increases. The bar chart below visualizes the average similarity for the top 30 negatives in the original dataset; this version keeps the top 4.

![Gate-sim-results](https://i.ibb.co/7SKdT2F/Gate-sim-results.png)

### 2. Similarity Distributions

🔸 The similarity scores for each negative rank are distributed differently. The histograms below show the similarity distributions of the top 30 negatives, highlighting the scores for `negative1` through `negative4`.

![Gate-sim-results-dis](https://i.ibb.co/gTQD4GH/Gate-sim-result-dis.png)

### 3. Insights

The top-ranked negatives (`negative1` and `negative2`) are closest in similarity to the positive passage, making them the most challenging and ideal for training advanced retrieval models. Similarity drops slightly for `negative3` and `negative4`, but they remain hard negatives, offering diverse yet challenging non-relevant passages for contrastive learning.

## How to Use This Dataset

```python
from datasets import load_dataset

dataset = load_dataset('Omartificial-Intelligence-Space/Arabic-With-Ranked-Hard-Negatives')
dataset
```
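For training, each row can be expanded into (query, positive, negative) triplets. A minimal sketch in plain Python (the helper name and the placeholder row values are illustrative, not part of the dataset):

```python
def row_to_triplets(row):
    """Expand one dataset row into four (query, positive, negative) triplets."""
    return [(row["query"], row["positive"], row[f"negative{i}"]) for i in range(1, 5)]

# Toy row with the six string fields of this dataset (values are placeholders).
row = {
    "query": "ما هي نظرية الحقل الكمي؟",
    "positive": "passage relevant to the query",
    "negative1": "hard negative 1",
    "negative2": "hard negative 2",
    "negative3": "hard negative 3",
    "negative4": "hard negative 4",
}
triplets = row_to_triplets(row)
print(len(triplets))  # 4
```

Triplets in this form plug directly into triplet-loss or contrastive-loss training loops.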

## Recommended Applications
▪️ Training Retrieval Models: Use the triplet structure (query, positive, negative) to train retrieval models with loss functions like triplet loss or contrastive loss.

▪️ Fine-Tuning Re-Ranking Models: Use the ranked negatives to train models to rank positives above hard negatives.

▪️ Evaluation Benchmarks: Use the dataset as a benchmark to evaluate retrieval models’ ability to handle hard negatives.

## Dataset Creation Process

✔️ Original Data: The Arabic subset of the [Mr. TyDi dataset](https://huggingface.co/datasets/castorini/mr-tydi) was used as the foundation.

✔️ Embedding Model: The Arabic embedding model [GATE](https://huggingface.co/Omartificial-Intelligence-Space/GATE-AraBert-v1) was employed to calculate similarity scores between the positive passage and all candidate negatives.

✔️ Ranking Negatives: For each query, the negatives were ranked by descending similarity, and the top 4 were selected as hard negatives.

✔️ Filtering and Validation: The dataset was validated to ensure the semantic integrity of negatives.
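The ranking step above can be sketched as follows; a simplified illustration using cosine similarity over toy 2-D vectors (in practice the vectors come from GATE embeddings, and the function names here are hypothetical):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k_hard_negatives(pos_vec, candidates, k=4):
    """Rank candidate negatives by similarity to the positive; keep the top k.

    candidates: list of (text, vector) pairs.
    """
    ranked = sorted(candidates, key=lambda tv: cosine(pos_vec, tv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 2-D vectors standing in for embedding vectors (illustrative only).
pos = [1.0, 0.0]
cands = [("far", [0.0, 1.0]), ("close", [0.9, 0.1]), ("mid", [0.5, 0.5])]
print(top_k_hard_negatives(pos, cands, k=2))  # ['close', 'mid']
```

The same descending-similarity sort, applied with k=4 over the candidate pool for each query, yields `negative1` through `negative4`.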

## Limitations and Considerations

▪️ Domain-Specific Bias: The embedding model might favor specific domains, impacting the selection of negatives.

▪️ Similarity Metric: The dataset relies on the embedding model's similarity scores, which may not perfectly align with human judgment.

## Citation Information

If you use this dataset in your research, please cite the original Mr. TyDi paper and this dataset as follows:

```
@article{mrtydi,
      title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, 
      author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
      year={2021},
      journal={arXiv:2108.08787},
}

@dataset{Omartificial-Intelligence-Space,
      title={Arabic With Ranked Hard Negatives},
      author={Omer Nacar},
      year={2024},
      note={Hugging Face Dataset Repository}
}
```