## Dataset Structure

The dataset contains the following fields (see the loading sketch after the list):

- **query**: The user query string.
- **positive**: The relevant passage for the query.
- **negative1, negative2, negative3, negative4**: The top 4 passages that are semantically similar to the positive but not relevant to the query (hard negatives).
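
A minimal sketch of loading the dataset and inspecting these fields with the 🤗 `datasets` library; the repository ID below is a placeholder, not this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute this dataset's actual Hub path.
ds = load_dataset("username/arabic-triplets-hard-negatives", split="train")

# Each row exposes the six fields described above.
row = ds[0]
print(row["query"])
print(row["positive"])
for i in range(1, 5):
    print(f"negative{i}:", row[f"negative{i}"])
```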
### Example Data
## Dataset Statistics

🔸Number of rows: 362,000

🔸Fields: 6 (query, positive, 4 negatives)

Similarity Ranges:

🔸`negative1`: Average similarity: ~0.7

🔸`negative4`: Average similarity: ~0.65

(Since negatives are ranked by descending similarity, `negative2` and `negative3` fall between these two averages.)

Languages: Arabic (Modern Standard Arabic).
## Recommended Applications

▪️ Training Retrieval Models: Use the triplet structure (query, positive, negative) to train retrieval models with loss functions such as triplet loss or contrastive loss; a training sketch follows this list.

▪️ Fine-Tuning Re-Ranking Models: Use the ranked negatives to train models to rank positives above hard negatives.

▪️ Evaluation Benchmarks: Use the dataset as a benchmark to evaluate retrieval models’ ability to handle hard negatives.
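
A minimal triplet-training sketch using the `sentence-transformers` library; the base model and repository ID are placeholders (assumptions for illustration, not part of this dataset's release):

```python
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

# Placeholder base model and repository ID -- swap in your own choices.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
rows = load_dataset("username/arabic-triplets-hard-negatives", split="train")

# One (anchor, positive, negative) triplet per row, using the hardest negative.
train_examples = [
    InputExample(texts=[r["query"], r["positive"], r["negative1"]])
    for r in rows.select(range(10_000))  # small slice, for illustration only
]

train_loader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=100)
```

Common variants include emitting additional triplets from `negative2`–`negative4`, or training with `MultipleNegativesRankingLoss` over (query, positive) pairs.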
## Dataset Creation Process

✔️ Original Data: The Arabic subset of the [Mr. TyDi dataset](https://huggingface.co/datasets/castorini/mr-tydi) was used as the foundation.

✔️ Embedding Model: The Arabic embedding model [GATE](https://huggingface.co/Omartificial-Intelligence-Space/GATE-AraBert-v1) was employed to calculate similarity scores between the positive and all candidate negatives.

✔️ Ranking Negatives: For each query, the negatives were ranked by descending similarity, and the top 4 were selected as hard negatives (a scoring sketch follows this list).

✔️ Filtering and Validation: The dataset was validated to ensure the semantic integrity of the negatives.
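
A minimal sketch of the ranking step described above, assuming the GATE model loads as a `sentence-transformers` checkpoint; the passages are placeholders, and the exact preprocessing used to build the dataset is not specified here:

```python
from sentence_transformers import SentenceTransformer, util

# GATE model named in this card; loading it this way is an assumption.
model = SentenceTransformer("Omartificial-Intelligence-Space/GATE-AraBert-v1")

positive = "the relevant passage for the query"                 # placeholder
candidates = [f"candidate passage {i}" for i in range(1, 11)]   # placeholders

pos_emb = model.encode(positive, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity of every candidate to the positive, ranked descending;
# the four highest-scoring candidates become negative1..negative4.
scores = util.cos_sim(pos_emb, cand_embs)[0]
ranked = sorted(zip(candidates, scores.tolist()), key=lambda t: t[1], reverse=True)
hard_negatives = [text for text, _ in ranked[:4]]
```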
## Limitations and Considerations

▪️ Domain-Specific Bias: The embedding model might favor specific domains, impacting the selection of negatives.

▪️ Similarity Metric: The dataset relies on the embedding model's similarity scores, which may not perfectly align with human judgment.
### Citation Information