---
dataset_info:
  features:
    - name: query
      dtype: string
    - name: positive
      dtype: string
    - name: negative1
      dtype: string
    - name: negative2
      dtype: string
    - name: negative3
      dtype: string
    - name: negative4
      dtype: string
  splits:
    - name: train
      num_bytes: 64433976
      num_examples: 12373
  download_size: 33216385
  dataset_size: 64433976
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - feature-extraction
  - sentence-similarity
language:
  - ar
size_categories:
  - 10K<n<100K
---

Arabic With Ranked Hard Negatives

Dataset Summary

The Arabic Hard Negative Dataset is derived from the Arabic subset of the Mr. TyDi dataset. Using the GATE Arabic embedding model, it restructures the original data so that each example contains a query, a positive passage, and the top 4 hard negatives for that query, ranked by similarity score. These hard negatives are the non-relevant passages most semantically similar to the positive passage, which makes the dataset challenging for retrieval and re-ranking tasks. It is tailored for retrieval model training, re-ranking, and contrastive learning, where the presence of hard negatives can significantly improve model performance.

Dataset Structure

The dataset contains the following fields:

  • query: The user query string.

  • positive: The relevant passage for the query.

  • negative1, negative2, negative3, negative4: The top 4 semantically similar but non-relevant passages to the positive.

Example Data

(The nested "text" objects and similarity scores below illustrate the ranking step; in the released data each of the six fields is a plain string.)

{
  "query": "ما هي نظرية الحقل الكمي؟",
  "positive": {
    "text": "بدأت نظرية الحقل الكمي بشكل طبيعي بدراسة التفاعلات الكهرومغناطيسية ..."
  },
  "negative1": {
    "text": "تم تطوير النهج مؤخرًا ليشمل نسخة جبرية من الحقل الكمي ..."
  },
  "negative2": {
    "text": "نظرية الحقول الكمومية لها تطبيقات واسعة تشمل العديد من العلوم الفيزيائية ..."
  },
  "negative3": {
    "text": "النظرية الكهرومغناطيسية لها دور محوري في نظرية الحقول الكمومية ..."
  },
  "negative4": {
    "text": "الحقل الكمي يستخدم الآن في الفيزياء النظرية وتطبيقات أخرى ..."
  },
  "similarity1": 0.75,
  "similarity2": 0.72,
  "similarity3": 0.70,
  "similarity4": 0.68
}

Dataset Statistics

🔸Number of rows: 12,373

🔸Fields: 6 (query, positive, 4 negatives)

Similarity Ranges:

🔸negative1: Average similarity: ~0.7

🔸negative4: Average similarity: ~0.65

Languages: Arabic (Modern Standard Arabic).

Dataset Analysis and Insights

1. Average Similarity Across Negatives:


🔸The average similarity between the positive passage and the negatives decreases as the rank increases. Below is a bar chart visualizing the average similarity for the top 30 negatives in the original dataset, focusing on the top 4 for this version.


2. Similarity Distributions:

🔸The similarity scores for each negative passage are distributed differently. Below are the histograms for the similarity distributions of the top 30 negatives, emphasizing the scores for negative1 to negative4.

3. Insights

The top-ranked negatives (negative1 and negative2) are significantly closer in similarity to the positive passage, making them challenging and ideal for training advanced retrieval models. The similarity drops slightly for negative3 and negative4, but they remain "hard negatives," offering diverse yet challenging non-relevant passages for contrastive learning.

How to Use This Dataset

from datasets import load_dataset

dataset = load_dataset('Omartificial-Intelligence-Space/Arabic-With-Ranked-Hard-Negatives')
dataset
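Once loaded, each row can be expanded into standard (query, positive, negative) training triplets, one per ranked negative. A minimal sketch (the sample row below is illustrative, not drawn from the actual data):

```python
# A minimal sketch: expand one dataset row into (query, positive, negative)
# training triplets, one per ranked hard negative.
def row_to_triplets(row):
    return [
        (row["query"], row["positive"], row[f"negative{i}"])
        for i in range(1, 5)
    ]

# Illustrative row (not drawn from the actual data)
sample = {
    "query": "q",
    "positive": "pos",
    "negative1": "n1",
    "negative2": "n2",
    "negative3": "n3",
    "negative4": "n4",
}
triplets = row_to_triplets(sample)  # 4 triplets sharing the same anchor
```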

Recommended Applications

▪️ Training Retrieval Models: Use the triplet structure (query, positive, negative) to train retrieval models with loss functions like triplet loss or contrastive loss.

▪️ Fine-Tuning Re-Ranking Models: Use the ranked negatives to train models to rank positives above hard negatives.

▪️ Evaluation Benchmarks: Use the dataset as a benchmark to evaluate retrieval models’ ability to handle hard negatives.
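The triplet-loss setup mentioned above can be sketched directly on embedding vectors. A minimal NumPy sketch; the random vectors stand in for the outputs of an Arabic embedding model such as GATE:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(q, pos, neg, margin=0.2):
    # Hinge on cosine similarity: push sim(q, pos) above sim(q, neg)
    # by at least `margin`; zero loss once the margin is satisfied.
    return max(0.0, margin - cosine(q, pos) + cosine(q, neg))

# Random stand-ins for query / positive / hard-negative embeddings
rng = np.random.default_rng(0)
q, pos, neg = rng.normal(size=(3, 8))
loss = triplet_loss(q, pos, neg)  # non-negative scalar
```

Hard negatives matter here precisely because sim(q, neg) is close to sim(q, pos), so the hinge stays active and keeps producing gradient signal.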

Dataset Creation Process

✔️ Original Data: The Arabic subset of the Mr. TyDi dataset was used as the foundation.

✔️ Embedding Model: The GATE Arabic embedding model was employed to calculate similarity scores between the positive passage and all candidate negatives.

✔️ Ranking Negatives: For each query, the negatives were ranked by descending similarity, and the top 4 were selected as hard negatives.

✔️ Filtering and Validation: The dataset was validated to ensure the semantic integrity of negatives.
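The ranking step above can be sketched as follows. A minimal NumPy sketch; the random embeddings are placeholders for GATE model outputs, and 30 candidates mirrors the top-30 pool mentioned in the analysis section:

```python
import numpy as np

def top_k_hard_negatives(pos_emb, neg_embs, k=4):
    # Cosine similarity of every candidate negative against the positive
    sims = neg_embs @ pos_emb / (
        np.linalg.norm(neg_embs, axis=1) * np.linalg.norm(pos_emb)
    )
    order = np.argsort(-sims)[:k]  # indices of the k most similar candidates
    return order.tolist(), sims[order].tolist()

# Random placeholder embeddings: 1 positive, 30 candidate negatives
rng = np.random.default_rng(0)
pos_emb = rng.normal(size=16)
neg_embs = rng.normal(size=(30, 16))
idx, scores = top_k_hard_negatives(pos_emb, neg_embs)
# `scores` is sorted in descending order: negative1 is the hardest
```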

Limitations and Considerations

▪️ Domain-Specific Bias: The embedding model might favor specific domains, impacting the selection of negatives.

▪️ Similarity Metric: The dataset relies on the embedding model's similarity scores, which may not perfectly align with human judgment.

Citation Information

If you use this dataset in your research, please cite the original Mr. TyDi paper and this dataset as follows:

@article{mrtydi,
      title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, 
      author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
      year={2021},
      journal={arXiv:2108.08787},
}

@dataset{Omartificial-Intelligence-Space,
      title={Arabic With Ranked Hard Negatives},
      author={Omer Nacar},
      year={2024},
      note={Hugging Face Dataset Repository}
}