NASA-IR benchmark

NASA SMD and IBM Research developed NASA-IR, a domain-specific information retrieval benchmark spanning almost 500 question-answer pairs across the Earth science, planetary science, heliophysics, astrophysics, and biological and physical sciences domains. Specifically, we sampled 166 paragraphs from AGU, AMS, ADS, PMC, and PubMed and manually annotated each with 3 questions answerable from that paragraph, resulting in 498 questions. We used 398 of these questions as the training set and the remaining 100 as the validation set.
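As a rough sketch of the shape this construction yields, the counts above can be reproduced with pandas. The column names and IDs below are hypothetical illustrations, not the dataset's actual schema, and the positional split is only for demonstration:

```python
import pandas as pd

# Hypothetical schema sketch: 166 paragraphs x 3 questions each = 498 pairs.
paragraphs = [f"para_{i}" for i in range(166)]
rows = [
    {"question": f"q_{i}_{j}", "paragraph_id": p}
    for i, p in enumerate(paragraphs)
    for j in range(3)
]
df = pd.DataFrame(rows)

# Illustrative 398/100 split by position (the real split may differ).
train, validation = df.iloc[:398], df.iloc[398:]
print(len(df), len(train), len(validation))  # 498 398 100
```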

To comprehensively evaluate information retrieval systems and mimic real-world retrieval conditions, we combined these annotated paragraphs with 26,839 random ADS abstracts. On average, each query is 12 words long and each paragraph is 120 words long. We use Recall@10 as the evaluation metric, since each question has exactly one relevant document.
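Because each question has a single relevant document, Recall@10 reduces to checking whether that document appears among a system's top 10 retrieved results, averaged over queries. A minimal sketch with toy data (not the actual evaluation code):

```python
def recall_at_k(ranked_ids, relevant_id, k=10):
    """Recall@k when a query has exactly one relevant document:
    1.0 if that document appears in the top-k results, else 0.0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

# Toy run: 3 queries, each with one relevant paragraph.
runs = [
    (["p7", "p2", "p9"], "p2"),  # relevant doc ranked 2nd -> hit
    (["p1", "p4", "p5"], "p5"),  # relevant doc ranked 3rd -> hit
    (["p3", "p8", "p6"], "p0"),  # relevant doc not in top-k -> miss
]
scores = [recall_at_k(ranked, rel, k=10) for ranked, rel in runs]
mean_recall = sum(scores) / len(scores)
print(mean_recall)  # 2 of 3 toy queries hit, so roughly 0.667
```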

Evaluation results

(Figure: evaluation results)

Note: This dataset is released in support of the training and evaluation of the encoder language model "Indus".

The accompanying paper can be found here: https://arxiv.org/abs/2405.10725
