---
license: apache-2.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: prompt
    dtype: string
  - name: figure
    dtype: image
  - name: text
    dtype: string
  - name: gt
    dtype: string
  - name: difficulty_level
    dtype: string
  splits:
  - name: train
    num_bytes: 128377449
    num_examples: 138
  download_size: 128251166
  dataset_size: 128377449
configs:
- config_name: biogr
  data_files:
  - split: train
    path: data/biogr-*
- config_name: dft
  data_files:
  - split: train
    path: data/dft-*
- config_name: geo
  data_files:
  - split: train
    path: data/geo-*
- config_name: hfd
  data_files:
  - split: train
    path: data/hfd-*
- config_name: hfe
  data_files:
  - split: train
    path: data/hfe-*
- config_name: mpve
  data_files:
  - split: train
    path: data/mpve-*
- config_name: pdb
  data_files:
  - split: train
    path: data/pdb-*
- config_name: qecc
  data_files:
  - split: train
    path: data/qecc-*
task_categories:
- question-answering
language:
- en
tags:
- biology
- physics
- science
- reasoning
size_categories:
- n<1K
---

# CURIE Dataset

Hugging Face version of the dataset from [CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning](https://arxiv.org/pdf/2503.13517). Also available via [GitHub](https://github.com/google/curie/tree/main) (Apache-2.0 license).

## Dataset Structure

CURIE consists of 10 tasks that are mapped to 8 datasets:

| Dataset ID | Task Name | Domain | Description |
|------------|-----------|--------|-------------|
| biogr | Biodiversity Georeferencing | Biodiversity | Determine the latitude/longitude bounding box encompassing the region shown in the map image. |
| dft | Density Functional Theory Analysis | Condensed Matter Physics | Three tasks related to DFT. |
| pdb | Protein Sequence Reconstruction | Protein Sequencing | Reconstruct a protein's amino-acid sequence from its 3D structure. |
| geo | Geospatial Dataset Extraction | Geospatial Analysis | Extract all geospatial datasets used, along with their spatial and temporal extents. |
| mpve | Materials Property Value Extraction | Materials Science | Identify all instances of materials, their properties, and descriptors. |
| qecc | Quantum Error Correction Codes | Quantum Computing | Create a YAML file with the error correction code's properties. |
| hfd | Hartree-Fock Tasks Derivation | Condensed Matter Physics | Derive the Hartree-Fock mean-field Hamiltonian for a quantum many-body system. |
| hfe | Hartree-Fock Tasks Extraction | Condensed Matter Physics | Extract the most general mean-field Hamiltonian. |

Each dataset contains the fields:

`id (str)`: The sample id. \
`prompt (str)`: The prompt containing the task description for the LLM. \
`text (str)`: The sample-specific information needed to solve the task. \
`gt (str)`: The ground-truth answer as a JSON string. To obtain the structured representation, load the string with json5 (see Example Usage). \
`difficulty_level (str)`: Difficulty level of the task.

Some datasets have additional fields:

| Dataset | Additional Fields | Content |
|---------|-------------------|---------|
| biogr | figure (PIL Image) | Figure containing a geographical map |
| dft | prompt_metadata (str), prompt_structure_data (str) | Prompts for the subtasks dft-structure & dft-metadata |
| mpve | prompt_exclude_trivia (str), prompt_bandgap_refractive (str) | Ablation prompts |

## Example Usage

Query `gpt-4o` for a response on the hfd dataset using a LangChain chat model:

```python
import json5
from datasets import load_dataset
from langchain.chat_models.base import init_chat_model

dataset = load_dataset('nhop/curie', 'hfd')
llm = init_chat_model('gpt-4o')

for sample in dataset["train"]:
    print(sample["prompt"])
    # Embed the sample-specific text into the prompt template
    prompt = sample["prompt"].replace("{{text}}", sample["text"])
    response = llm.invoke(prompt)
    print(response.content)
    # The ground truth is stored as a JSON string; parse it with json5
    groundtruth = json5.loads(sample["gt"])
    print(groundtruth)
    break
```

## Citation

```
@inproceedings{cui2025curie,
  title={CURIE: Evaluating LLMs on Multitask Scientific Long-Context Understanding and Reasoning},
  author={Cui, Hao and Shamsi, Zahra and Cheon, Gowoon and Ma, Xuejian and Li, Shutong and Tikhanovskaya, Maria and Norgaard, Peter Christian and Mudur, Nayantara and Plomecka, Martyna Beata and Raccuglia, Paul and others},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```
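
## Note on Prompt Templating and Ground Truth Parsing

The placeholder substitution and ground-truth parsing steps from the example above can be exercised without network access or an API key. Below is a minimal sketch using an invented sample: the field names follow the dataset schema, but the contents are illustrative, and a strict-JSON `gt` string is assumed here so the stdlib `json` module stands in for `json5` (real `gt` strings may need json5's relaxed syntax).

```python
import json

# Invented sample mimicking the dataset's fields (contents are illustrative)
sample = {
    "prompt": "Derive the Hartree-Fock Hamiltonian for the system below.\n\n{{text}}",
    "text": "A two-band Hubbard model on a square lattice.",
    "gt": '{"hamiltonian": "H = ...", "terms": 2}',
}

# Each prompt embeds the sample-specific text via the {{text}} placeholder
prompt = sample["prompt"].replace("{{text}}", sample["text"])
assert "{{text}}" not in prompt

# The ground truth is a JSON string; the dataset card recommends json5
# (which tolerates relaxed syntax), but strict JSON also parses with the stdlib
groundtruth = json.loads(sample["gt"])
print(groundtruth["terms"])
```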