
ICPC World Finals Dataset

Dataset Description

The ICPC World Finals Dataset is a challenging benchmark for code generation, comprising 146 problems from the International Collegiate Programming Contest (ICPC) World Finals from 2011 to 2023. The ICPC World Finals is one of the most prestigious and difficult competitive programming contests in the world, making this dataset particularly valuable for assessing the advanced problem-solving and code generation capabilities of language models.

Dataset Statistics

  • Total Problems: 146 problems
  • Time Span: 2011-2023
  • Average Problem Complexity: High (competitive programming world finals level)
  • Languages: Problem statements in English, solutions expected in Python

Dataset Structure

from datasets import load_dataset
ds = load_dataset("HumanLastCodeExam/icpc-world-finals")
# Basic exploration
print(f"Dataset size: {len(ds['train'])} problems")
print(f"Sample problem title: {ds['train'][0]['question_title']}")

Data Fields

"question_title": "Ship Traffic",
"platform": "ICPC_world_final_2015",
"question_id": "2015_I",
"question_content": "## Problem Description\n\nFerries crossing the Strait of Gibraltar from Morocco to xxx```",
"test_cases": [{"input":"xxx","output":"xxxx"}],
"prompt":"You are an expert Python programmer.\n\n- You will be given a problem statement,xxx"+"## Problem Description\n\nFerries crossing the Strait of Gibraltar from Morocco to xxx",
"instruct":You are an expert Python programmer.\n\n- You will be given a problem statement,xxx".

Data Fields Explained

  • question_title: The title of the programming problem.
  • platform: The source contest and year, e.g. ICPC_world_final_2015.
  • question_id: A unique identifier assigned to the problem, used for reference and retrieval.
  • question_content: A comprehensive description outlining the requirements and specifications of the problem, detailing the task to be accomplished.
  • test_cases: A collection of test cases, typically including sample inputs and outputs that serve as benchmarks for validating solutions.
  • prompt: The concatenation of instruct and question_content. Use this field as the model input for code generation.
  • instruct: The instruction template used to build prompt; you may also substitute your own instruction.
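As a minimal sketch of how the fields above fit together, a candidate solution could be checked against test_cases by feeding each case's input via stdin and comparing stdout to the expected output. This is an assumption about the I/O convention (the helper below, check_solution, is illustrative and not part of the dataset); refer to the evaluation scripts in the GitHub repository for the official procedure.

```python
import subprocess
import sys

def check_solution(solution_code, test_cases, timeout=10):
    """Run candidate Python code against a problem's test_cases.

    Assumes each test case's "input" is supplied via stdin and that
    stdout, compared after stripping surrounding whitespace, must
    match "output". Verify this against the official scripts.
    """
    for case in test_cases:
        result = subprocess.run(
            [sys.executable, "-c", solution_code],
            input=case["input"],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        if result.stdout.strip() != case["output"].strip():
            return False
    return True

# Hypothetical A+B test case for illustration:
cases = [{"input": "3 4\n", "output": "7\n"}]
code = "a, b = map(int, input().split()); print(a + b)"
print(check_solution(code, cases))  # prints True
```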

Paper

@misc{li2025humanityscodeexamadvanced,
      title={Humanity's Last Code Exam: Can Advanced LLMs Conquer Human's Hardest Code Competition?}, 
      author={Xiangyang Li and Xiaopeng Li and Kuicai Dong and Quanhu Zhang and Rongju Ruan and Xinyi Dai and Xiaoshuang Liu and Shengchun Xu and Yasheng Wang and Ruiming Tang},
      year={2025},
      eprint={2506.12713},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2506.12713}, 
}

GitHub Repository

For more information, examples, and evaluation scripts:

https://github.com/Humanity-s-Last-Code-Exam/HLCE