SyntacticAgreement
This dataset provides manually curated syntactic agreement test suites for four morphologically rich languages: Italian, Spanish, Portuguese, and Russian.
It is designed to evaluate the ability of neural language models to capture hierarchical syntactic dependencies, with a focus on agreement phenomena that go beyond English subject–verb agreement.
This dataset is designed for targeted syntactic evaluation, which does not fit standard supervised NLP tasks. For this reason, we use the "other" task category.
Motivation
Agreement is a key linguistic phenomenon for testing whether models capture hierarchical structure rather than relying on surface-level patterns (Linzen et al., 2016; Goldberg, 2019).
Unlike English, agreement in Romance and Slavic languages is morphologically richer and involves more diverse features.
Our dataset aims to evaluate state-of-the-art models on these features, providing different agreement tests per language, each organized into test suites, some of which have an adversarial version.
The test suites were manually created by linguists to ensure grammaticality, semantic plausibility, and lexical diversity, contrasting with previous approaches relying on automatically generated stimuli.
Sample test sentences
The following examples from one of our Spanish test suites (Subject - Predicative Complement agreement) illustrate a regular test sentence and its adversarial counterpart:
Standard example (grammatical vs. ungrammatical sentence, gender mismatch):
Las voluntarias cayeron enfermas.
*Las voluntarias cayeron enfermos.
'The volunteers fell ill.'
Adversarial example (a relative clause, between brackets, increases the linear distance and introduces an agreement attractor):
Las voluntarias [que ayudaron a los refugiados] cayeron enfermas.
*Las voluntarias [que ayudaron a los refugiados] cayeron enfermos.
'The volunteers [who helped the refugees] fell ill.'
Dataset structure
Each language is distributed as a .zip file containing JSON test suites.
A test suite JSON has the following structure:
{
"meta": {
"name": "attribute_agreement",
"metric": "sum",
"author": "Alba Táboas García",
"reference": "",
"language": "Italian",
"comment": "Basic suite for testing nominal agreement (number and gender) between subject and attribute in copulative constructions"
},
"region_meta": {
"1": "Subject",
"2": "Copula",
"3": "Attribute"
},
"predictions": [
{
"type": "formula",
"formula": "(3;%match%) < (3;%mismatch_num%)",
"comment": "Disagreement in number is more surprising than full agreement"
},
{
"type": "formula",
"formula": "(3;%match%) < (3;%mismatch_gend%)",
"comment": "Disagreement in gender is more surprising than full agreement"
},
{
"type": "formula",
"formula": "(3;%match%) < (3;%mismatch_num_gend%)",
"comment": "Disagreement in gender and number is more surprising than full agreement"
}
],
"items": [
{
"item_number": 1,
"conditions": [
{
"condition_name": "match",
"regions": [
{
"region_number": 1,
"content": "La storia"
},
{
"region_number": 2,
"content": "era"
},
{
"region_number": 3,
"content": "lunga."
}
]
},
{
"condition_name": "mismatch_num",
"regions": [
{
"region_number": 1,
"content": "La storia"
},
{
"region_number": 2,
"content": "era"
},
{
"region_number": 3,
"content": "lunghe."
}
]
},
{
"condition_name": "mismatch_gend",
"regions": [
{
"region_number": 1,
"content": "La storia"
},
{
"region_number": 2,
"content": "era"
},
{
"region_number": 3,
"content": "lungo."
}
]
},
{
"condition_name": "mismatch_num_gend",
"regions": [
{
"region_number": 1,
"content": "La storia"
},
{
"region_number": 2,
"content": "era"
},
{
"region_number": 3,
"content": "lunghi."
}
]
}
]
}
]
}
- meta: suite-level metadata (name, author, language, description).
- region_meta: mapping of region indices to linguistic roles.
- predictions: formulas defining the expected surprisal relations across conditions.
- items: each test item contains a set of conditions (grammatical vs. systematically ungrammatical variants).
This format follows the structure of the SyntaxGym test suites introduced by Hu et al. (2020) and extended to Spanish by Pérez-Mayos et al. (2021).
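As an illustration, a single suite file can be inspected with the standard json module. The following is a minimal sketch, in which attribute_agreement.json stands in for any suite file extracted from one of the per-language archives; it rebuilds each condition's sentence by concatenating its regions in order.
import json

# Placeholder path: any suite file extracted from one of the per-language .zip archives
with open("attribute_agreement.json", encoding="utf-8") as f:
    suite = json.load(f)

print(suite["meta"]["name"], "|", suite["meta"]["language"])

# Rebuild the full sentence of each condition by joining its regions in order
for item in suite["items"]:
    for condition in item["conditions"]:
        regions = sorted(condition["regions"], key=lambda r: r["region_number"])
        sentence = " ".join(r["content"] for r in regions)
        print(item["item_number"], condition["condition_name"], "->", sentence)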
Loading the dataset
The dataset can be loaded directly from the Hugging Face Hub:
from datasets import load_dataset
# Load the Spanish test suites
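# Note: this repository ships a loading script (SyntacticAgreement.py), so trust_remote_code=True
# is required and the installed datasets library must still support dataset scripts (versions before 3.0).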
ds = load_dataset("albalbalba/SyntacticAgreement", name="spanish", split='train', trust_remote_code=True)
# List all the available test suites for the selected language:
print(set(ds[:]['suite_name']))
# Select one test suite in particular: attribute agreement
attribute_suite = ds.filter(lambda example: example['suite_name'] == 'attribute_agreement')
Each example has the following schema:
- suite_name (string)
- item_number (int32)
- conditions (list):
  - condition_name (string)
  - content (string)
  - regions (list of {region_number, content})
- predictions (list[string])
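As a sketch of how this schema can be used, the snippet below pairs the grammatical condition of each item with its ungrammatical variants. It assumes the condition names from the JSON example above ("match" vs. "mismatch_*") and that conditions is exposed as a list of dicts; depending on how the loading script defines its features, nested fields may instead come back as a dict of lists.
# Sketch: pair the grammatical ("match") condition of each item with its
# ungrammatical ("mismatch_*") variants, rebuilding sentences from the regions.
def build_sentence(condition):
    regions = sorted(condition["regions"], key=lambda r: r["region_number"])
    return " ".join(r["content"] for r in regions)

for example in attribute_suite:
    by_name = {c["condition_name"]: build_sentence(c) for c in example["conditions"]}
    grammatical = by_name.pop("match")
    for name, ungrammatical in by_name.items():
        print(example["item_number"], name)
        print("   ", grammatical)
        print("  *", ungrammatical)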
Evaluation methodology
We recommend evaluating models with:
- minicons (Misra, 2022) — for surprisal and probability computations.
- Bidirectional models: use the modified scoring technique of Kauf & Ivanova (2023), which masks rightward tokens within the same word.
- Causal models: apply the correction of Pimentel & Meister (2024) to handle tokenization effects.
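For example, sentence-level log probabilities for a causal model can be obtained with minicons roughly as follows. This is a sketch only: the model name is a placeholder and the Pimentel & Meister (2024) correction is not applied here.
from minicons import scorer

# Placeholder model; substitute any causal LM from the Hugging Face Hub
lm = scorer.IncrementalLMScorer("gpt2", "cpu")

stimuli = [
    "Las voluntarias cayeron enfermas.",  # grammatical
    "Las voluntarias cayeron enfermos.",  # ungrammatical (gender mismatch)
]

# Sum of token log probabilities per sentence; surprisal is its negation
log_probs = lm.sequence_score(stimuli, reduction=lambda x: x.sum(0).item())
for sentence, lp in zip(stimuli, log_probs):
    print(f"{lp:8.2f}  {sentence}")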
Recommended scoring metric
Instead of binary accuracy, we recommend the mean probability ratio, computed per item as
$$\frac{1}{n} \sum_{x_i \in I} \frac{P(x_t \mid c)}{P(x_t \mid c) + P(x_i \mid c)}$$
where:
- $x_t$: grammatical target
- $x_i$: ungrammatical alternative
- $c$: context (left for causal models, both left and right for bidirectional ones)
- $I$: set of $n$ incorrect alternatives included in $item$
Values $> 0.5$ indicate the model prefers the grammatical form, with higher values meaning stronger preference.
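A minimal sketch of this metric, assuming the sentence-level log probabilities of the grammatical target and its ungrammatical alternatives have already been computed (e.g. with minicons as above):
import math

def mean_probability_ratio(logp_grammatical, logp_ungrammatical):
    """Mean probability ratio for one item: the probability of the grammatical
    target relative to each ungrammatical alternative, averaged over alternatives."""
    ratios = [
        # Equivalent to P(x_t|c) / (P(x_t|c) + P(x_i|c)), computed stably in log space
        1.0 / (1.0 + math.exp(logp_i - logp_grammatical))
        for logp_i in logp_ungrammatical
    ]
    return sum(ratios) / len(ratios)

# Toy usage with made-up log probabilities; values above 0.5 mean the grammatical form is preferred
print(mean_probability_ratio(-20.0, [-23.5, -22.8, -24.1]))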
Minimal evaluation pipeline example
Coming soon...
Citation
If you use this dataset, please cite: Assessing the Agreement Competence of Large Language Models (Táboas García & Wanner, DepLing-SyntaxFest 2025)