Model Card for impresso-project/ner-stacked-bert-multilingual
The Impresso NER model is a multilingual named entity recognition model trained for historical document processing. It is based on a stacked Transformer architecture and identifies coarse- and fine-grained entity types in digitized historical texts: persons, organizations, locations, time expressions, and products, along with person attributes such as names and titles.
Model Details
Model Description
- Developed by: the Impresso team at EPFL. Impresso is an interdisciplinary research project focused on historical media analysis across languages, time, and modalities, funded by the Swiss National Science Foundation (CRSII5_173719, CRSII5_213585) and the Luxembourg National Research Fund (grant No. 17498891).
- Model type: Stacked BERT-based token classification for named entity recognition
- Languages: French, German, English (with support for multilingual historical texts)
- License: AGPL v3+
- Finetuned from: dbmdz/bert-medium-historic-multilingual-cased
Model Architecture
The model architecture consists of the following components:
- A pre-trained BERT encoder (multilingual historic BERT) as the base.
- One or two Transformer encoder layers stacked on top of the BERT encoder.
- A Conditional Random Field (CRF) decoder layer to model label dependencies.
- Learned absolute positional embeddings for improved handling of noisy inputs.
These additional Transformer layers help in mitigating the effects of OCR noise, spelling variation, and non-standard linguistic usage found in historical documents. The entire stack is fine-tuned end-to-end for token classification.
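A minimal sketch of this stack in PyTorch, assuming the third-party pytorch-crf package for the CRF layer; the class name, number of attention heads, label count, and other details are illustrative, not the repository's actual implementation:

import torch
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class StackedBertCRF(nn.Module):
    def __init__(self, base="dbmdz/bert-medium-historic-multilingual-cased",
                 num_labels=11,  # e.g. IOB tags over the 5 coarse types plus O
                 stacked_layers=2, max_len=512):
        super().__init__()
        self.bert = AutoModel.from_pretrained(base)     # pre-trained historic BERT encoder
        hidden = self.bert.config.hidden_size
        self.pos_emb = nn.Embedding(max_len, hidden)    # learned absolute positional embeddings
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.stack = nn.TransformerEncoder(layer, num_layers=stacked_layers)
        self.classifier = nn.Linear(hidden, num_labels)
        self.crf = CRF(num_labels, batch_first=True)    # models label transition dependencies

    def forward(self, input_ids, attention_mask, labels=None):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        positions = torch.arange(h.size(1), device=h.device).unsqueeze(0)
        h = self.stack(h + self.pos_emb(positions),
                       src_key_padding_mask=~attention_mask.bool())
        emissions = self.classifier(h)
        if labels is not None:
            # training: negative log-likelihood of the gold tag sequence under the CRF
            return -self.crf(emissions, labels, mask=attention_mask.bool())
        return self.crf.decode(emissions, mask=attention_mask.bool())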
Entity Types Supported
The model supports both coarse-grained and fine-grained entity types defined in the HIPE-2020/2022 guidelines. The output format of the model includes structured predictions with contextual and semantic details. Each prediction is a dictionary with the following fields:
{
  'type': 'pers' | 'org' | 'loc' | 'time' | 'prod',
  'confidence_ner': float,  # confidence score (0-100)
  'surface': str,           # surface form in the text
  'lOffset': int,           # start character offset
  'rOffset': int,           # end character offset
  'name': str,              # optional: full name (for persons)
  'title': str,             # optional: title (for persons)
  'function': str           # optional: function (if detected)
}
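The character offsets index into the original input string; a minimal sketch of consuming these fields (entities and sentence refer to the usage example under "How to Get Started" below):

# lOffset/rOffset are character positions into the original input string.
for entity in entities:
    span = sentence[entity["lOffset"]:entity["rOffset"]]
    print(entity["type"], entity["confidence_ner"], repr(span))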
Coarse-Grained Entity Types:
- pers: Person entities (individuals, collectives, authors)
- org: Organizations (administrative, enterprise, press agencies)
- prod: Products (media)
- time: Time expressions (absolute dates)
- loc: Locations (towns, regions, countries, physical, facilities)
When the surrounding text contains them, the model also returns person-specific attributes:
- name: canonical full name
- title: honorific or title (e.g., "king", "chancellor")
- function: role or function in context (if available)
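A minimal sketch of reading these optional attributes; dict.get handles mentions for which no title or function was detected (entities as in the usage example below):

for entity in entities:
    if entity["type"] == "pers":
        # name/title/function are present only when detected in context
        print(entity["surface"],
              "| name:", entity.get("name", "n/a"),
              "| title:", entity.get("title", "n/a"),
              "| function:", entity.get("function", "n/a"))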
Model Sources
- Repository: https://huggingface.co/impresso-project/ner-stacked-bert-multilingual
- Paper: "Alleviating digitization errors in named entity recognition for historical documents", CoNLL 2020 (see Citation below)
- Demo: Impresso project
Uses
Direct Use
The model is intended to be used directly with the Hugging Face pipeline for token classification, specifically via the custom generic-ner task, on historical texts.
Downstream Use
Can be used for downstream tasks such as:
- Historical information extraction
- Biographical reconstruction
- Place and person mention detection across historical archives (see the sketch below)
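As a sketch of the last use case, mention counts can be aggregated over a collection of documents; corpus here is a stand-in for any iterable of document strings, and ner_pipeline is built as in the usage example below:

from collections import Counter

place_counts = Counter()
for text in corpus:  # corpus: iterable of document strings (assumption)
    for entity in ner_pipeline(text):
        if entity["type"] == "loc":
            place_counts[entity["surface"]] += 1

print(place_counts.most_common(10))  # most frequently mentioned places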
Out-of-Scope Use
- Not suitable for contemporary named entity recognition in domains such as social media or modern news.
- Not optimized for OCR-free modern corpora.
Bias, Risks, and Limitations
Because it was trained on historical documents, the model may reflect historical biases and inaccuracies. It may underperform on contemporary texts and on non-European languages.
Recommendations
- Users should be cautious of historical and typographical biases.
- Consider post-processing to filter false positives caused by OCR noise (see the sketch below).
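A minimal sketch of such a filter, thresholding on the confidence score; the threshold value is an assumption to be tuned on held-out data, not a calibrated recommendation:

def filter_entities(entities, min_confidence=50.0):
    # Drop low-confidence predictions, which often stem from OCR noise.
    # min_confidence is an assumed starting point; scores range 0-100.
    return [e for e in entities if e["confidence_ner"] >= min_confidence]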
How to Get Started with the Model
from transformers import AutoTokenizer, pipeline

MODEL_NAME = "impresso-project/ner-stacked-bert-multilingual"

# trust_remote_code is required: the stacked architecture and the custom
# generic-ner task are defined in the model repository, not in transformers.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
ner_pipeline = pipeline("generic-ner", model=MODEL_NAME, tokenizer=tokenizer,
                        trust_remote_code=True, device="cpu")
sentence = "En l'an 1348, au plus fort des ravages de la peste noire à travers l'Europe, le Royaume de France se trouvait à la fois au bord du désespoir et face à une opportunité. À la cour du roi Philippe VI, les murs du Louvre étaient animés par les rapports sombres venus de Paris et des villes environnantes. La peste ne montrait aucun signe de répit, et le chancelier Guillaume de Nogaret, le conseiller le plus fidèle du roi, portait le lourd fardeau de gérer la survie du royaume."
entities = ner_pipeline(sentence)
print(entities)
Example Output
[
{'type': 'time', 'confidence_ner': 85.0, 'surface': "an 1348", 'lOffset': 0, 'rOffset': 12},
{'type': 'loc', 'confidence_ner': 90.75, 'surface': "Europe", 'lOffset': 69, 'rOffset': 75},
{'type': 'loc', 'confidence_ner': 75.45, 'surface': "Royaume de France", 'lOffset': 80, 'rOffset': 97},
{'type': 'pers', 'confidence_ner': 85.27, 'surface': "roi Philippe VI", 'lOffset': 181, 'rOffset': 196, 'title': "roi", 'name': "roi Philippe VI"},
{'type': 'loc', 'confidence_ner': 30.59, 'surface': "Louvre", 'lOffset': 210, 'rOffset': 216},
{'type': 'loc', 'confidence_ner': 94.46, 'surface': "Paris", 'lOffset': 266, 'rOffset': 271},
{'type': 'pers', 'confidence_ner': 96.1, 'surface': "chancelier Guillaume de Nogaret", 'lOffset': 350, 'rOffset': 381, 'title': "chancelier", 'name': "Guillaume de Nogaret"},
{'type': 'loc', 'confidence_ner': 49.35, 'surface': "Royaume", 'lOffset': 80, 'rOffset': 87},
{'type': 'loc', 'confidence_ner': 24.18, 'surface': "France", 'lOffset': 91, 'rOffset': 97}
]
Training Details
Training Data
The model was trained on the Impresso HIPE-2020 dataset, a subset of the HIPE-2022 corpus, which includes richly annotated OCR-transcribed historical newspaper content.
Training Procedure
Preprocessing
OCR content was cleaned and segmented. Entity types follow the HIPE-2020 typology.
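For orientation, HIPE-style training data pairs tokens with IOB tags over the coarse types listed above; an illustrative (invented) example:

# Illustrative token/tag pair in IOB format (invented, not from the corpus).
tokens = ["Le", "chancelier", "Guillaume", "de", "Nogaret", "quitta", "Paris", "."]
tags   = ["O",  "B-pers",     "I-pers",    "I-pers", "I-pers", "O",    "B-loc", "O"]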
Training Hyperparameters
- Training regime: Mixed precision (fp16)
- Epochs: 5
- Max sequence length: 512
- Base model: dbmdz/bert-medium-historic-multilingual-cased
- Stacked Transformer layers: 2
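For orientation, the regime above expressed as Hugging Face TrainingArguments; this is a sketch, not the project's actual training script, and the batch size and learning rate are assumptions not stated in this card:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ner-stacked-bert-multilingual",
    num_train_epochs=5,              # Epochs: 5
    fp16=True,                       # mixed-precision (fp16) training regime
    per_device_train_batch_size=16,  # assumption: not stated in the card
    learning_rate=5e-5,              # assumption: common fine-tuning default
)
# The max sequence length (512) is applied at tokenization time.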
Speeds, Sizes, Times
- Model size: ~500MB
- Training time: ~1h on 1 GPU (NVIDIA TITAN X)
Evaluation
Testing Data
Held-out portion of HIPE-2020 (French, German)
Metrics
- F1-score (micro, macro)
- Entity-level precision/recall (illustrated below)
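A minimal sketch of how such entity-level scores can be computed with the seqeval package; the HIPE campaigns use their own scorer (HIPE-scorer), so this is an illustrative stand-in with invented tag sequences:

from seqeval.metrics import f1_score, precision_score, recall_score

# Illustrative gold and predicted IOB tag sequences for one sentence.
y_true = [["B-pers", "I-pers", "O", "B-loc"]]
y_pred = [["B-pers", "I-pers", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0: the one predicted entity is correct
print(recall_score(y_true, y_pred))     # 0.5: one of two gold entities was found
print(f1_score(y_true, y_pred))         # 0.67 (micro-averaged by default)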
Results
| Language | Precision | Recall | F1-score |
|----------|-----------|--------|----------|
| French   | 84.2      | 81.6   | 82.9     |
| German   | 82.0      | 78.7   | 80.3     |
Summary
The model performs robustly on noisy, OCR-derived historical text while supporting fine-grained entity typologies.
Environmental Impact
- Hardware Type: NVIDIA TITAN X (Pascal, 12GB)
- Hours used: ~1 hour
- Compute Provider: EPFL, Switzerland (institutional hardware, not cloud)
- Carbon Emitted: ~0.022 kg CO₂eq (estimated)
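This estimate is consistent with a back-of-the-envelope check: the TITAN X (Pascal) has a 250 W TDP, so about one hour of training draws roughly 0.25 kWh; at an assumed grid intensity of ~0.09 kg CO₂eq/kWh, 0.25 × 0.09 ≈ 0.022 kg CO₂eq.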
Technical Specifications
Model Architecture and Objective
Stacked BERT architecture with multitask token classification head supporting HIPE-type entity labels.
Compute Infrastructure
Hardware
1x NVIDIA TITAN X (Pascal, 12GB)
Software
- Python 3.11
- PyTorch 2.0
- Transformers 4.36
Citation
BibTeX:
@inproceedings{boros2020alleviating,
title={Alleviating digitization errors in named entity recognition for historical documents},
author={Boros, Emanuela and Hamdi, Ahmed and Pontes, Elvys Linhares and Cabrera-Diego, Luis-Adri{\'a}n and Moreno, Jose G and Sidere, Nicolas and Doucet, Antoine},
booktitle={Proceedings of the 24th Conference on Computational Natural Language Learning},
pages={431--441},
year={2020}
}
Contact
- Website: https://impresso-project.ch