|
--- |
|
language: |
|
- es |
|
license: "cc-by-4.0" |
|
tags: |
|
- "national library of spain" |
|
- "spanish" |
|
- "bne" |
|
datasets: |
|
- "bne" |
|
metrics: |
|
- "ppl" |
|
widget: |
|
- text: "Este año las campanadas de La Sexta las presentará <mask>." |
|
- text: "David Broncano es un presentador de La <mask>." |
|
- text: "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje." |
|
- text: "Hay base legal dentro del marco <mask> actual." |
|
|
|
--- |
|
|
|
# RoBERTa base trained with data from the National Library of Spain (BNE)
|
|
|
## Model description
|
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained on the largest Spanish corpus known to date: a total of 570GB of clean, deduplicated text processed for this work and compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España, BNE) from 2009 to 2019.
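
As a quick smoke test of the masked-language-modelling head, the model can be queried with the `fill-mask` pipeline from Hugging Face `transformers`. A minimal sketch, assuming the model is published on the Hub under the identifier `PlanTL-GOB-ES/roberta-base-bne` (adjust to the actual repository name):

```
from transformers import pipeline

# Hypothetical Hub identifier; replace with the actual repository name.
fill_mask = pipeline("fill-mask", model="PlanTL-GOB-ES/roberta-base-bne")

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."):
    print(prediction["token_str"], round(prediction["score"], 3))
```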
|
|
|
## Training corpora and preprocessing |
|
We cleaned 59TB of WARC files and deduplicated them at the computing-node level, which yielded 2TB of clean Spanish text. A subsequent global deduplication reduced this to the final 570GB corpus.
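
The full cleaning pipeline is described in the paper linked below; as a rough illustration of the global deduplication step, the sketch below drops exact duplicates by hashing whitespace-normalized document text (a simplification: the actual pipeline first deduplicates per computing node and then globally, at terabyte scale):

```
import hashlib

def deduplicate(documents):
    """Keep the first occurrence of each exact-duplicate document (illustrative only)."""
    seen = set()
    unique = []
    for doc in documents:
        # Normalize whitespace so trivially different copies of the
        # same text collapse to a single fingerprint.
        fingerprint = hashlib.sha1(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(doc)
    return unique
```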
|
|
|
Corpus statistics:
|
|
|
| Corpora | Number of documents | Number of tokens | Size (GB) | |
|
|---------|---------------------|------------------|-----------| |
|
| BNE | 201,080,084 | 135,733,450,668 | 570 |
|
|
|
## Tokenization and pre-training |
|
We trained a byte-level BPE (BBPE) tokenizer with a vocabulary of 50,262 tokens. We held out 10,000 documents for validation and trained the model for 48 hours on 16 computing nodes, each with 4 NVIDIA V100 GPUs.
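
To inspect the byte-level BPE vocabulary, the tokenizer can be loaded on its own; again, this sketch assumes the hypothetical Hub identifier used above:

```
from transformers import AutoTokenizer

# Hypothetical Hub identifier; replace with the actual repository name.
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne")

print(tokenizer.vocab_size)  # expected: 50,262 byte-level BPE tokens
print(tokenizer.tokenize("Gracias a los datos de la BNE."))
```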
|
|
|
## Evaluation and results |
|
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish). |
|
|
|
## Citing |
|
Check out our paper for all the details: https://arxiv.org/abs/2107.07253 |
|
|
|
``` |
|
@misc{gutierrezfandino2021spanish,
      title={Spanish Language Models},
      author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
      year={2021},
      eprint={2107.07253},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
|
``` |