# HerBERT
|
## Overview
|
The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based language model trained on Polish corpora using only a masked language modeling (MLM) objective with dynamic whole-word masking.
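
Whole-word masking means that when any sub-token of a word is selected for masking, every sub-token of that word is masked with it, and the masking is dynamic in that masked positions are re-sampled on each pass over the data rather than fixed once during preprocessing. As a minimal sketch of the objective (not HerBERT's actual pretraining code), the example below uses the `DataCollatorForWholeWordMask` utility from `transformers`; it is shown with a WordPiece tokenizer such as `bert-base-cased`, because that collator detects word boundaries via the "##" continuation prefix:

```python
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

# Illustration of whole-word masking with a WordPiece tokenizer; HerBERT
# applied the same idea (mask all sub-tokens of a word, re-sampling the
# masked positions dynamically) over its own Polish vocabulary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

examples = [tokenizer("Warsaw is the capital of Poland.")]
batch = collator(examples)

# Selected positions in input_ids are replaced with [MASK] (or occasionally a
# random or unchanged token, per BERT's 80/10/10 scheme); labels are -100
# everywhere except at selected positions, which hold the original token ids.
print(batch["input_ids"])
print(batch["labels"])
```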
|
The abstract from the paper is the following:

In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.
|
This model was contributed by rmroczkowski. The original code can be found here.
|
## Usage example
|
```python
from transformers import HerbertTokenizer, RobertaModel

tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")

encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
outputs = model(encoded_input)
```
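
The base model returns contextual token embeddings rather than a single sentence vector. As a minimal follow-up, assuming mean pooling as the sentence representation (one common choice, not something the HerBERT authors prescribe):

```python
# outputs.last_hidden_state has shape (batch_size, sequence_length, hidden_size)
last_hidden = outputs.last_hidden_state

# Mean-pool over the token dimension to get one vector per sentence.
sentence_embedding = last_hidden.mean(dim=1)
print(sentence_embedding.shape)  # (1, 768) for this base-sized checkpoint
```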
|
HerBERT can also be loaded using `AutoTokenizer` and `AutoModel`:
|
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1")
```
|
|
|
HerBERT's implementation is the same as BERT's except for the tokenization method. Refer to the BERT documentation for API reference and examples.
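
Since the tokenizer is where HerBERT diverges from BERT, a quick way to see the difference is to inspect the sub-tokens it produces (a small sanity check using the tokenizer checkpoint from the example above):

```python
from transformers import HerbertTokenizer

tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")

# HerbertTokenizer applies BERT-style pre-tokenization followed by BPE, so
# its sub-token pieces differ from BERT's WordPiece output even though the
# model body is the same architecture.
print(tokenizer.tokenize("Kto ma lepszą sztukę, ma lepszy rząd."))
```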
|
|
|
## HerbertTokenizer
|
[[autodoc]] HerbertTokenizer |
|
## HerbertTokenizerFast
|
[[autodoc]] HerbertTokenizerFast |