X-MOD
Overview
The X-MOD model was proposed in Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like XLM-R by including language-specific modular components (language adapters) during pre-training. During fine-tuning, the language adapters in each transformer layer are frozen.
The abstract from the paper is the following:
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
This model was contributed by jvamvas.
The original code can be found here and the original documentation here.
Usage tips
- X-MOD is similar to XLM-R, but with the difference that the input language needs to be specified so that the correct language adapter can be activated.
- The main models – base and large – have adapters for 81 languages; the sketch below shows how to list them.
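The full list of adapter languages is stored in the model configuration. A minimal sketch for inspecting it, assuming the checkpoint's config exposes the `languages` attribute documented on `XmodConfig`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("facebook/xmod-base")
print(len(config.languages))  # number of language adapters (81 for the main models)
print(config.languages[:3])   # language codes such as "en_XX"
```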
Adapter Usage
Input language
There are two ways to specify the input language:
1. By setting a default language before using the model:
```python
from transformers import XmodModel

model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
```
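With a default language set, a regular forward pass activates that adapter for every sample. A minimal sketch, assuming the checkpoint ships its XLM-R-based tokenizer so that `AutoTokenizer` resolves it:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)  # the "en_XX" adapter is applied to the whole batch
print(outputs.last_hidden_state.shape)
```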
2. By explicitly passing the index of the language adapter for each sample:
```python
import torch

# `model` is the XmodModel loaded above
input_ids = torch.tensor(
    [
        [0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
        [0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
    ]
)
lang_ids = torch.LongTensor(
    [
        0,  # en_XX
        8,  # de_DE
    ]
)
output = model(input_ids, lang_ids=lang_ids)
```
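The integers in `lang_ids` are positions in the configuration's language list. Rather than hard-coding them, the indices can be looked up; a sketch, assuming the adapter order follows `config.languages`:

```python
lang_ids = torch.LongTensor(
    [model.config.languages.index(code) for code in ("en_XX", "de_DE")]
)
```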
Fine-tuning
The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided:
```python
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
```
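One way to sanity-check the freeze is to count which parameters still require gradients; a minimal sketch using plain PyTorch introspection:

```python
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"trainable: {trainable:,} parameters, frozen: {frozen:,} parameters")
```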
Cross-lingual transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
```python
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
```
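A minimal sketch of that evaluation step, assuming a sequence classification head fine-tuned on English data (the checkpoint path is a hypothetical placeholder):

```python
import torch
from transformers import AutoTokenizer, XmodForSequenceClassification

# "path/to/finetuned-xmod" is a hypothetical placeholder for your fine-tuned checkpoint
model = XmodForSequenceClassification.from_pretrained("path/to/finetuned-xmod")
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")

model.set_default_language("de_DE")  # activate the German adapter
inputs = tokenizer("Das ist ein Beispielsatz.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```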
Resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
- Causal language modeling task guide
- Masked language modeling task guide
- Multiple choice task guide
XmodConfig
[[autodoc]] XmodConfig
XmodModel
[[autodoc]] XmodModel
- forward
XmodForCausalLM
[[autodoc]] XmodForCausalLM
- forward
XmodForMaskedLM
[[autodoc]] XmodForMaskedLM
- forward
XmodForSequenceClassification
[[autodoc]] XmodForSequenceClassification
- forward
XmodForMultipleChoice
[[autodoc]] XmodForMultipleChoice
- forward
XmodForTokenClassification
[[autodoc]] XmodForTokenClassification
- forward
XmodForQuestionAnswering
[[autodoc]] XmodForQuestionAnswering
- forward