Nezha
Overview
The Nezha model was proposed in NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei et al.
The abstract from the paper is the following:
The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks
due to their capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed
representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks.
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy,
Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti)
and natural language inference (XNLI).
This model was contributed by sijunhe. The original code can be found here.
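A minimal usage sketch is shown below. It assumes the sijunhe/nezha-cn-base checkpoint published by the contributor is available on the Hub; Nezha reuses a BERT-style tokenizer, so AutoTokenizer resolves it directly.

```python
import torch
from transformers import AutoTokenizer, NezhaModel

# Assumed checkpoint name; Nezha uses a BERT-style tokenizer.
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("我爱北京天安门", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextualized token representations: (batch_size, sequence_length, hidden_size)
last_hidden_state = outputs.last_hidden_state
```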
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
NezhaConfig
[[autodoc]] NezhaConfig
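A short sketch of working with the configuration, following the usual Transformers config/model pattern; the defaults used here are illustrative rather than quoted from the paper.

```python
from transformers import NezhaConfig, NezhaModel

# Build a configuration with default hyperparameters and initialize
# a randomly weighted model from it.
configuration = NezhaConfig()
model = NezhaModel(configuration)

# The configuration can be read back from the model.
configuration = model.config
```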
NezhaModel
[[autodoc]] NezhaModel
- forward
NezhaForPreTraining
[[autodoc]] NezhaForPreTraining
- forward
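A hedged sketch of the pre-training head, which combines masked language modeling with next-sentence prediction. It assumes the BERT-style pre-training output fields (prediction_logits, seq_relationship_logits) and the sijunhe/nezha-cn-base checkpoint.

```python
import torch
from transformers import AutoTokenizer, NezhaForPreTraining

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForPreTraining.from_pretrained("sijunhe/nezha-cn-base")

inputs = tokenizer("我爱北京天安门", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

prediction_logits = outputs.prediction_logits              # MLM scores over the vocabulary
seq_relationship_logits = outputs.seq_relationship_logits  # next-sentence prediction scores
```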
NezhaForMaskedLM
[[autodoc]] NezhaForMaskedLM
- forward
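A minimal mask-filling sketch, again assuming the sijunhe/nezha-cn-base checkpoint.

```python
import torch
from transformers import AutoTokenizer, NezhaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForMaskedLM.from_pretrained("sijunhe/nezha-cn-base")

# Mask one character and ask the model to fill it in.
inputs = tokenizer("我爱北京天[MASK]门", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```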
NezhaForNextSentencePrediction
[[autodoc]] NezhaForNextSentencePrediction
- forward
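A next-sentence prediction sketch, assuming the same label convention as BERT (index 0 scores "sentence B follows sentence A") and the sijunhe/nezha-cn-base checkpoint.

```python
import torch
from transformers import AutoTokenizer, NezhaForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForNextSentencePrediction.from_pretrained("sijunhe/nezha-cn-base")

sentence_a = "今天天气很好。"
sentence_b = "我们去公园散步吧。"
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed BERT convention: index 0 = "B follows A", index 1 = "B is random".
is_next = logits.argmax(dim=-1).item() == 0
```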
NezhaForSequenceClassification
[[autodoc]] NezhaForSequenceClassification
- forward
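A sequence classification sketch in the spirit of the ChnSenti task mentioned in the abstract. The checkpoint name and num_labels=2 are illustrative assumptions; the classification head here is randomly initialized until fine-tuned.

```python
import torch
from transformers import AutoTokenizer, NezhaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
# num_labels=2 is an illustrative choice for a binary sentiment task.
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=2)

inputs = tokenizer("这家餐厅的菜很好吃。", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical gold label

outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits
predicted_class = logits.argmax(dim=-1).item()
```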
NezhaForMultipleChoice
[[autodoc]] NezhaForMultipleChoice
- forward
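A multiple-choice sketch showing the expected (batch_size, num_choices, sequence_length) input layout; the checkpoint name and the prompt/choices are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, NezhaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForMultipleChoice.from_pretrained("sijunhe/nezha-cn-base")

prompt = "天气很冷，"
choices = ["我们多穿一点衣服。", "我们去游泳吧。"]

# Pair the prompt with each choice, then add a batch dimension so the
# inputs are shaped (batch_size, num_choices, sequence_length).
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_choices)

best_choice = logits.argmax(dim=-1).item()
```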
NezhaForTokenClassification
[[autodoc]] NezhaForTokenClassification
- forward
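A token classification sketch in the spirit of the People's Daily NER task. The checkpoint name and num_labels are assumptions; a real fine-tune would use its own label set.

```python
import torch
from transformers import AutoTokenizer, NezhaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
# num_labels=7 is an illustrative tag-set size, not taken from the source.
model = NezhaForTokenClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=7)

inputs = tokenizer("人民日报位于北京。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, sequence_length, num_labels)

predicted_label_ids = logits.argmax(dim=-1)
```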
NezhaForQuestionAnswering
[[autodoc]] NezhaForQuestionAnswering
- forward
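An extractive question answering sketch using start/end span logits. The checkpoint name is an assumption, and the QA head is untrained until fine-tuned, so the decoded span is only illustrative.

```python
import torch
from transformers import AutoTokenizer, NezhaForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")  # assumed checkpoint
model = NezhaForQuestionAnswering.from_pretrained("sijunhe/nezha-cn-base")

question = "天安门在哪个城市？"
context = "天安门位于中国北京市的中心。"
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs.input_ids[0, start : end + 1]
print(tokenizer.decode(answer_ids))
```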