# CANINE

## Overview

The CANINE model was proposed in [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's among the first papers to train a Transformer without an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character level. Training at the character level inevitably comes with longer sequence lengths, which CANINE solves with an efficient downsampling strategy before applying a deep Transformer encoder.

The abstract from the paper is the following:

*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*

This model was contributed by nielsr. The original code can be found here.

## Usage tips

- CANINE uses no fewer than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, the "deep" encoder is applied. Finally, after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper; the shape arithmetic is sketched after this list.
- CANINE uses a max sequence length of 2048 characters by default. One can use [CanineTokenizer] to prepare text for the model.
- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point). For token classification tasks, however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper.
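
To make the downsampling concrete, here is a purely illustrative sketch of the shape arithmetic; the rate of 4 is the value used in the paper (exposed as the `downsampling_rate` attribute of [CanineConfig]):

```python
# Illustrative arithmetic only, not library code: with a downsampling rate
# of 4, a 2048-character input is reduced to 512 positions before the deep
# encoder, then upsampled back to 2048 characters for the final shallow encoder.
max_chars, downsampling_rate = 2048, 4

deep_encoder_length = max_chars // downsampling_rate
print(deep_encoder_length)  # 512
```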

Model checkpoints:

- [google/canine-c](https://huggingface.co/google/canine-c): Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
- [google/canine-s](https://huggingface.co/google/canine-s): Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
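
Either checkpoint can be fine-tuned for sequence classification with [CanineForSequenceClassification]. A minimal sketch, where `num_labels` and the label value are placeholders; the freshly initialized classification head will trigger an expected warning about untrained weights:

```python
from transformers import CanineTokenizer, CanineForSequenceClassification
import torch

tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
# num_labels=2 is a placeholder; the classification head is randomly initialized
model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=2)

encoding = tokenizer(["hello world"], padding="longest", return_tensors="pt")
labels = torch.tensor([1])  # placeholder label

outputs = model(**encoding, labels=labels)
print(outputs.loss, outputs.logits.shape)  # training loss and logits of shape (batch_size, num_labels)
```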

## Usage example

CANINE works on raw characters, so it can be used without a tokenizer:

```python
from transformers import CanineModel
import torch

model = CanineModel.from_pretrained("google/canine-c")  # model pre-trained with autoregressive character loss

text = "hello world"
# use Python's built-in ord() function to turn each character into its unicode code point id
input_ids = torch.tensor([[ord(char) for char in text]])

outputs = model(input_ids)  # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```

For batched inference and training, however, it is recommended to make use of the tokenizer (to pad/truncate all sequences to the same length):

```python
from transformers import CanineTokenizer, CanineModel

model = CanineModel.from_pretrained("google/canine-c")
tokenizer = CanineTokenizer.from_pretrained("google/canine-c")

inputs = ["Life is like a box of chocolates.", "You never know what you're gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

outputs = model(**encoding)  # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```
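
Token-level tasks follow the same pattern. As noted in the usage tips, the model upsamples back to the character length internally, so [CanineForTokenClassification] produces one logit vector per input character (including the special tokens). A minimal sketch, where `num_labels` is a placeholder and the token classification head is randomly initialized:

```python
from transformers import CanineTokenizer, CanineForTokenClassification

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
# num_labels=5 is a placeholder; the token classification head is randomly initialized
model = CanineForTokenClassification.from_pretrained("google/canine-c", num_labels=5)

encoding = tokenizer(["hello world"], padding="longest", return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels): one prediction per character
```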

## Resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Multiple choice task guide

## CanineConfig

[[autodoc]] CanineConfig

## CanineTokenizer

[[autodoc]] CanineTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences

## CANINE specific outputs

[[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling

## CanineModel

[[autodoc]] CanineModel
    - forward

## CanineForSequenceClassification

[[autodoc]] CanineForSequenceClassification
    - forward

## CanineForMultipleChoice

[[autodoc]] CanineForMultipleChoice
    - forward

## CanineForTokenClassification

[[autodoc]] CanineForTokenClassification
    - forward

## CanineForQuestionAnswering

[[autodoc]] CanineForQuestionAnswering
    - forward