```python
tokenized_sequence = tokenizer.tokenize(sequence)
```
The tokens are either words or subwords.
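For example, a minimal self-contained sketch (the `bert-base-cased` checkpoint and the sample sentence are arbitrary choices for illustration) shows how words outside the vocabulary are split into subword pieces:

```python
from transformers import AutoTokenizer

# bert-base-cased is an arbitrary example checkpoint; any pretrained tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sequence = "Tokenizers split rare words into subword pieces"
tokenized_sequence = tokenizer.tokenize(sequence)
print(tokenized_sequence)
# Words already in the vocabulary stay whole; out-of-vocabulary words are broken
# into subwords, e.g. "Tokenizers" may come out as something like ["Token", "##izer", "##s"].
```

In BERT-style WordPiece vocabularies, the `##` prefix marks a subword that attaches to the token before it.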