",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
print(encoded_input)
{'input_ids': ,
'token_type_ids': ,
'attention_mask': }
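To make the behavior concrete, here is a minimal pure-Python sketch (the helper `pad_and_truncate` is hypothetical, not part of the `transformers` API) of what `padding=True` and `truncation=True` do: each sequence of token ids is truncated to `max_length`, then every sequence is padded to the longest one in the batch, with the attention mask marking real tokens as 1 and padding as 0.

```python
def pad_and_truncate(batch_ids, pad_id=0, max_length=512):
    # Truncate each sequence to at most max_length tokens.
    truncated = [ids[:max_length] for ids in batch_ids]
    # Pad every sequence to the longest one in the batch.
    longest = max(len(ids) for ids in truncated)
    input_ids, attention_mask = [], []
    for ids in truncated:
        n_pad = longest - len(ids)
        input_ids.append(ids + [pad_id] * n_pad)
        # 1 marks a real token, 0 marks padding.
        attention_mask.append([1] * len(ids) + [0] * n_pad)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# Two sequences of different lengths (example ids, not from a real vocabulary).
batch = [[101, 2054, 102], [101, 2054, 2003, 2009, 102]]
out = pad_and_truncate(batch)
```

After the call, both rows of `out["input_ids"]` have the same length, and the attention mask tells the model which positions to ignore.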
Pipelines differ in which tokenizer arguments their `__call__()` methods accept.