If there are several sentences you want to preprocess, pass them as a list to the tokenizer:

batch_sentences = [
    "But what about second breakfast?",
]
encoded_inputs = tokenizer(batch_sentences)