Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

def preprocess_function(examples):
    return tokenizer([" ".join(x) for x in examples["answers.text"]])

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method.
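To see what `preprocess_function` receives when `map` is called with `batched=True`, here is a minimal, self-contained sketch. The whitespace `tokenizer` below is a stand-in so the example runs without 🤗 Transformers; the batch dict mimics the shape 🤗 Datasets passes in (a list of string lists under `"answers.text"`).

```python
# Stand-in tokenizer (assumption: a real run would use a 🤗 Transformers
# tokenizer instead of this whitespace splitter).
def tokenizer(texts):
    return {"input_ids": [t.split() for t in texts]}

def preprocess_function(examples):
    # Join each example's list of answer strings, then tokenize the batch.
    return tokenizer([" ".join(x) for x in examples["answers.text"]])

# A batch as map(..., batched=True) would pass it: columns map to lists,
# one entry per example.
batch = {"answers.text": [["first answer", "second answer"], ["only answer"]]}
out = preprocess_function(batch)
print(out["input_ids"])  # one token list per example
```

With a loaded dataset (e.g. a hypothetical `eli5` object from earlier in the tutorial), the call would look like `eli5.map(preprocess_function, batched=True)`, which lets the function process many examples per call instead of one at a time.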