```python
encoding = tokenizer(table, question, return_tensors="pt")

# let the model generate an answer autoregressively
outputs = model.generate(**encoding)

# decode back to text
predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(predicted_answer)
# 53
```

Note that [TapexTokenizer] also supports batched inference.
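For instance, a minimal batched sketch might look like the following; the checkpoint name, table, and questions are illustrative choices, not part of the snippet above:

```python
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

# illustrative checkpoint; any TAPEX table-QA checkpoint works the same way
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base-finetuned-wtq")

# a small example table; cells are cast to strings for the tokenizer
data = {"year": [1896, 1900, 1904], "city": ["athens", "paris", "st. louis"]}
table = pd.DataFrame.from_dict(data).astype(str)

# several questions about the same table, encoded as one padded batch
questions = [
    "in which year did paris host the olympic games?",
    "which city hosted the olympic games in 1904?",
]
encoding = tokenizer(table, questions, padding=True, return_tensors="pt")

# generate answers for the whole batch at once, then decode them
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```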