).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]  # make sure to normalize your bounding boxes
word_labels = [1, 2]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image'])
Use case 4: visual question answering (inference), apply_ocr=True
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. |
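As a minimal sketch of this use case, assuming a processor instantiated with the default `apply_ocr=True` (the checkpoint name, file path, and question string below are placeholders):

```python
from PIL import Image
from transformers import LayoutLMv2Processor

# apply_ocr defaults to True, so the processor runs OCR on the image itself
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")

# "document.png" is a placeholder path to a page image of your document
image = Image.open("document.png").convert("RGB")
question = "What's his name?"

# pass the question as the text argument; the OCR'd words and their
# normalized bounding boxes are added by the processor
encoding = processor(image, question, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
```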