QiushiSun and nielsr (HF Staff) committed
Commit 511f2ec (verified) · Parent: 4efc54b

Add link to code repository (#1)


- Add link to code repository (b9f0d985e4cb2b9f80c021099a2d2836c7d6d877)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +10 -4
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-license: apache-2.0
-library_name: transformers
 base_model: OpenGVLab/InternVL2-4B
+library_name: transformers
+license: apache-2.0
 pipeline_tag: image-text-to-text
 ---
 
@@ -137,9 +137,15 @@ tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast
 pixel_values = load_image('./web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png', max_num=6).to(torch.bfloat16).cuda()
 generation_config = dict(max_new_tokens=1024, do_sample=True)
 
-question = "<image>\nYou are a GUI task expert, I will provide you with a high-level instruction, an action history, a screenshot with its corresponding accessibility tree.\n High-level instruction: {high_level_instruction}\n Action history: {action_history}\n Accessibility tree: {a11y_tree}\n Please generate the low-level thought and action for the next step."
+question = "<image>
+You are a GUI task expert, I will provide you with a high-level instruction, an action history, a screenshot with its corresponding accessibility tree.
+High-level instruction: {high_level_instruction}
+Action history: {action_history}
+Accessibility tree: {a11y_tree}
+Please generate the low-level thought and action for the next step."
 response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
-print(f'User: {question}\nAssistant: {response}')
+print(f'User: {question}
+Assistant: {response}')
 ```
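
For reference, a minimal sketch of how the changed snippet is typically driven end to end is shown below. This is not the repository's verbatim README: it assumes the `load_image` preprocessing helper defined earlier in that README, `path` stands in for this model's Hub repo id, and the `high_level_instruction`, `action_history`, and `a11y_tree` values are illustrative placeholders rather than anything added by this commit.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed setup: `path` should be this model's Hub repo id, and `load_image`
# is the image-preprocessing helper defined earlier in the README.
path = "path/to/this/checkpoint"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

pixel_values = load_image('./web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# The README's prompt is a template; the curly-brace fields are meant to be
# filled with the task-specific context before calling the model.
prompt_template = (
    "<image>\n"
    "You are a GUI task expert, I will provide you with a high-level instruction, "
    "an action history, a screenshot with its corresponding accessibility tree.\n"
    "High-level instruction: {high_level_instruction}\n"
    "Action history: {action_history}\n"
    "Accessibility tree: {a11y_tree}\n"
    "Please generate the low-level thought and action for the next step."
)
question = prompt_template.format(
    high_level_instruction="Open the settings page",  # illustrative value
    action_history="None",                            # illustrative value
    a11y_tree="[root] ...",                           # illustrative, truncated
)

response, history = model.chat(
    tokenizer, pixel_values, question, generation_config,
    history=None, return_history=True,
)
print(f"User: {question}\nAssistant: {response}")
```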