JZPeterPan committed (verified) · commit bee6757 · parent 358cdb8

Update README.md

Files changed (1): README.md (+129, −3)

---
license: apache-2.0
base_model:
- Qwen/Qwen2-VL-2B-Instruct
language:
- en
---
<div align="center">
<h1>
MedVLM-R1
</h1>
</div>

<div align="center">
<a href="https://arxiv.org/abs/2502.19634" target="_blank">Paper</a>
</div>

# <span id="Start">Introduction</span>
MedVLM-R1 is a medical Vision-Language Model built upon [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) and fine-tuned with the [GRPO](https://arxiv.org/abs/2402.03300) reinforcement learning framework. Trained on 600 MRI VQA samples from the [HuatuoGPT-Vision dataset](https://huggingface.co/datasets/FreedomIntelligence/Medical_Multimodal_Evaluation_Data), MedVLM-R1 generalizes well out of distribution to CT and X-ray VQA tasks. It also produces explicit medical reasoning rather than only final answers, offering greater interpretability and trustworthiness in clinical applications.
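The `<think>`/`<answer>` output format requested in the Quick Start below is what the GRPO training optimizes for. Purely as an illustration of how such rule-based rewards are commonly constructed (a sketch; the exact reward functions used for MedVLM-R1 are described in the paper and may differ), a format reward and an accuracy reward could look like this:

```python
import re

def format_reward(completion: str) -> float:
    # Illustrative only: 1.0 if the output is exactly <think>...</think><answer>...</answer>.
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, completion.strip(), flags=re.DOTALL) else 0.0

def accuracy_reward(completion: str, solution: str) -> float:
    # Illustrative only: 1.0 if the letter inside <answer> matches the ground-truth choice.
    match = re.search(r"<answer>\s*([A-Z])", completion)
    return 1.0 if match and match.group(1) == solution else 0.0
```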

# <span id="Start">Quick Start</span>

### 1. Load the model
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, GenerationConfig
from qwen_vl_utils import process_vision_info
import torch

MODEL_PATH = 'JZPeterPan/MedVLM-R1'

# Load the model in bfloat16 and spread it across the available GPUs.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

processor = AutoProcessor.from_pretrained(MODEL_PATH)

# Greedy decoding with up to 1024 new tokens.
temp_generation_config = GenerationConfig(
    max_new_tokens=1024,
    do_sample=False,
    temperature=1,
    num_return_sequences=1,
    pad_token_id=151643,
)
```
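If the `flash-attn` package is not installed in your environment, loading with `attn_implementation="flash_attention_2"` will fail. A minimal fallback, assuming you are fine with PyTorch's built-in attention kernels, is to switch the backend:

```python
# Fallback when flash-attn is unavailable: use PyTorch's SDPA attention instead.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # or "eager" for maximum compatibility
    device_map="auto",
)
```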
### 2. Load the VQA Data
Pick one of the following examples. They are samples from the [OmniMedVQA](https://huggingface.co/datasets/foreverbeliever/OmniMedVQA) dataset, bundled in [HuatuoGPT-Vision](https://huggingface.co/datasets/FreedomIntelligence/Medical_Multimodal_Evaluation_Data).

```python
question = {"image": ['images/successful_cases/mdb146.png'], "problem": "What content appears in this image?\nA) Cardiac tissue\nB) Breast tissue\nC) Liver tissue\nD) Skin tissue", "solution": "B", "answer": "Breast tissue"}

question = {"image": ["images/successful_cases/person19_virus_50.jpeg"], "problem": "What content appears in this image?\nA) Lungs\nB) Bladder\nC) Brain\nD) Heart", "solution": "A", "answer": "Lungs"}

question = {"image":["images/successful_cases/abd-normal023599.png"],"problem":"Is any abnormality evident in this image?\nA) No\nB) Yes.","solution":"A","answer":"No"}

question = {"image":["images/successful_cases/foot089224.png"],"problem":"Which imaging technique was utilized for acquiring this image?\nA) MRI\nB) Electroencephalogram (EEG)\nC) Ultrasound\nD) Angiography","solution":"A","answer":"MRI"}

question = {"image":["images/successful_cases/knee031316.png"],"problem":"What can be observed in this image?\nA) Chondral abnormality\nB) Bone density loss\nC) Synovial cyst formation\nD) Ligament tear","solution":"A","answer":"Chondral abnormality"}

question = {"image":["images/successful_cases/shoulder045906.png"],"problem":"What can be visually detected in this picture?\nA) Bone fracture\nB) Soft tissue fluid\nC) Blood clot\nD) Tendon tear","solution":"B","answer":"Soft tissue fluid"}

question = {"image":["images/successful_cases/brain003631.png"],"problem":"What attribute can be observed in this image?\nA) Focal flair hyperintensity\nB) Bone fracture\nC) Vascular malformation\nD) Ligament tear","solution":"A","answer":"Focal flair hyperintensity"}

question = {"image":["images/successful_cases/mrabd005680.png"],"problem":"What can be observed in this image?\nA) Pulmonary embolism\nB) Pancreatic abscess\nC) Intraperitoneal mass\nD) Cardiac tamponade","solution":"C","answer":"Intraperitoneal mass"}
```
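The `image` paths above are relative to this repository, so they assume the `images/` folder is available locally (for example, after cloning the repo). If you want to try the model on your own data instead, a question dict with the same fields works; the path and options below are placeholders introduced here for illustration, not part of the original examples:

```python
# Hypothetical example: plug in your own image and multiple-choice question.
question = {
    "image": ["/path/to/your_scan.png"],  # local path to your image
    "problem": "Which imaging modality is shown?\nA) MRI\nB) CT\nC) Ultrasound\nD) X-ray",
    "solution": "B",   # ground-truth letter (optional, only needed for checking)
    "answer": "CT",    # ground-truth text (optional)
}
```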
### 3. Run the inference

```python
QUESTION_TEMPLATE = """
{Question}
Your task:
1. Think through the question step by step, enclose your reasoning process in <think>...</think> tags.
2. Then provide the correct single-letter choice (A, B, C, D,...) inside <answer>...</answer> tags.
3. No extra information or text outside of these tags.
"""

message = [{
    "role": "user",
    "content": [
        {"type": "image", "image": f"file://{question['image'][0]}"},
        {"type": "text", "text": QUESTION_TEMPLATE.format(Question=question['problem'])},
    ],
}]

# Build the chat prompt and the vision inputs expected by Qwen2-VL.
text = processor.apply_chat_template(message, tokenize=False, add_generation_prompt=True)

image_inputs, video_inputs = process_vision_info(message)
inputs = processor(
    text=text,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=False, generation_config=temp_generation_config)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]

output_text = processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(f'model output: {output_text[0]}')
```
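To score the prediction automatically, you can extract the letter inside the `<answer>` tags and compare it with the `solution` field of the question dict. This small check is an addition to the original instructions and assumes the model follows the requested output format:

```python
import re

# Pull the single-letter choice out of <answer>...</answer>.
match = re.search(r"<answer>\s*([A-Z])", output_text[0])
predicted = match.group(1) if match else None

print(f"predicted: {predicted} | ground truth: {question['solution']} | correct: {predicted == question['solution']}")
```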
### Failure cases
MedVLM-R1's reasoning can fail on more difficult VQA examples. Although it outputs the correct choice for the examples below, its reasoning is either superficial or contradictory.
```python
question = {"image":["images/failure_cases/mrabd021764.png"],"problem":"What is the observable finding in this image?\nA) Brain lesion\nB) Intestinal lesion\nC) Gallbladder lesion\nD) Pancreatic lesion","solution":"D","answer":"Pancreatic lesion"}

question = {"image":["images/failure_cases/spine010017.png"],"problem":"What can be observed in this image?\nA) Cystic lesions\nB) Fractured bones\nC) Inflamed tissue\nD) Nerve damage","solution":"A","answer":"Cystic lesions"}

question = {"image":["images/failure_cases/ankle056120.png"],"problem":"What attribute can be observed in this image?\nA) Bursitis\nB) Flexor pathology\nC) Tendonitis\nD) Joint inflammation","solution":"B","answer":"Flexor pathology"}

question = {"image":["images/failure_cases/lung067009.png"],"problem":"What is the term for the anomaly depicted in the image?\nA) Pulmonary embolism\nB) Airspace opacity\nC) Lung consolidation\nD) Atelectasis","solution":"B","answer":"Airspace opacity"}
```
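To inspect the reasoning on several cases in one pass, the inference steps from section 3 can be wrapped in a small helper. The sketch below assumes `model`, `processor`, `temp_generation_config`, and `QUESTION_TEMPLATE` from the earlier sections are already defined; `run_inference` and `failure_cases` are names introduced here for illustration, not part of this repository:

```python
import re

def run_inference(question: dict) -> str:
    """Run one VQA dict through the same pipeline as section 3 and return the decoded output."""
    message = [{
        "role": "user",
        "content": [
            {"type": "image", "image": f"file://{question['image'][0]}"},
            {"type": "text", "text": QUESTION_TEMPLATE.format(Question=question['problem'])},
        ],
    }]
    text = processor.apply_chat_template(message, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(message)
    inputs = processor(text=text, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt").to("cuda")
    generated_ids = model.generate(**inputs, use_cache=True, generation_config=temp_generation_config)
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

failure_cases = [question]  # replace with a list of the failure-case dicts above

# Print only the <think> portion of each output to judge the reasoning quality.
for q in failure_cases:
    out = run_inference(q)
    think = re.search(r"<think>(.*?)</think>", out, re.DOTALL)
    print(q["image"][0], "->", think.group(1).strip() if think else "(no <think> block found)")
```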

# <span id="Start">Acknowledgement</span>
We thank all machine learning and medical researchers who make their codebases and datasets publicly available to the community 🫶🫶🫶

If you find our work helpful, feel free to cite us:

```
@article{pan2025medvlm,
  title={MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning},
  author={Pan, Jiazhen and Liu, Che and Wu, Junde and Liu, Fenglin and Zhu, Jiayuan and Li, Hongwei Bran and Chen, Chen and Ouyang, Cheng and Rueckert, Daniel},
  journal={arXiv preprint arXiv:2502.19634},
  year={2025}
}
```