This is the official data repository for [RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining](https://www.arxiv.org/abs/2503.04653).
We mine image-paired reports to extract findings on diverse anatomical structures, and quantify multi-grained image-image relevance via [RaTEScore](https://arxiv.org/abs/2406.16845).
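RaTEScore itself is pip-installable; as a rough illustration (a sketch assuming the standalone package's documented `RaTEScore().compute_score(candidates, references)` interface, which is not part of this repository), the relevance between two findings can be computed as:

```python
# pip install RaTEScore
from RaTEScore import RaTEScore  # assumed interface of the standalone pip package

# Two hypothetical findings to compare.
candidate = ['Mild degenerative changes in the thoracic spine.']
reference = ['Degenerative osteophytes along the thoracic vertebrae.']

# compute_score takes two lists of texts and returns one score per pair.
scorer = RaTEScore()
print(scorer.compute_score(candidate, reference))
```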
Specifically, we extend two public datasets for the multi-grained medical image retrieval task:
- MIMIC-IR is extended from [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/), containing 377,110 images and x anatomical structures.
- CTRATE-IR is extended from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), containing 25,692 images and 48 anatomical structures.
A simple demo for reading data from CTRATE-IR:
```python
import pandas as pd
import numpy as np

anatomy_condition = 'bone'
sample_A_idx = 10
sample_B_idx = 20

# Each anatomy-specific CSV stores one row per sample: its id and the findings
# extracted for that structure.
df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
findings_ls = df.iloc[:, 1].tolist()

# The matching .npy file is a pairwise relevance matrix over the same samples,
# indexed in CSV row order.
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
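Since each `.npy` file stores a full pairwise relevance matrix, it can serve directly as retrieval ground truth. A minimal sketch, reusing the files above (and assuming rows follow the CSV row order, as in the demo), that ranks the samples most relevant to a query:

```python
import pandas as pd
import numpy as np

anatomy_condition = 'bone'
query_idx = 10
top_k = 5

df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

# Rank every other sample by its relevance to the query row.
scores = simi_tab[query_idx].astype(float)
scores[query_idx] = -np.inf  # exclude the query itself
topk_idx = np.argsort(scores)[::-1][:top_k]

for rank, idx in enumerate(topk_idx, start=1):
    print(f'#{rank}: sample {id_ls[idx]} (relevance {scores[idx]:.3f})')
```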
We also provide whole-image relevance, quantified from the entire reports:
```python
import os
import json
import numpy as np

sample_A_idx = 10
sample_B_idx = 20

# Each JSONL line holds one sample's image path and its full report text.
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(l) for l in f]

# Pairwise relevance matrix computed over the entire reports.
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')

sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])

sample_A_report = data[sample_A_idx]['text']
sample_B_report = data[sample_B_idx]['text']

print(f'Sample {sample_A_id} report: {sample_A_report}\n')
print(f'Sample {sample_B_id} report: {sample_B_report}\n')
print(f'Whole image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
For the raw image data, you can download it from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE) (or [RadGenome-ChestCT](https://huggingface.co/datasets/RadGenome/RadGenome-ChestCT)) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/). We keep all sample ids consistent, so you can easily match our records to the raw files.
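For example, a minimal sketch of locating the raw file for a CTRATE-IR sample; the `CTRATE_ROOT` path below is a placeholder for your local download location:

```python
import os
import json

CTRATE_ROOT = '/path/to/CT-RATE'  # placeholder: wherever you downloaded the raw data

with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(l) for l in f]

sample_id = os.path.basename(data[10]['img_path'])

# Walk the download directory for the file whose name matches the sample id.
for root, _, files in os.walk(CTRATE_ROOT):
    if sample_id in files:
        print(os.path.join(root, sample_id))
        break
```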