---
tags:
- medical
---

This is the official data repository for [RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining](https://www.arxiv.org/abs/2503.04653).

To build RadIR with the data we provide, please refer to this [repo](https://github.com/MAGIC-AI4Med/RadIR).

We mine image-paired reports to extract findings on diverse anatomical structures, and quantify multi-grained image-image relevance via [RaTEScore](https://arxiv.org/abs/2406.16845).
Specifically, we have extended two public datasets for the multi-grained medical image retrieval task:
- MIMIC-IR is extended from [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/), containing 377,110 images and 90 anatomical structures.
- CTRATE-IR is extended from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), containing 25,692 images and 48 anatomical structures.


**Note:** For the MIMIC-IR dataset, you need to manually merge and decompress the files.  
After downloading all split parts (from `MIMIC-IR.tar.gz.part00` to `MIMIC-IR.tar.gz.part08`), execute the following commands in the same directory:
```bash
cat MIMIC-IR.tar.gz.part* > MIMIC-IR.tar.gz
tar xvzf MIMIC-IR.tar.gz
```

This example demonstrates how to read data from either the MIMIC-IR or CTRATE-IR datasets. You can switch between datasets by commenting/uncommenting the relevant sections.

```python
import pandas as pd
import numpy as np

# Indices of the two samples to compare
sample_A_idx = 10
sample_B_idx = 20

# CTRATE-IR
anatomy_condition = 'bone'
# First column: sample ID; second column: extracted findings for this anatomy
df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
findings_ls = df.iloc[:, 1].tolist()
# Pairwise relevance matrix, indexed the same way as the CSV rows
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

# # MIMIC-IR
# anatomy_condition = 'lungs'
# df = pd.read_csv(f'MIMIC-IR/anatomy/train_caption/{anatomy_condition}.csv')
# id_ls = df.iloc[:, 0].tolist()
# findings_ls = df.iloc[:, 1].tolist()
# simi_tab = np.load(f'MIMIC-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```

**Note:** the scores have been normalized to 0-100 and stored as uint8.
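Because each anatomy-level matrix is indexed the same way as its CSV, retrieval reduces to sorting a row of `simi_tab`. Below is a minimal sketch; the `top_k_relevant` helper is our illustration (not part of the released code) and assumes the variables from the snippet above are in scope:

```python
# Hypothetical helper: rank the k most relevant samples for a query index.
# Assumes `simi_tab`, `id_ls`, and `np` from the snippet above.
def top_k_relevant(query_idx, k=5):
    scores = simi_tab[query_idx].astype(np.float32) / 100.0  # uint8 back to 0-1
    ranked = np.argsort(-scores)          # indices sorted by descending relevance
    ranked = ranked[ranked != query_idx]  # drop the query itself
    return [(id_ls[i], float(scores[i])) for i in ranked[:k]]

for sample_id, score in top_k_relevant(sample_A_idx):
    print(f'{sample_id}: {score:.2f}')
```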

We also provide whole-image-level relevance, quantified from the entire reports:
```python
import os
import json
import numpy as np
import pandas as pd  # needed for the MIMIC-IR variant below

sample_A_idx = 10
sample_B_idx = 20

# CTRATE-IR
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(l) for l in f]
# Pairwise whole-report relevance matrix
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')
sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])
sample_A_report = data[sample_A_idx]['text']
sample_B_report = data[sample_B_idx]['text']

# # MIMIC-IR
# data = pd.read_csv('MIMIC-IR/val_caption.csv')
# simi_tab = np.load('MIMIC-IR/val_ratescore.npy')
# sample_A_id = data.iloc[sample_A_idx]['File Path']
# sample_B_id = data.iloc[sample_B_idx]['File Path']
# sample_A_report = data.iloc[sample_A_idx]['Findings']
# sample_B_report = data.iloc[sample_B_idx]['Findings']

print(f'Sample {sample_A_id} report: {sample_A_report}\n')
print(f'Sample {sample_B_id} report: {sample_B_report}\n')
print(f'Whole image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
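The relevance matrices are plain `.npy` files, so if one is too large to hold in memory, NumPy's standard memory mapping can read rows on demand. A minimal sketch (the file name follows the snippet above; `mmap_mode` is a standard `np.load` parameter):

```python
import numpy as np

# Memory-map the relevance matrix instead of loading it fully;
# data is read from disk only when a row is accessed.
simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy', mmap_mode='r')

# Only this one row is materialized in memory.
row = np.asarray(simi_tab[10])
print(row.shape, row.dtype)
```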

For the raw image data, you can download it from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE) (or [RadGenome-ChestCT](https://huggingface.co/datasets/RadGenome/RadGenome-ChestCT)) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/). We keep all sample IDs consistent, so you can easily match our records to the original files.
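Since the IDs match the original file names, a recursive search is enough to locate a sample's raw image once downloaded. A minimal sketch (`raw_images/` is our placeholder for your local download directory; `sample_A_id` comes from the snippets above):

```python
from pathlib import Path

# Recursively search the download directory for the file named by the sample ID.
matches = list(Path('raw_images').rglob(sample_A_id))
print(matches[0] if matches else f'{sample_A_id} not found')
```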


**Citation**

If you find our data useful, please cite our work:
```
@article{zhang2025radir,
  title={RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining},
  author={Zhang, Tengfei and Zhao, Ziheng and Wu, Chaoyi and Zhou, Xiao and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2503.04653},
  year={2025}
}
```