timeseed committed · Commit f06313c · verified · 1 Parent(s): 762caea

update MIMIC-IR section

Files changed (1): README.md (+45, -8)

README.md, changed regions before this commit (removed lines marked with "-"):
@@ -7,31 +7,47 @@ This is the official data repository for [RadIR: A Scalable Framework for Multi-

 We mine image-paired reports to extract findings on diverse anatomical structures and quantify multi-grained image-image relevance via [RaTEScore](https://arxiv.org/abs/2406.16845).
 Specifically, we have extended two public datasets for the multi-grained medical image retrieval task:
- - MIMIC-IR is extended from [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/), containing 377,110 images and x anatomy structures.
 - CTRATE-IR is extended from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), containing 25,692 images and 48 anatomy structures.

- A simple demo to read the data from CTRATE-IR:
 ```
 import pandas as pd
 import numpy as np

 anatomy_condition = 'bone'
 sample_A_idx = 10
 sample_B_idx = 20
-
 df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
 id_ls = df.iloc[:,0].tolist()
 findings_ls = df.iloc[:,1].tolist()

 simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

 print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
 print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
 print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
 ```

 Note that the scores have been normalized to 0-100 and stored as uint8. We also provide whole-image relevance, quantified from the entire reports:
- ```
 import os
 import json
 import numpy as np
@@ -39,21 +55,42 @@ import numpy as np
 sample_A_idx = 10
 sample_B_idx = 20

 with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
     data = f.readlines()
 data = [json.loads(l) for l in data]
-
 simi_tab = np.load(f'CTRATE-IR/CT_train_ratescore.npy')
-
 sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
 sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])
-
 sample_A_report = os.path.basename(data[sample_A_idx]['text'])
 sample_B_report = os.path.basename(data[sample_B_idx]['text'])

 print(f'Sample {sample_A_id} reports: {sample_A_report}\n')
 print(f'Sample {sample_B_id} reports: {sample_B_report}\n')
 print(f'Whole image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
 ```

- For raw image data, you can download it from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE) (or [RadGenome-ChestCT](https://huggingface.co/datasets/RadGenome/RadGenome-ChestCT)) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/). We keep all sample IDs consistent so you can easily find the corresponding images.

README.md, the same region after this commit (lines 7-96):

We mine image-paired reports to extract findings on diverse anatomical structures and quantify multi-grained image-image relevance via [RaTEScore](https://arxiv.org/abs/2406.16845).
Specifically, we have extended two public datasets for the multi-grained medical image retrieval task:
- MIMIC-IR is extended from [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/), containing 377,110 images and 90 anatomy structures.
- CTRATE-IR is extended from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE), containing 25,692 images and 48 anatomy structures.
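
Each anatomy structure comes with its own findings CSV and relevance matrix (see the demos below). To check which structures are available locally, a minimal sketch, assuming one CSV per structure under the paths used in the demos:

```python
import glob
import os

# Enumerate the per-anatomy CSV files shipped with CTRATE-IR
# (MIMIC-IR is analogous, with MIMIC-IR/anatomy/train_caption/*.csv).
csv_paths = glob.glob('CTRATE-IR/anatomy/train_entity/*.csv')
anatomies = sorted(os.path.splitext(os.path.basename(p))[0] for p in csv_paths)
print(f'{len(anatomies)} anatomy structures found, e.g.: {anatomies[:5]}')
```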

**Note:** For the MIMIC-IR dataset, you need to manually merge and decompress the files.
After downloading all split parts (from `MIMIC-IR.tar.gz.part00` to `MIMIC-IR.tar.gz.part08`), execute the following commands in the same directory:
```
cat MIMIC-IR.tar.gz.part* > MIMIC-IR.tar.gz
tar xvzf MIMIC-IR.tar.gz
```
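
If `cat` and `tar` are not available (for example on Windows), the same merge-and-extract step can be done with the Python standard library. A minimal sketch, assuming the part files sit in the current working directory as in the shell commands above:

```python
import glob
import shutil
import tarfile

# Concatenate the split parts (part00 ... part08) in order into a single archive.
parts = sorted(glob.glob('MIMIC-IR.tar.gz.part*'))
with open('MIMIC-IR.tar.gz', 'wb') as merged:
    for part in parts:
        with open(part, 'rb') as f:
            shutil.copyfileobj(f, merged)

# Extract the merged archive, equivalent to `tar xvzf MIMIC-IR.tar.gz`.
with tarfile.open('MIMIC-IR.tar.gz', 'r:gz') as tar:
    tar.extractall('.')
```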

A simple demo to read the data from CTRATE-IR:
```python
import pandas as pd
import numpy as np

# CTRATE-IR
anatomy_condition = 'bone'
sample_A_idx = 10
sample_B_idx = 20
df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:,0].tolist()
findings_ls = df.iloc[:,1].tolist()

simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

# MIMIC-IR (uncomment the lines below to read MIMIC-IR instead)
# anatomy_condition = 'lungs'
# sample_A_idx = 10
# sample_B_idx = 20
# df = pd.read_csv(f'MIMIC-IR/anatomy/train_caption/{anatomy_condition}.csv')
# id_ls = df.iloc[:,0].tolist()
# findings_ls = df.iloc[:,1].tolist()
# simi_tab = np.load(f'MIMIC-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

print(f'Sample {id_ls[sample_A_idx]} findings on {anatomy_condition}: {findings_ls[sample_A_idx]}')
print(f'Sample {id_ls[sample_B_idx]} findings on {anatomy_condition}: {findings_ls[sample_B_idx]}')
print(f'Relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
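
The pairwise scores can also be used for retrieval rather than one-off comparisons: rank every sample against a query by its row in the relevance matrix. A minimal sketch for CTRATE-IR, reusing the file layout above (the top-k ranking itself is our illustration, not something shipped with the dataset):

```python
import numpy as np
import pandas as pd

anatomy_condition = 'bone'
query_idx = 10   # index of the query sample, as in the demo above
k = 5

df = pd.read_csv(f'CTRATE-IR/anatomy/train_entity/{anatomy_condition}.csv')
id_ls = df.iloc[:, 0].tolist()
findings_ls = df.iloc[:, 1].tolist()
simi_tab = np.load(f'CTRATE-IR/anatomy/train_ratescore/{anatomy_condition}.npy')

scores = simi_tab[query_idx].astype(np.float32)  # copy the query row; raw values are uint8 in [0, 100]
scores[query_idx] = -1                           # exclude the query itself from the ranking
topk = np.argsort(-scores)[:k]                   # indices of the k most relevant samples

print(f'Query {id_ls[query_idx]}: {findings_ls[query_idx]}')
for rank, idx in enumerate(topk, start=1):
    print(f'{rank}. {id_ls[idx]} (score {simi_tab[query_idx, idx]}): {findings_ls[idx]}')
```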

Note that the scores have been normalized to 0-100 and stored as uint8. We also provide whole-image relevance, quantified from the entire reports:
```python
import os
import json
import numpy as np

sample_A_idx = 10
sample_B_idx = 20

# CTRATE-IR
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = f.readlines()
data = [json.loads(l) for l in data]
simi_tab = np.load(f'CTRATE-IR/CT_train_ratescore.npy')
sample_A_id = os.path.basename(data[sample_A_idx]['img_path'])
sample_B_id = os.path.basename(data[sample_B_idx]['img_path'])
sample_A_report = os.path.basename(data[sample_A_idx]['text'])
sample_B_report = os.path.basename(data[sample_B_idx]['text'])

# MIMIC-IR (uncomment the lines below to read MIMIC-IR instead; requires pandas as pd)
# data = pd.read_csv('MIMIC-IR/val_caption.csv')
# simi_tab = np.load('MIMIC-IR/val_ratescore.npy')
# sample_A_id = data.iloc[sample_A_idx]['File Path']
# sample_B_id = data.iloc[sample_B_idx]['File Path']
# sample_A_report = data.iloc[sample_A_idx]['Findings']
# sample_B_report = data.iloc[sample_B_idx]['Findings']

print(f'Sample {sample_A_id} reports: {sample_A_report}\n')
print(f'Sample {sample_B_id} reports: {sample_B_report}\n')
print(f'Whole image relevance score: {simi_tab[sample_A_idx, sample_B_idx]}')
```
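
Because both tables are stored as uint8 scores in 0-100, downstream code often wants them as floats in [0, 1], and it is worth checking that the matrix is aligned with the metadata file it accompanies. A minimal sketch for the whole-image CTRATE-IR table, using the same files as above (the checks reflect how the demo indexes the data, not a documented guarantee):

```python
import json
import numpy as np

with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    data = [json.loads(line) for line in f]

simi_tab = np.load('CTRATE-IR/CT_train_ratescore.npy')

# One row/column per JSONL entry, stored as uint8 scores in 0-100.
print('matrix shape:', simi_tab.shape, 'dtype:', simi_tab.dtype, 'entries:', len(data))
assert simi_tab.shape[0] == simi_tab.shape[1] == len(data)

# Convert to float relevance in [0, 1] for use in training or evaluation code.
relevance = simi_tab.astype(np.float32) / 100.0
print('relevance range:', relevance.min(), '-', relevance.max())
```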

For raw image data, you can download it from [CTRATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE) (or [RadGenome-ChestCT](https://huggingface.co/datasets/RadGenome/RadGenome-ChestCT)) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/). We keep all sample IDs consistent so you can easily find the corresponding images.
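
The exact folder layout of the raw downloads is up to you, so as an illustration of how the consistent IDs can be used, here is a minimal sketch that looks up the file for one CTRATE-IR sample in a local copy of the raw data (the root path is a placeholder for wherever you stored it):

```python
import json
import os

raw_root = '/path/to/CT-RATE'  # placeholder: local copy of the raw CT-RATE download

# Take one sample ID from the CTRATE-IR metadata (same file as in the demo above).
with open('CTRATE-IR/train_filtered.jsonl', 'r') as f:
    sample = json.loads(f.readline())
sample_id = os.path.basename(sample['img_path'])

# Walk the raw dataset and collect files whose name contains that ID.
matches = [os.path.join(dirpath, name)
           for dirpath, _, filenames in os.walk(raw_root)
           for name in filenames
           if sample_id in name]

print(f'Found {len(matches)} file(s) for {sample_id}')
for path in matches:
    print(' ', path)
```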

**Citation**

If you find our data useful, please consider citing our work!
```
@article{zhang2025radir,
  title={RadIR: A Scalable Framework for Multi-Grained Medical Image Retrieval via Radiology Report Mining},
  author={Zhang, Tengfei and Zhao, Ziheng and Wu, Chaoyi and Zhou, Xiao and Zhang, Ya and Wang, Yangfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2503.04653},
  year={2025}
}
```