Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 - medical
 ---
 ## LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching (NeurIPS 2023).
-We release [LVM-Med](https://arxiv.org/abs/2306.11925)'s pre-trained models and demonstrate downstream tasks on 2D-3D segmentations, linear/fully fine-tuned image classification, and object detection.
+We release [LVM-Med](https://arxiv.org/abs/2306.11925)'s pre-trained models in PyTorch and demonstrate downstream tasks on 2D-3D segmentations, linear/fully fine-tuned image classification, and object detection.

 LVM-Med was trained on ~1.3 million medical images collected from 55 datasets using a second-order graph matching formulation unifying
 current contrastive and instance-based SSL.
@@ -37,11 +37,11 @@ current contrastive and instance-based SSL.
 * [License](#license)

 ## News
--
--
--
--
-- **31/07/2023**: Release ONNX support for LVM-Med ResNet50 and LVM-Med ViT as backbones in
+- **14/12/2023**: The LVM-Med training algorithm is ready to be released! Please send us an email to request it!
+- If you would like another architecture, send us a request by email or create an Issue. If there are enough requests, we will train it.
+- Coming soon: ConvNeXt architecture trained by LVM-Med.
+- Coming soon: ViT architectures for end-to-end segmentation with better performance than reported in the paper.
+- **31/07/2023**: Release ONNX support for LVM-Med ResNet50 and LVM-Med ViT as backbones in the `onnx_model` folder.
 - **26/07/2023**: We release ViT architectures (**ViT-B** and **ViT-H**) initialized from LVM-Med and further trained on the LIVECell dataset with 1.6 million high-quality cells. See this [table](#further-training-lvm-med-on-large-dataset).
 - **25/06/2023**: We release two pre-trained models of LVM-Med, ResNet-50 and ViT-B, and provide scripts for downstream tasks.

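For the ONNX release noted in the hunk above, here is a minimal, non-authoritative sketch of running one of the exported backbones with `onnxruntime`; the file name inside `onnx_model/` and the 224x224 input size are assumptions, not taken from the repo.

```python
# Rough sketch (assumptions: the exported backbone's file name and input
# resolution are placeholders; adjust them to the actual export).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "onnx_model/lvmmed_resnet50.onnx",  # assumed file name
    providers=["CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

# Dummy batch: 1 x 3 x 224 x 224 float32 image tensor (assumed input shape).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
features = session.run(None, {input_name: dummy})[0]
print(features.shape)
```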
@@ -75,7 +75,7 @@ After downloading the pre-trained models, please place them in [`lvm_med_weights
 - For **ResNet-50**, we demo **end-to-end** segmentation/classification/object detection.
 - For **ViT-B**, we demo **prompt-based** segmentation using bounding boxes.

-**Important Note:** please check[```dataset.md```](https://github.com/duyhominhnguyen/LVM-Med/blob/main/lvm-med-training-data/README.md) to avoid potentially leaking test data when using our model.
+**Important Note:** please check [```dataset.md```](https://github.com/duyhominhnguyen/LVM-Med/blob/main/lvm-med-training-data/README.md) to avoid potentially leaking test data when using our model.

 **Segment Anything Model-related Experiments**
 - For all experiments using the [SAM](https://github.com/facebookresearch/segment-anything) model, we use the base SAM architecture, `sam_vit_b`. You can download this pre-trained weight from the [`original repo`](https://github.com/facebookresearch/segment-anything) and place it at [`./working_dir/sam_vit_b_01ec64.pth`](./working_dir/) so the yaml configs resolve it properly.
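For the SAM setup described in the last hunk, a minimal sketch of loading the `sam_vit_b` checkpoint from `./working_dir/` with the standard `segment-anything` package and running a bounding-box prompt; the image and box values below are placeholders.

```python
# Minimal sketch using the standard `segment-anything` API; the image and
# box below are placeholders.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the base SAM (ViT-B) weights from the path the yaml configs expect.
sam = sam_model_registry["vit_b"](checkpoint="./working_dir/sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Placeholder RGB image (H x W x 3, uint8) and one bounding-box prompt in XYXY format.
image = np.zeros((256, 256, 3), dtype=np.uint8)
box = np.array([50, 50, 200, 200])

predictor.set_image(image)
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)
```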
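For the PyTorch pre-trained models mentioned in the first hunk, a rough sketch of loading the released ResNet-50 weights into a torchvision backbone; the checkpoint file name under `lvm_med_weights/` is hypothetical and the state_dict layout may differ, hence `strict=False`.

```python
# Rough sketch (assumptions: the weights are a torchvision-compatible
# state_dict; the file name under lvm_med_weights/ is hypothetical).
import torch
from torchvision.models import resnet50

backbone = resnet50(weights=None)  # random init, overwritten below
state_dict = torch.load("lvm_med_weights/lvmmed_resnet.torch", map_location="cpu")

# strict=False tolerates key mismatches (e.g., no classification head in the checkpoint).
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```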