Commit 78e036c

[Project] Medical semantic seg dataset: orvs (#2728)

1 parent 6333dc1 · commit 78e036c

7 files changed: +315 −0 lines changed

Lines changed: 140 additions & 0 deletions
# ORVS (Online Retinal image for Vessel Segmentation)

## Description

This project supports **`ORVS (Online Retinal image for Vessel Segmentation)`**, which can be downloaded from [here](https://opendatalab.org.cn/ORVS).

### Dataset Overview

The ORVS dataset is a new dataset established through a collaboration between the Department of Computer Science and the Department of Vision Science at the University of Calgary. The dataset contains 49 images collected from a clinic in Calgary, Canada, consisting of 42 training images and 7 testing images. All images were obtained using a Zeiss Visucam 200 with a 30-degree field of view (FOV). The image size is 1444×1444 pixels with 24 bits per pixel. The images are stored in JPEG format with low compression, which is common in ophthalmic practice. All images were manually traced by an expert who has been working in the field of retinal image analysis and was trained to mark all pixels belonging to retinal vessels. The Windows Paint 3D tool was used for manual annotation.
### Original Statistics

| Dataset name                             | Anatomical region | Task type    | Modality           | Num. Classes | Train/Val/Test Images | Train/Val/Test Labeled | Release Date | License |
| ---------------------------------------- | ----------------- | ------------ | ------------------ | ------------ | --------------------- | ---------------------- | ------------ | ------- |
| [ORVS](https://opendatalab.org.cn/ORVS)  | eye               | segmentation | fundus photography | 2            | 130/-/72              | yes/-/yes              | 2020         | -       |

| Class Name | Num. Train | Pct. Train | Num. Val | Pct. Val | Num. Test | Pct. Test |
| :--------: | :--------: | :--------: | :------: | :------: | :-------: | :-------: |
| background |    130     |   94.83    |    -     |    -     |    72     |   94.25   |
|   vessel   |    130     |    5.17    |    -     |    -     |    72     |   5.75    |

Note:

- `Pct` means the percentage of pixels of this category among all pixels.
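The `Pct` columns can be recomputed from the converted masks. Below is a minimal sketch (assuming the prepared `data/` layout described under Dataset Preparation, where mask pixels are 0 for background and 1 for vessel):

```python
import glob

import numpy as np
from PIL import Image

# Accumulate per-class pixel counts over all training masks.
counts = np.zeros(2, dtype=np.int64)
for path in glob.glob('data/masks/train/*.png'):
    mask = np.array(Image.open(path))
    counts += np.bincount(mask.ravel(), minlength=2)[:2]

for name, n in zip(('background', 'vessel'), counts):
    print(f'{name}: {100 * n / counts.sum():.2f}%')
```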
### Visualization

![orvs](https://raw.githubusercontent.com/uni-medical/medical-datasets-visualization/main/2d/semantic_seg/fundus_photography/orvs/ORVS_dataset.png)
### Prerequisites

- Python v3.8
- PyTorch v1.10.0
- [MIM](https://github.com/open-mmlab/mim) v0.3.4
- [MMCV](https://github.com/open-mmlab/mmcv) v2.0.0rc4
- [MMEngine](https://github.com/open-mmlab/mmengine) v0.2.0 or higher
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) v1.0.0rc5

All the commands below rely on the correct configuration of `PYTHONPATH`, which should point to the project's directory so that Python can locate the module files. In the `orvs/` root directory, run the following line to add the current directory to `PYTHONPATH`:

```shell
export PYTHONPATH=`pwd`:$PYTHONPATH
```
### Dataset Preparation

- Clone the [ICPRVessels repository](https://github.com/AbdullahSarhan/ICPRVessels), then move `Vessels-Datasets` to `data/`.
- Run `python tools/prepare_dataset.py` to format the data and arrange the folder structure as below.
- Run `python ../../tools/split_seg_dataset.py` to split the dataset and generate `train.txt`, `val.txt` and `test.txt`. If the labels of the official validation and test sets cannot be obtained, `train.txt` and `val.txt` are generated randomly from the training set. A quick sanity check is sketched after the tree below.

```none
mmsegmentation
├── mmseg
├── projects
│ ├── medical
│ │ ├── 2d_image
│ │ │ ├── fundus_photography
│ │ │ │ ├── orvs
│ │ │ │ │ ├── configs
│ │ │ │ │ ├── datasets
│ │ │ │ │ ├── tools
│ │ │ │ │ ├── data
│ │ │ │ │ │ ├── train.txt
│ │ │ │ │ │ ├── test.txt
│ │ │ │ │ │ ├── images
│ │ │ │ │ │ │ ├── train
│ │ │ │ │ │ │ │ ├── xxx.png
│ │ │ │ │ │ │ │ ├── ...
│ │ │ │ │ │ │ │ └── xxx.png
│ │ │ │ │ │ ├── masks
│ │ │ │ │ │ │ ├── train
│ │ │ │ │ │ │ │ ├── xxx.png
│ │ │ │ │ │ │ │ ├── ...
│ │ │ │ │ │ │ │ └── xxx.png
```
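Before training, it is worth checking that every converted image has a matching mask. A minimal sanity-check sketch, assuming the layout above:

```python
import os

# Each split's images/ and masks/ directories should contain the same
# file names after tools/prepare_dataset.py has run.
for split in ('train', 'test'):
    imgs = set(os.listdir(f'data/images/{split}'))
    masks = set(os.listdir(f'data/masks/{split}'))
    assert imgs == masks, f'{split}: unmatched files: {sorted(imgs ^ masks)}'
print('images and masks are paired for all splits')
```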
### Training commands

Train models on a single server with one GPU.

```shell
mim train mmseg ./configs/${CONFIG_FILE}
```

### Testing commands

Test models on a single server with one GPU.

```shell
mim test mmseg ./configs/${CONFIG_FILE} --checkpoint ${CHECKPOINT_PATH}
```
<!-- List the results as usually done in other model's README. [Example](https://github.com/open-mmlab/mmsegmentation/tree/dev-1.x/configs/fcn#results-and-models)

You should claim whether this is based on the pre-trained weights, which are converted from the official release; or it's a reproduced result obtained from retraining the model in this project. -->

## Dataset Citation

If this work is helpful for your research, please consider citing the paper below.

```
@inproceedings{sarhan2021transfer,
  title={Transfer learning through weighted loss function and group normalization for vessel segmentation from retinal images},
  author={Sarhan, Abdullah and Rokne, Jon and Alhajj, Reda and Crichton, Andrew},
  booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
  pages={9211--9218},
  year={2021},
  organization={IEEE}
}
```
## Checklist

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - [x] Finish the code
  - [x] Basic docstrings & proper citation
  - [ ] Test-time correctness
  - [x] A full README

- [ ] Milestone 2: Indicates a successful model implementation.

  - [ ] Training-time correctness

- [ ] Milestone 3: Good to be a part of our core package!

  - [ ] Type hints and docstrings
  - [ ] Unit tests
  - [ ] Code polishing
  - [ ] Metafile.yml
  - [ ] Move your modules into the core package following the codebase's file hierarchy structure.
  - [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
Lines changed: 17 additions & 0 deletions
_base_ = [
    './orvs_512x512.py', 'mmseg::_base_/models/fcn_unet_s5-d16.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.orvs_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.0001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
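The three configs in this project differ only in the learning rate (0.0001, 0.001 and 0.01). To inspect how MMEngine merges the `_base_` chain, a minimal sketch is shown below; it assumes `mmsegmentation` is installed via MIM so that the `mmseg::` scope resolves, and the config file name is hypothetical, so substitute the actual file under `configs/`:

```python
from mmengine.config import Config

# Hypothetical file name; substitute the actual config under configs/.
cfg = Config.fromfile('configs/orvs_fcn_unet_lr1e-4.py')

# Values set in this file override the inherited bases.
print(cfg.optim_wrapper.optimizer.lr)     # 0.0001
print(cfg.model.decode_head.num_classes)  # 2
```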
Lines changed: 17 additions & 0 deletions
_base_ = [
    './orvs_512x512.py', 'mmseg::_base_/models/fcn_unet_s5-d16.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.orvs_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 17 additions & 0 deletions
_base_ = [
    './orvs_512x512.py', 'mmseg::_base_/models/fcn_unet_s5-d16.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.orvs_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.01)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 42 additions & 0 deletions
dataset_type = 'ORVSDataset'
data_root = 'data/'
img_scale = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='train.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=train_pipeline))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='test.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=test_pipeline))
test_dataloader = val_dataloader
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])
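To exercise this dataset config outside a full training run, the training set can be built directly from it. A sketch, assuming the file is saved as `configs/orvs_512x512.py` (the name referenced by the `_base_` lists above), run from the `orvs/` root with `PYTHONPATH` set and the data prepared as described in the README:

```python
from mmengine.config import Config
from mmseg.registry import DATASETS
from mmseg.utils import register_all_modules

import datasets.orvs_dataset  # noqa: F401, registers ORVSDataset

register_all_modules()  # registers the transforms used in the pipelines

cfg = Config.fromfile('configs/orvs_512x512.py')
train_set = DATASETS.build(cfg.train_dataloader.dataset)
print(len(train_set), train_set.metainfo['classes'])
```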
Lines changed: 27 additions & 0 deletions
from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class ORVSDataset(BaseSegDataset):
    """ORVS dataset.

    In segmentation map annotation for ORVS, ``reduce_zero_label``
    is fixed to False. The ``img_suffix`` is fixed to '.png' and
    ``seg_map_suffix`` is fixed to '.png'.

    Args:
        img_suffix (str): Suffix of images. Default: '.png'
        seg_map_suffix (str): Suffix of segmentation maps. Default: '.png'
    """
    METAINFO = dict(classes=('background', 'vessel'))

    def __init__(self,
                 img_suffix='.png',
                 seg_map_suffix='.png',
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=False,
            **kwargs)
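As a quick smoke test, the class can also be instantiated directly. A sketch, assuming the prepared `data/` layout and the `train.txt` produced by the split script; the empty pipeline is enough to verify that files are discovered:

```python
from datasets.orvs_dataset import ORVSDataset

dataset = ORVSDataset(
    data_root='data/',
    ann_file='train.txt',
    data_prefix=dict(img_path='images/', seg_map_path='masks/'),
    pipeline=[])  # no transforms: only enumerate samples
print(len(dataset))
print(dataset.get_data_info(0)['img_path'])
```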
Lines changed: 55 additions & 0 deletions
import glob
import os

import numpy as np
from PIL import Image

root_path = 'data/'
img_suffix = '.jpg'
seg_map_suffix_list = ['.jpg', '.png', '.tif']
save_img_suffix = '.png'
save_seg_map_suffix = '.png'

# Collect the raw images of every bundled vessel dataset for each split.
x_train = glob.glob('data/Vessels-Datasets/*/Train/Original/Images/*' +
                    img_suffix)
x_test = glob.glob('data/Vessels-Datasets/*/Test/Original/Images/*' +
                   img_suffix)

os.makedirs(root_path + 'images/train/', exist_ok=True)
os.makedirs(root_path + 'images/test/', exist_ok=True)
os.makedirs(root_path + 'masks/train/', exist_ok=True)
os.makedirs(root_path + 'masks/test/', exist_ok=True)

part_dir_dict = {0: 'train/', 1: 'test/'}
for ith, part in enumerate([x_train, x_test]):
    part_dir = part_dir_dict[ith]
    for img in part:
        # Prefix file names with the source dataset directory to avoid
        # name collisions across the bundled datasets.
        type_name = img.split('/')[-5]
        basename = type_name + '_' + os.path.basename(img)
        save_img_path = root_path + 'images/' + part_dir + basename.split(
            '.')[0] + save_img_suffix
        Image.open(img).save(save_img_path)

        # The label directory is spelled 'Labels' or 'labels' and the mask
        # suffix varies across datasets, so probe the known suffixes.
        for seg_map_suffix in seg_map_suffix_list:
            if os.path.exists('/'.join(img.split('/')[:-1]).replace(
                    'Images', 'Labels')):
                mask_path = img.replace('Images', 'Labels').replace(
                    img_suffix, seg_map_suffix)
            else:
                mask_path = img.replace('Images', 'labels').replace(
                    img_suffix, seg_map_suffix)
            if os.path.exists(mask_path):
                break

        save_mask_path = root_path + 'masks/' + part_dir + basename.split(
            '.')[0] + save_seg_map_suffix
        masks = np.array(Image.open(mask_path).convert('L')).astype(np.uint8)
        # Binarize to {0: background, 1: vessel} unless the mask is already
        # stored with exactly those two values.
        if not (len(np.unique(masks)) == 2 and 1 in np.unique(masks)):
            masks[masks < 128] = 0
            masks[masks >= 128] = 1
        Image.fromarray(masks).save(save_mask_path)
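To make the renaming scheme above concrete, a small illustration (the input path is hypothetical; `ORVS` stands for whichever dataset directory the image came from):

```python
import os

# Hypothetical input path inside the cloned ICPRVessels repository.
img = 'data/Vessels-Datasets/ORVS/Train/Original/Images/img_01.jpg'

type_name = img.split('/')[-5]                      # 'ORVS'
basename = type_name + '_' + os.path.basename(img)  # 'ORVS_img_01.jpg'
save_img_path = 'data/images/train/' + basename.split('.')[0] + '.png'
print(save_img_path)  # data/images/train/ORVS_img_01.png
```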
