Commit d934d10
[Project] Medical semantic seg dataset: Rite (open-mmlab#2680)
1 parent 5a9cfa9 commit d934d10

File tree: 8 files changed, +375 −0 lines changed
Lines changed: 135 additions & 0 deletions
README.md
@@ -0,0 +1,135 @@
# Retinal Images vessel Tree Extraction (RITE)

## Description

This project supports **`Retinal Images vessel Tree Extraction (RITE)`**, which can be downloaded from [here](https://opendatalab.com/RITE).

### Dataset Overview

RITE (Retinal Images vessel Tree Extraction) is a database for comparative studies on the segmentation and classification of arteries and veins in retinal fundus images. It is built on the publicly available DRIVE database (Digital Retinal Images for Vessel Extraction). RITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE; the two subsets are built from the corresponding subsets in DRIVE. Each set contains a fundus photograph and a vessel reference standard, with the fundus photograph inherited from DRIVE. For the training set, the vessel reference standard is a modified version of `1st_manual` from DRIVE; for the test set, it is `2nd_manual` from DRIVE.
### Statistic Information

| Dataset Name                          | Anatomical Region | Task Type    | Modality           | Num. Classes | Train/Val/Test Images | Train/Val/Test Labeled | Release Date | License                                                         |
| ------------------------------------- | ----------------- | ------------ | ------------------ | ------------ | --------------------- | ---------------------- | ------------ | --------------------------------------------------------------- |
| [Rite](https://opendatalab.com/RITE)  | head_and_neck     | segmentation | fundus_photography | 2            | 20/-/20               | yes/-/yes              | 2013         | [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) |

| Class Name | Num. Train | Pct. Train | Num. Val | Pct. Val | Num. Test | Pct. Test |
| :--------: | :--------: | :--------: | :------: | :------: | :-------: | :-------: |
| background |     20     |   91.61    |    -     |    -     |    20     |   91.58   |
|   vessel   |     20     |    8.39    |    -     |    -     |    20     |   8.42    |

Note:

- `Pct` means the percentage of pixels belonging to this category among all pixels of the subset, as sketched below.
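A minimal sketch of how these percentages can be computed (an illustration, assuming the binary PNG masks with labels 0/1 produced by `tools/prepare_dataset.py` under `data/masks/train/`):

```python
import glob

import numpy as np
from PIL import Image

# Accumulate per-class pixel counts over every mask in the split.
counts = np.zeros(2, dtype=np.int64)  # [background, vessel]
for path in glob.glob('data/masks/train/*.png'):
    mask = np.array(Image.open(path))
    counts += np.bincount(mask.ravel(), minlength=2)[:2]

pct = 100 * counts / counts.sum()
print(f'background: {pct[0]:.2f}%, vessel: {pct[1]:.2f}%')
```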
### Visualization

![rite](https://raw.githubusercontent.com/uni-medical/medical-datasets-visualization/main/2d/semantic_seg/fundus_photography/rite/rite_dataset.png?raw=true)

### Dataset Citation

```bibtex
@InProceedings{10.1007/978-3-642-40763-5_54,
  author={Hu, Qiao and Abr{\`a}moff, Michael D. and Garvin, Mona K.},
  title={Automated Separation of Binary Overlapping Trees in Low-Contrast Color Retinal Images},
  booktitle={Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2013},
  year={2013},
  pages={436--443},
}
```
### Prerequisites

- Python v3.8
- PyTorch v1.10.0
- Pillow (PIL) v9.3.0
- scikit-learn (sklearn) v1.2.0
- [MIM](https://github.com/open-mmlab/mim) v0.3.4
- [MMCV](https://github.com/open-mmlab/mmcv) v2.0.0rc4
- [MMEngine](https://github.com/open-mmlab/mmengine) v0.2.0 or higher
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) v1.0.0rc5

All the commands below rely on a correctly configured `PYTHONPATH`, which should point to the project's directory so that Python can locate the module files. From the `rite/` root directory, run the following command to add the current directory to `PYTHONPATH`:

```shell
export PYTHONPATH=`pwd`:$PYTHONPATH
```
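As an optional sanity check that the path is configured (this assumes the prerequisites above are installed), the project module should now be importable:

```shell
# Should exit silently if datasets/rite_dataset.py is importable.
python -c 'import datasets.rite_dataset'
```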
### Dataset Preparing

- Download the dataset from [here](https://opendatalab.com/RITE) and decompress it to `data/`.
- Run `python tools/prepare_dataset.py` to format the data and arrange the folder structure as below.
- Run `python ../../tools/split_seg_dataset.py` to split the dataset and generate `train.txt`, `val.txt` and `test.txt`. If the labels of the official validation and test sets cannot be obtained, `train.txt` and `val.txt` are generated randomly from the training set (see the sketch after the tree below).

```none
mmsegmentation
├── mmseg
├── projects
│ ├── medical
│ │ ├── 2d_image
│ │ │ ├── fundus_photography
│ │ │ │ ├── rite
│ │ │ │ │ ├── configs
│ │ │ │ │ ├── datasets
│ │ │ │ │ ├── tools
│ │ │ │ │ ├── data
│ │ │ │ │ │ ├── train.txt
│ │ │ │ │ │ ├── val.txt
│ │ │ │ │ │ ├── images
│ │ │ │ │ │ │ ├── train
│ │ │ │ │ │ │ │ ├── xxx.png
│ │ │ │ │ │ │ │ ├── ...
│ │ │ │ │ │ │ │ └── xxx.png
│ │ │ │ │ │ ├── masks
│ │ │ │ │ │ │ ├── train
│ │ │ │ │ │ │ │ ├── xxx.png
│ │ │ │ │ │ │ │ ├── ...
│ │ │ │ │ │ │ │ └── xxx.png
```
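A minimal sketch of the kind of split this produces (an assumed behaviour for illustration, not the actual `../../tools/split_seg_dataset.py`): image basenames are written one per line into `train.txt` and `val.txt`.

```python
import glob
import os
import random

# Assumed behaviour: randomly split the training images 80/20 and write
# the basenames (without extension) one per line.
names = sorted(
    os.path.splitext(os.path.basename(p))[0]
    for p in glob.glob('data/images/train/*.png'))
random.seed(0)
random.shuffle(names)
split = int(len(names) * 0.8)
with open('data/train.txt', 'w') as f:
    f.write('\n'.join(names[:split]) + '\n')
with open('data/val.txt', 'w') as f:
    f.write('\n'.join(names[split:]) + '\n')
```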
### Training commands

To train models on a single server with one GPU (default):

```shell
mim train mmseg ./configs/${CONFIG_FILE}
```
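For example, with one of the U-Net configs provided in this project (the filename below is hypothetical; substitute an actual file from `./configs/`):

```shell
# Hypothetical config filename, for illustration only.
mim train mmseg ./configs/fcn-unet-s5-d16_unet_1xb16-0.01-20k_rite-512x512.py
```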
### Testing commands

To test models on a single server with one GPU (default):

```shell
mim test mmseg ./configs/${CONFIG_FILE} --checkpoint ${CHECKPOINT_PATH}
```
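Similarly, for testing (both the config filename and checkpoint path below are hypothetical):

```shell
# Hypothetical paths, for illustration only.
mim test mmseg ./configs/fcn-unet-s5-d16_unet_1xb16-0.01-20k_rite-512x512.py \
    --checkpoint ./work_dirs/rite_unet/iter_20000.pth
```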
<!-- List the results as usually done in other model's README. [Example](https://github.com/open-mmlab/mmsegmentation/tree/dev-1.x/configs/fcn#results-and-models)

You should claim whether this is based on the pre-trained weights, which are converted from the official release; or it's a reproduced result obtained from retraining the model in this project. -->

## Checklist

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - [x] Finish the code
  - [x] Basic docstrings & proper citation
  - [ ] Test-time correctness
  - [x] A full README

- [ ] Milestone 2: Indicates a successful model implementation.

  - [ ] Training-time correctness

- [ ] Milestone 3: Good to be a part of our core package!

  - [ ] Type hints and docstrings
  - [ ] Unit tests
  - [ ] Code polishing
  - [ ] Metafile.yml

- [ ] Move your modules into the core package following the codebase's file hierarchy structure.

- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
# FCN U-Net (s5-d16) on RITE at 512x512; this variant sets lr=0.0001.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './rite_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.rite_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.0001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
# FCN U-Net (s5-d16) on RITE at 512x512; this variant sets lr=0.001.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './rite_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.rite_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
# FCN U-Net (s5-d16) on RITE at 512x512; this variant sets lr=0.01.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './rite_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.rite_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.01)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
# Same U-Net baseline with lr=0.01, using a sigmoid (binary) head:
# out_channels=1 and use_sigmoid=True in the decode-head loss.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './rite_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.rite_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.01)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(
        num_classes=2, loss_decode=dict(use_sigmoid=True), out_channels=1),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)
Lines changed: 42 additions & 0 deletions
configs/rite_512x512.py
@@ -0,0 +1,42 @@
# Dataset settings for RITE at 512x512 input resolution.
dataset_type = 'RITEDataset'
data_root = 'data/'
img_scale = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='train.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=train_pipeline))
# The labeled official test split (test.txt) is used for validation.
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='test.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=test_pipeline))
test_dataloader = val_dataloader
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])
Lines changed: 31 additions & 0 deletions
datasets/rite_dataset.py
@@ -0,0 +1,31 @@
from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class RITEDataset(BaseSegDataset):
    """RITE dataset.

    In the segmentation map annotations for RITE, 0 stands for background,
    which is one of the 2 categories. ``reduce_zero_label`` is fixed to
    False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is
    fixed to '.png'.

    Args:
        img_suffix (str): Suffix of images. Default: '.png'
        seg_map_suffix (str): Suffix of segmentation maps. Default: '.png'
        reduce_zero_label (bool): Whether to mark label zero as ignored.
            Default to False.
    """
    METAINFO = dict(classes=('background', 'vessel'))

    def __init__(self,
                 img_suffix='.png',
                 seg_map_suffix='.png',
                 reduce_zero_label=False,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
            **kwargs)
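A minimal sketch of instantiating this dataset directly as a sanity check (normally it is built from a config via `custom_imports`; the snippet assumes the prepared `data/` layout described in the README and a configured `PYTHONPATH`):

```python
from datasets.rite_dataset import RITEDataset

# Quick check: build the training split without any transforms.
ds = RITEDataset(
    data_root='data/',
    ann_file='train.txt',
    data_prefix=dict(img_path='images/', seg_map_path='masks/'),
    pipeline=[])
print(len(ds), ds.metainfo['classes'])
```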
Lines changed: 98 additions & 0 deletions
tools/prepare_dataset.py
@@ -0,0 +1,98 @@
import glob
import os

import numpy as np
from PIL import Image

root_path = 'data/'
img_suffix = '.tif'
seg_map_suffix = '.png'
save_img_suffix = '.png'
save_seg_map_suffix = '.png'
src_img_train_dir = os.path.join(root_path, 'AV_groundTruth/training/images/')
src_img_test_dir = os.path.join(root_path, 'AV_groundTruth/test/images/')
src_mask_train_dir = os.path.join(root_path, 'AV_groundTruth/training/vessel/')
src_mask_test_dir = os.path.join(root_path, 'AV_groundTruth/test/vessel/')

tgt_img_train_dir = os.path.join(root_path, 'images/train/')
tgt_mask_train_dir = os.path.join(root_path, 'masks/train/')
tgt_img_test_dir = os.path.join(root_path, 'images/test/')
tgt_mask_test_dir = os.path.join(root_path, 'masks/test/')
# Create the target directories (portable replacement for `mkdir -p`).
os.makedirs(tgt_img_train_dir, exist_ok=True)
os.makedirs(tgt_mask_train_dir, exist_ok=True)
os.makedirs(tgt_img_test_dir, exist_ok=True)
os.makedirs(tgt_mask_test_dir, exist_ok=True)


def filter_suffix_recursive(src_dir, suffix):
    """Recursively collect paths and names of files ending in suffix."""
    suffix = '.' + suffix if '.' not in suffix else suffix
    file_paths = glob.glob(
        os.path.join(src_dir, '**', '*' + suffix), recursive=True)
    file_names = [os.path.basename(p) for p in file_paths]
    return sorted(file_paths), sorted(file_names)


def convert_label(img, convert_dict):
    """Map raw pixel values to label ids, e.g. {0: 0, 255: 1}."""
    arr = np.zeros_like(img, dtype=np.uint8)
    for c, i in convert_dict.items():
        arr[img == c] = i
    return arr


def convert_pics_into_pngs(src_dir, tgt_dir, suffix, convert='RGB'):
    """Re-save every image under src_dir into tgt_dir as PNG."""
    if not os.path.exists(tgt_dir):
        os.makedirs(tgt_dir)

    src_paths, src_names = filter_suffix_recursive(src_dir, suffix=suffix)
    num = len(src_paths)
    for i, (src_name, src_path) in enumerate(zip(src_names, src_paths)):
        tgt_name = src_name.replace(suffix, save_img_suffix)
        tgt_path = os.path.join(tgt_dir, tgt_name)
        img = np.array(Image.open(src_path))
        if len(img.shape) == 2:
            # Grayscale input: convert to the requested mode (RGB by default).
            pil = Image.fromarray(img).convert(convert)
        elif len(img.shape) == 3:
            pil = Image.fromarray(img)
        else:
            raise ValueError(f'Input image not 2D/3D: {img.shape}')

        pil.save(tgt_path)
        print(f'processed {i+1}/{num}.')


def convert_label_pics_into_pngs(src_dir,
                                 tgt_dir,
                                 suffix,
                                 convert_dict={
                                     0: 0,
                                     255: 1
                                 }):
    """Convert label images under src_dir into PNG masks in tgt_dir."""
    if not os.path.exists(tgt_dir):
        os.makedirs(tgt_dir)

    src_paths, src_names = filter_suffix_recursive(src_dir, suffix=suffix)
    num = len(src_paths)
    for i, (src_name, src_path) in enumerate(zip(src_names, src_paths)):
        tgt_name = src_name.replace(suffix, save_seg_map_suffix)
        tgt_path = os.path.join(tgt_dir, tgt_name)

        img = np.array(Image.open(src_path))
        img = convert_label(img, convert_dict)
        Image.fromarray(img).save(tgt_path)
        print(f'processed {i+1}/{num}.')


if __name__ == '__main__':

    convert_pics_into_pngs(
        src_img_train_dir, tgt_img_train_dir, suffix=img_suffix)

    convert_pics_into_pngs(
        src_img_test_dir, tgt_img_test_dir, suffix=img_suffix)

    convert_label_pics_into_pngs(
        src_mask_train_dir, tgt_mask_train_dir, suffix=seg_map_suffix)

    convert_label_pics_into_pngs(
        src_mask_test_dir, tgt_mask_test_dir, suffix=seg_map_suffix)
