
Commit 942b054

[Project] Medical semantic seg dataset: 2pm vessel (open-mmlab#2685)
1 parent ac24111 commit 942b054

8 files changed, +341 -0 lines changed
Lines changed: 153 additions & 0 deletions
# 2-PM Vessel Dataset

## Description

This project supports the **`2-PM Vessel Dataset`**, which can be downloaded from [here](https://opendatalab.org.cn/2-PM_Vessel_Dataset).

### Dataset Overview

An open-source volumetric brain vasculature dataset acquired with two-photon microscopy at the Focused Ultrasound Lab, Sunnybrook Research Institute (affiliated with the University of Toronto), by Dr. Alison Burgess, Charissa Poon and Marc Santos.

The dataset contains a total of 12 volumetric stacks consisting of images of mouse brain vasculature and tumor vasculature.

### Information Statistics

| Dataset Name | Anatomical Region | Task Type | Modality | Num. Classes | Train/Val/Test Images | Train/Val/Test Labeled | Release Date | License |
| ------------------------------------------------------------ | ----------------- | ------------ | ----------------- | ------------ | --------------------- | ---------------------- | ------------ | ------------------------------------------------------------- |
| [2pm_vessel](https://opendatalab.org.cn/2-PM_Vessel_Dataset) | vessel | segmentation | microscopy_images | 2 | 216/-/- | yes/-/- | 2021 | [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) |

| Class Name | Num. Train | Pct. Train | Num. Val | Pct. Val | Num. Test | Pct. Test |
| :--------: | :--------: | :--------: | :------: | :------: | :-------: | :-------: |
| background | 216 | 85.78 | - | - | - | - |
| vessel | 180 | 14.22 | - | - | - | - |

Note:

- `Pct` means the percentage of pixels belonging to this category among all pixels.

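As a reference, here is a minimal sketch of how such per-class pixel percentages can be computed, assuming the masks have already been generated under `data/masks/train/` by the preparation steps below:

```python
import os

import numpy as np
from PIL import Image

# Assumed location of the prepared masks (see "Dataset Preparing" below).
mask_dir = 'data/masks/train'
counts = np.zeros(2, dtype=np.int64)  # pixel counts: [background, vessel]
for name in os.listdir(mask_dir):
    mask = np.array(Image.open(os.path.join(mask_dir, name)))
    counts += np.bincount(mask.ravel(), minlength=2)[:2]
print(100.0 * counts / counts.sum())  # per-class pixel percentages
```
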
### Visualization

![2pmv](https://raw.githubusercontent.com/uni-medical/medical-datasets-visualization/main/2d/semantic_seg/histopathology/2pm_vessel/2pm_vessel_dataset.png?raw=true)

### Dataset Citation

```bibtex
@article{teikari2016deep,
  title={Deep learning convolutional networks for multiphoton microscopy vasculature segmentation},
  author={Teikari, Petteri and Santos, Marc and Poon, Charissa and Hynynen, Kullervo},
  journal={arXiv preprint arXiv:1606.02382},
  year={2016}
}
```

### Prerequisites

- Python v3.8
- PyTorch v1.10.0
- pillow (PIL) v9.3.0
- scikit-learn (sklearn) v1.2.0
- [MIM](https://github.com/open-mmlab/mim) v0.3.4
- [MMCV](https://github.com/open-mmlab/mmcv) v2.0.0rc4
- [MMEngine](https://github.com/open-mmlab/mmengine) v0.2.0 or higher
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) v1.0.0rc5

All the commands below rely on a correctly configured `PYTHONPATH`, which should point to the project's directory so that Python can locate the module files. In the `2pm_vessel/` root directory, run the following line to add the current directory to `PYTHONPATH`:

```shell
export PYTHONPATH=`pwd`:$PYTHONPATH
```

### Dataset Preparing

- Download the dataset from [here](https://opendatalab.org.cn/2-PM_Vessel_Dataset) and decompress it to `data/`.
- Run `python tools/prepare_dataset.py` to format the data and arrange the folder structure as shown below.
- Run `python ../../tools/split_seg_dataset.py` to split the dataset and generate `train.txt`, `val.txt` and `test.txt`. If the labels of the official validation and test sets cannot be obtained, `train.txt` and `val.txt` are generated by randomly splitting the training set.

```shell
mkdir data && cd data
pip install opendatalab
odl get 2-PM_Vessel_Dataset
cd ..
python tools/prepare_dataset.py
python ../../tools/split_seg_dataset.py
```

```none
mmsegmentation
├── mmseg
├── projects
│   ├── medical
│   │   ├── 2d_image
│   │   │   ├── microscopy_images
│   │   │   │   ├── 2pm_vessel
│   │   │   │   │   ├── configs
│   │   │   │   │   ├── datasets
│   │   │   │   │   ├── tools
│   │   │   │   │   ├── data
│   │   │   │   │   │   ├── train.txt
│   │   │   │   │   │   ├── val.txt
│   │   │   │   │   │   ├── images
│   │   │   │   │   │   │   ├── train
│   │   │   │   │   │   │   │   ├── xxx.png
│   │   │   │   │   │   │   │   ├── ...
│   │   │   │   │   │   │   │   └── xxx.png
│   │   │   │   │   │   ├── masks
│   │   │   │   │   │   │   ├── train
│   │   │   │   │   │   │   │   ├── xxx.png
│   │   │   │   │   │   │   │   ├── ...
│   │   │   │   │   │   │   │   └── xxx.png
```

### Divided Dataset Information

***Note: The train/val split described in the table below was generated by ourselves.***

| Class Name | Num. Train | Pct. Train | Num. Val | Pct. Val | Num. Test | Pct. Test |
| :--------: | :--------: | :--------: | :------: | :------: | :-------: | :-------: |
| background | 172 | 85.88 | 44 | 85.4 | - | - |
| vessel | 142 | 14.12 | 38 | 14.6 | - | - |

### Training commands

To train models on a single server with one GPU (default):

```shell
mim train mmseg ./configs/${CONFIG_FILE}
```

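For example, to train the U-Net baseline added in this project (the config file name below is hypothetical; substitute an actual file under `./configs/`):

```shell
# Hypothetical config name; use a real file from ./configs/.
mim train mmseg ./configs/fcn-unet-s5-d16_2pm-vessel-512x512-20k.py
```
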
### Testing commands

To test models on a single server with one GPU (default):

```shell
mim test mmseg ./configs/${CONFIG_FILE} --checkpoint ${CHECKPOINT_PATH}
```

<!-- List the results as usually done in other model's README. [Example](https://github.com/open-mmlab/mmsegmentation/tree/dev-1.x/configs/fcn#results-and-models)

You should claim whether this is based on the pre-trained weights, which are converted from the official release; or it's a reproduced result obtained from retraining the model in this project. -->

## Checklist

- [x] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - [x] Finish the code
  - [x] Basic docstrings & proper citation
  - [ ] Test-time correctness
  - [x] A full README

- [ ] Milestone 2: Indicates a successful model implementation.

  - [ ] Training-time correctness

- [ ] Milestone 3: Good to be a part of our core package!

  - [ ] Type hints and docstrings
  - [ ] Unit tests
  - [ ] Code polishing
  - [ ] Metafile.yml

- [ ] Move your modules into the core package following the codebase's file hierarchy structure.

- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.

Lines changed: 42 additions & 0 deletions

# Dataset settings for the 2-PM Vessel dataset: 512x512 inputs, random flip
# and photometric distortion for training, IoU/Dice evaluation.
dataset_type = 'TwoPMVesselDataset'
data_root = 'data/'
img_scale = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=img_scale, keep_ratio=False),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='train.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=train_pipeline))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='val.txt',
        data_prefix=dict(img_path='images/', seg_map_path='masks/'),
        pipeline=test_pipeline))
test_dataloader = val_dataloader
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mDice'])

Lines changed: 17 additions & 0 deletions

# FCN U-Net (s5-d16) on the 2-PM Vessel dataset, 20k-iteration schedule,
# learning rate 1e-4.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './2pm-vessel_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.2pm-vessel_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.0001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)

Lines changed: 17 additions & 0 deletions

# FCN U-Net (s5-d16) on the 2-PM Vessel dataset, 20k-iteration schedule,
# learning rate 1e-3.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './2pm-vessel_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.2pm-vessel_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.001)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)

Lines changed: 17 additions & 0 deletions

# FCN U-Net (s5-d16) on the 2-PM Vessel dataset, 20k-iteration schedule,
# learning rate 1e-2.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './2pm-vessel_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.2pm-vessel_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.01)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(num_classes=2),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)

Lines changed: 18 additions & 0 deletions

# FCN U-Net (s5-d16) on the 2-PM Vessel dataset, 20k-iteration schedule,
# learning rate 1e-2; binary variant with a single-channel sigmoid output.
_base_ = [
    'mmseg::_base_/models/fcn_unet_s5-d16.py', './2pm-vessel_512x512.py',
    'mmseg::_base_/default_runtime.py',
    'mmseg::_base_/schedules/schedule_20k.py'
]
custom_imports = dict(imports='datasets.2pm-vessel_dataset')
img_scale = (512, 512)
data_preprocessor = dict(size=img_scale)
optimizer = dict(lr=0.01)
optim_wrapper = dict(optimizer=optimizer)
model = dict(
    data_preprocessor=data_preprocessor,
    decode_head=dict(
        num_classes=2, loss_decode=dict(use_sigmoid=True), out_channels=1),
    auxiliary_head=None,
    test_cfg=dict(mode='whole', _delete_=True))
vis_backends = None
visualizer = dict(vis_backends=vis_backends)

Lines changed: 31 additions & 0 deletions

from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class TwoPMVesselDataset(BaseSegDataset):
    """TwoPMVesselDataset dataset.

    In segmentation map annotation for TwoPMVesselDataset,
    0 stands for background, which is included in 2 categories.
    ``reduce_zero_label`` is fixed to False. The ``img_suffix``
    is fixed to '.png' and ``seg_map_suffix`` is fixed to '.png'.

    Args:
        img_suffix (str): Suffix of images. Default: '.png'
        seg_map_suffix (str): Suffix of segmentation maps. Default: '.png'
        reduce_zero_label (bool): Whether to mark label zero as ignored.
            Default to False.
    """
    METAINFO = dict(classes=('background', 'vessel'))

    def __init__(self,
                 img_suffix='.png',
                 seg_map_suffix='.png',
                 reduce_zero_label=False,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
            **kwargs)

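# A minimal usage sketch, assuming the prepared `data/` layout described in
# the README: once this module is imported (e.g. via `custom_imports` in a
# config), the dataset can be built from the registry.
#
#   from mmseg.registry import DATASETS
#   cfg = dict(
#       type='TwoPMVesselDataset',
#       data_root='data/',
#       ann_file='train.txt',
#       data_prefix=dict(img_path='images/', seg_map_path='masks/'),
#       pipeline=[])
#   dataset = DATASETS.build(cfg)
#   print(len(dataset), dataset.metainfo['classes'])
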
Lines changed: 46 additions & 0 deletions

import os

import tifffile as tiff
from PIL import Image

root_path = 'data/'

image_dir = os.path.join(root_path,
                         '2-PM_Vessel_Dataset/raw/vesselNN_dataset/denoised')
label_dir = os.path.join(root_path,
                         '2-PM_Vessel_Dataset/raw/vesselNN_dataset/labels')
tgt_img_train_dir = os.path.join(root_path, 'images/train/')
tgt_mask_train_dir = os.path.join(root_path, 'masks/train/')
os.makedirs(tgt_img_train_dir, exist_ok=True)
os.makedirs(tgt_mask_train_dir, exist_ok=True)


def filter_suffix(src_dir, suffix):
    """Collect sorted paths and names of files in ``src_dir`` ending with ``suffix``."""
    suffix = '.' + suffix if '.' not in suffix else suffix
    file_names = [_ for _ in os.listdir(src_dir) if _.endswith(suffix)]
    file_paths = [os.path.join(src_dir, _) for _ in file_names]
    return sorted(file_paths), sorted(file_names)


if __name__ == '__main__':

    image_path_list, _ = filter_suffix(image_dir, suffix='tif')
    label_path_list, _ = filter_suffix(label_dir, suffix='.tif')

    for img_path, label_path in zip(image_path_list, label_path_list):
        # A single .tif file contains multiple slices when read with the
        # tifffile package, so each stack is unpacked slice by slice.
        labels = tiff.imread(label_path)
        images = tiff.imread(img_path)
        assert labels.ndim == 3
        assert images.shape == labels.shape
        name = img_path.split('/')[-1].replace('.tif', '')
        for i in range(labels.shape[0]):
            slice_name = name + '_' + str(i).rjust(3, '0') + '.png'
            image = images[i]
            # Map the {0, 255} binary mask to class indices {0, 1}.
            label = labels[i] // 255

            save_path_label = os.path.join(tgt_mask_train_dir, slice_name)
            Image.fromarray(label).save(save_path_label)
            save_path_image = os.path.join(tgt_img_train_dir, slice_name)
            Image.fromarray(image).convert('RGB').save(save_path_image)

