The Official Repository of SAMed-2 & Medical SAM Benchmark
| 🧠 **Memory-Enhanced SOTA** | 🔧 **Unified Framework** |
|---|---|
| Best performance on medical benchmarks | Fair comparison of all Medical SAM variants |
SAMed-2 is a new foundation model for medical image segmentation built upon the SAM-2 architecture. Specifically, we introduce a temporal adapter into the image encoder to capture image correlations and a confidence-driven memory mechanism to store high-certainty features for later retrieval. This memory-based strategy counters the pervasive noise in large-scale medical datasets and mitigates catastrophic forgetting when encountering new tasks or modalities.
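To make the idea concrete, here is a minimal, illustrative sketch of such a confidence-driven memory. The class name `MemoryBank`, the 0.9 threshold, and cosine-similarity retrieval are our assumptions for exposition, not the released implementation; only the capacity of 640 is taken from the repository's default `memory_bank_size`.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Illustrative sketch of a confidence-driven memory (NOT the released code)."""

    def __init__(self, capacity=640, threshold=0.9):
        self.capacity = capacity      # 640 matches the repo's default memory_bank_size
        self.threshold = threshold    # assumed high-certainty cutoff
        self.feats, self.confs = [], []

    def maybe_store(self, feat, conf):
        # Keep only high-certainty features; when full, evict the least
        # confident entry if the new feature beats it.
        if conf < self.threshold:
            return
        if len(self.feats) >= self.capacity:
            i = min(range(len(self.confs)), key=self.confs.__getitem__)
            if self.confs[i] >= conf:
                return
            self.feats.pop(i)
            self.confs.pop(i)
        self.feats.append(feat.detach())
        self.confs.append(float(conf))

    def retrieve(self, query, k=4):
        # Return the k stored features most similar to the query (cosine).
        if not self.feats:
            return None
        bank = torch.stack(self.feats)                       # (N, D)
        sims = F.cosine_similarity(bank, query.unsqueeze(0), dim=1)
        idx = sims.topk(min(k, len(self.feats))).indices
        return bank[idx]
```

Storing only high-certainty features is what lets the bank act as a denoised summary of past tasks; retrieving them at inference conditions new predictions without revisiting old data.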
[06/2025] 🎉 SAMed-2 is accepted by MICCAI 2025!
[06/2025] 🚀 Initial release of SAMed-2!
This project is released under the Apache 2.0 license.
Linux Environment
Clone this repository and navigate to the folder:
```bash
git clone https://github.com/ZhilingYan/Medical-SAM-Bench.git
cd Medical-SAM-Bench

# Create a new conda environment
conda create -n samed2 python=3.10 -y
conda activate samed2

# Install PyTorch (adjust CUDA version as needed)
pip install torch==2.5.1 torchvision==0.20.1 --index-url https://download.pytorch.org/whl/cu118

# Install other requirements
pip install -r requirements.txt
```

| Model | Architecture | Medical Fine-tuned | Performance | Download |
|---|---|---|---|---|
| SAMed-2 ⭐ | SAM2-Hiera-S | ✅ | Best | 📥 Download |
| MedSAM2 | SAM2-Hiera-T | ✅ | Good | 📥 Download |
| MedSAM | SAM-ViT-B | ✅ | Good | 📥 Download |
| SAM2 | SAM2-Hiera-S | ❌ | Baseline | 📥 Download |
| SAM | SAM-ViT-B | ❌ | Baseline | 📥 Download |
📁 Place downloaded weights in `./checkpoints/`
Download the pre-trained memory bank: `memory_bank_list_640.pkl`
Place it in the root directory of this repository.
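To sanity-check the download, you can load the pickle and inspect it. Its internal structure is not documented here, so the sketch below only probes it generically:

```python
import pickle

# Load the pre-trained memory bank and report its top-level structure.
with open("memory_bank_list_640.pkl", "rb") as f:
    memory_bank = pickle.load(f)

print(type(memory_bank))
if hasattr(memory_bank, "__len__"):
    # The file name suggests 640 entries, matching the default -memory_bank_size.
    print(len(memory_bank), "entries")
```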
🚀 Simple Python API
```python
from predict import MedicalSegmenter

# Initialize
segmenter = MedicalSegmenter(
    model_type='samed2',
    checkpoint_path='checkpoints/latest_epoch_0217.pth'
)

# Segment
result = segmenter.predict(
    'medical_image.png',
    box=[100, 100, 900, 900]
)

# Visualize
segmenter.visualize(
    'medical_image.png',
    result['mask'],
    'result.jpg'
)
```
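Note: `box` is a bounding-box prompt in pixel coordinates; following the SAM-family convention we assume the `[x1, y1, x2, y2]` format (top-left and bottom-right corners). Adjust the values to enclose your target structure.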
📊 Benchmark Medical SAM Models
```bash
# 🏆 SAMed-2 (Ours)
python main.py -net samed2 -sam_ckpt checkpoints/latest_epoch_0217.pth -sam_config sam2_hiera_s

# 🔬 MedSAM2
python main.py -net medsam2 -sam_ckpt checkpoints/MedSAM2_pretrain.pth -sam_config sam2_hiera_t_original

# 🏥 MedSAM
python main.py -net medsam -sam_ckpt checkpoints/medsam_vit_b.pth

# 🎯 SAM2
python main.py -net sam2 -sam_ckpt checkpoints/sam2_hiera_small.pt -sam_config sam2_hiera_s_original

# 🔷 SAM
python main.py -net sam -sam_ckpt checkpoints/sam_vit_b_01ec64.pth
```

💡 Common flags (append to any command above):
```bash
-exp_name ${DATASET} -image_size 1024 -data_path /path/to/data -val_file_dir /path/to/test.txt
```
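For example, a full evaluation of SAMed-2 on the Optic Cup data might look like this (the data paths are placeholders for wherever you extracted the dataset):

```bash
python main.py -net samed2 -sam_ckpt checkpoints/latest_epoch_0217.pth \
    -sam_config sam2_hiera_s -exp_name OpticCup -image_size 1024 \
    -data_path ./data -val_file_dir ./data/test.txt
```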
| Dataset | Modality | Size | Download |
|---|---|---|---|
| Optic Cup | Fundus | ~100MB | 📥 Download |
| Brain Tumor | MRI | ~100MB | 📥 Download |
Prepare Your Own Dataset
For custom datasets, organize your data as follows:
```
your_dataset/
├── image/
│   ├── case_idx_slice_001.png
│   ├── case_idx_slice_002.png
│   └── ...
├── mask/
│   ├── case_idx_slice_001.png
│   ├── case_idx_slice_002.png
│   └── ...
└── test.txt   # List of test image names
```
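If you need to generate `test.txt`, a small helper like the one below works, assuming the list should contain bare image file names, one per line (adjust if relative paths are expected):

```python
from pathlib import Path

# Write test.txt listing every image in your_dataset/image/, one name per line.
dataset = Path("your_dataset")
names = sorted(p.name for p in (dataset / "image").glob("*.png"))
(dataset / "test.txt").write_text("\n".join(names) + "\n")
print(f"Wrote {len(names)} entries to {dataset / 'test.txt'}")
```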
Run Evaluation
```bash
# Full evaluation on the dataset
bash run.sh
```

Parameters Explanation:
- `-net`: Model type (`samed2`, `medsam2`, `sam2`, `medsam`, `sam`)
- `-exp_name`: Dataset name for logging
- `-sam_ckpt`: Path to model checkpoint
- `-sam_config`: Configuration file
- `-image_size`: Input image size (default: 1024)
- `-out_size`: Output size (default: 1024)
- `-b`: Batch size
- `-data_path`: Root path to datasets
- `-train_file_dir`: Path to training file list
- `-val_file_dir`: Path to validation file list
- `-memory_bank_size`: Memory bank size for SAMed-2 (default: 640)
- `-lr`: Learning rate (default: 1e-4)
- `-epoch`: Number of epochs (default: 100)
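Putting the training-related flags together, a fine-tuning run could look like the sketch below. The paths and the batch size of 4 are placeholders, and the exact training entry point should be checked against `main.py`; the defaults listed above apply to any omitted flag.

```bash
python main.py -net samed2 -exp_name MyDataset \
    -sam_ckpt checkpoints/latest_epoch_0217.pth -sam_config sam2_hiera_s \
    -data_path ./data -train_file_dir ./data/train.txt -val_file_dir ./data/test.txt \
    -b 4 -lr 1e-4 -epoch 100 -memory_bank_size 640
```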
📈 Performance Comparison
| Dataset | SAM | MedSAM | SAM2 | MedSAM2 | SAMed-2 |
|---|---|---|---|---|---|
| OpticCup | 0.61 | 0.86 | 0.62 | 0.40 | 0.90 🏆 |
| BrainTumor | 0.56 | 0.60 | 0.44 | 0.58 | 0.67 🏆 |
Dice scores on test sets. Higher is better.
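For reference, Dice is the standard overlap metric between a predicted and a ground-truth mask; a minimal NumPy implementation for binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient: 2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```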
If you find SAMed-2 useful in your research, please consider citing:
```bibtex
@article{yan2025samed,
  title={SAMed-2: Selective Memory Enhanced Medical Segment Anything Model},
  author={Yan, Zhiling and Song, Sifan and Song, Dingjie and Li, Yiwei and Zhou, Rong and Sun, Weixiang and Chen, Zhennong and Kim, Sekeun and Ren, Hui and Liu, Tianming and others},
  journal={arXiv preprint arXiv:2507.03698},
  year={2025}
}
```

Contributors:
Zhiling Yan¹, Sifan Song², Dingjie Song¹, Yiwei Li³, Rong Zhou¹, Weixiang Sun⁴, Zhennong Chen², Sekeun Kim², Hui Ren², Tianming Liu³, Quanzheng Li², Xiang Li², Lifang He¹, Lichao Sun¹*
¹Lehigh University
²Massachusetts General Hospital and Harvard Medical School
³University of Georgia, Athens
⁴University of Notre Dame
We gratefully acknowledge: