[ICCV 2025 Highlight] Mind the Gap: Preserving and Compensating for the Modality Gap in CLIP-Based Continual Learning
This is the official code for our paper.
The environment is the same as that of our RAPF.
Create the environment using Miniconda (or Anaconda):
conda create -n continual_clip python=3.8
conda activate continual_clip
Install dependencies:
bash setup_environment.sh
We provide the scripts for ImageNet100. Please run:
python main.py \
--config-path configs/class \
--config-name imagenet100_10-10.yaml \
dataset_root="[imagenet1k_path]" \
class_order="class_orders/imagenet100.yaml"
Note: to obtain the epoch parameter from the first task, as described in Eq. (3), run epoch.py.
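A minimal invocation sketch, assuming epoch.py accepts the same Hydra-style config arguments as main.py (this is our assumption, not documented behavior of the script):
python epoch.py \
--config-path configs/class \
--config-name imagenet100_10-10.yaml \
dataset_root="[imagenet1k_path]" \
class_order="class_orders/imagenet100.yaml"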
The dataset_root folder should contain the train and val folders.
imagenet1k_path
├── train
│ ├── n01440764
│ └── ···
├── val
│ ├── n01440764
│ └── ···
imagenet-r_path
├── train
│ ├── n01443537
│ └── ···
├── val
│ ├── n01443537
│ └── ···
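As a quick sanity check before training, the snippet below (our own helper, not part of the repository; dataset_root is whatever path you pass on the command line) verifies that the expected train/val layout is present:
import os
import sys

def check_layout(dataset_root):
    # Verify that dataset_root contains train/ and val/ folders of class subdirectories.
    for split in ("train", "val"):
        split_dir = os.path.join(dataset_root, split)
        if not os.path.isdir(split_dir):
            sys.exit(f"missing folder: {split_dir}")
        classes = [d for d in os.listdir(split_dir)
                   if os.path.isdir(os.path.join(split_dir, d))]
        print(f"{split}: {len(classes)} class folders")

check_layout(sys.argv[1])  # e.g. python check_layout.py /path/to/imagenet1k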
The commands for the other two datasets are similar; see run_experiment.sh.
CIFAR-100 is downloaded automatically. ImageNet-R is split randomly into train and test; you can also use our split list in RAPF/imgr_split/imgr_train_test_split.txt.
The format of imgr_train_test_split.txt:
train
n02051845/art_0.jpg
...
test
n02051845/tattoo_4.jpg
...
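A minimal parsing sketch for this file format (the function name load_split is our own, for illustration only):
def load_split(path):
    # Parse imgr_train_test_split.txt into train/test lists of relative image paths.
    splits = {"train": [], "test": []}
    current = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line in splits:      # "train" or "test" header line
                current = line
            else:                   # e.g. "n02051845/art_0.jpg"
                splits[current].append(line)
    return splits["train"], splits["test"]

train_files, test_files = load_split("RAPF/imgr_split/imgr_train_test_split.txt")
print(len(train_files), "train images,", len(test_files), "test images")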
Our implementation is based on Continual-CLIP.
If you find our repo useful for your research, please consider citing our paper:
@inproceedings{huang2025mind,
title={Mind the Gap: Preserving and Compensating for the Modality Gap in {CLIP}-Based Continual Learning},
author={Huang, Linlan and Cao, Xusheng and Lu, Haori and Meng, Yifan and Yang, Fei and Liu, Xialei},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={3777--3786},
year={2025}
}
This code is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license, for non-commercial use only. Please note that any commercial use of this code requires formal permission prior to use.
For technical questions, please contact [email protected].