A Comprehensive Survey: Awesome Multi-modal Object Tracking. Chunhui Zhang, Li Liu, Hao Wen, Xi Zhou, Yanfeng Wang. [paper] [homepage] [Chinese interpretation]
Abstract: Multi-modal object tracking (MMOT) is an emerging field that combines data from various modalities, e.g., vision (RGB), depth, thermal infrared, event, language, and audio, to estimate the state of an arbitrary object in a video sequence. It is of great significance for many applications such as autonomous driving and intelligent surveillance. In recent years, MMOT has received increasing attention. However, existing MMOT algorithms mainly focus on two modalities (e.g., RGB+depth, RGB+thermal infrared, and RGB+language). To leverage more modalities, some recent efforts have been made to learn a unified visual object tracking model for any modality. Additionally, some large-scale multi-modal tracking benchmarks have been established by simultaneously providing more than two modalities, such as vision-language-audio (e.g., WebUAV-3M) and vision-depth-language (e.g., UniMod1K). To track the latest progress in MMOT, we conduct a comprehensive investigation in this report. Specifically, we first divide existing MMOT tasks into five main categories, i.e., RGBL tracking, RGBE tracking, RGBD tracking, RGBT tracking, and miscellaneous (RGB+X), where X can be any modality, such as language, depth, and event. Then, we analyze and summarize each MMOT task, focusing on widely used datasets and mainstream tracking algorithms based on their technical paradigms (e.g., self-supervised learning, prompt learning, knowledge distillation, generative models, and state space models). Finally, we maintain a continuously updated paper list for MMOT at this https URL.
Awesome MMOT: A continuously updated project tracking the latest progress in multi-modal object tracking (MMOT). This project focuses solely on single-object tracking. If this repository brings you some inspiration, we would feel greatly honored. If you have any suggestions, please feel free to contact: [email protected].
UPDATE: Our survey covers common paradigms of multi-modal object tracking, including RGBL, RGBE, RGBD, RGBT, RGB-Sonar, miscellaneous (RGB+X) tracking, embodied visual tracking (EVT), and hyperspectral object tracking (HOT).
🔥 Awesome Visual Object Tracking (VOT) Project is at Awesome-VOT.
- 2025.04.28: The paper of UW-COT220 and VL-SAM2 was accepted by the CVPR 2025 Workshop (arXiv, Outstanding Paper).
- 2025.04.02: We released UW-COT220 and VL-SAM2, with both training and testing code available (project).
- 2025.04.02: The paper of VL-SOT500 and COST was posted online (arXiv, project).
- 2025.02.28: The Awesome Visual Object Tracking project started at Awesome-VOT.
- 2025.01.20: The technical report for UW-COT220 and VL-SAM2 was updated (arXiv, Zhihu).
- 2024.09.26: WebUOT-1M was accepted by NeurIPS 2024, and its extended version, UW-COT220, was posted online (arXiv).
- 2024.05.30: The paper of WebUOT-1M was posted online (arXiv).
- 2024.05.24: The report of the Awesome MMOT project was posted online (arXiv, Zhihu).
- 2024.05.20: Awesome MMOT Project Started.
- Survey
- Embodied Visual Tracking
- Vision-Language Tracking (RGBL Tracking)
- RGBE Tracking
- RGBD Tracking
- RGBT Tracking
- Miscellaneous (RGB+X)
- Hyperspectral Object Tracking
- Others
- Awesome Repositories for MMOT
If you find our work useful in your research, please consider citing:
@article{zhang2024awesome,
title={Awesome Multi-modal Object Tracking},
author={Zhang, Chunhui and Liu, Li and Wen, Hao and Zhou, Xi and Wang, Yanfeng},
journal={arXiv preprint arXiv:2405.14200},
year={2024}
}
- Awesome MMOT: Chunhui Zhang, Li Liu, Hao Wen, Xi Zhou, Yanfeng Wang.
"Awesome Multi-modal Object Tracking." ArXiv (2024). [paper] [project] -
Pengyu Zhang, Dong Wang, Huchuan Lu.
"Multi-modal Visual Tracking: Review and Experimental Comparison." ArXiv (2022). [paper] -
Zhangyong Tang, Tianyang Xu, Xiao-Jun Wu.
"A Survey for Deep RGBT Tracking." ArXiv (2022). [paper] -
Jinyu Yang, Zhe Li, Song Yan, Feng Zheng, Aleš Leonardis, Joni-Kristian Kämäräinen, Ling Shao.
"RGBD Object Tracking: An In-depth Review." ArXiv (2022). [paper] -
Chenglong Li, Andong Lu, Lei Liu, Jin Tang.
"Multi-modal Visual Tracking: A Survey." Journal of Image and Graphics (2023). [paper] -
Ou Zhou, Ying Ge, Zhang Dawei, and Zheng Zhonglong.
"A Survey of RGB-Depth Object Tracking." Journal of Computer-Aided Design & Computer Graphics (2024). [paper] -
Zhang, ZhiHao and Wang, Jun and Zang, Zhuli and Jin, Lei and Li, Shengjie and Wu, Hao and Zhao, Jian and Zhang, Bo.
"Review and Analysis of RGBT Single Object Tracking Methods: A Fusion Perspective." ACM Transactions on Multimedia Computing, Communications and Applications (2024). [paper] -
MV-RGBT & MoETrack: Zhangyong Tang, Tianyang Xu, Zhenhua Feng, Xuefeng Zhu, He Wang, Pengcheng Shao, Chunyang Cheng, Xiao-Jun Wu, Muhammad Awais, Sara Atito, Josef Kittler.
"Revisiting RGBT Tracking Benchmarks from the Perspective of Modality Validity: A New Benchmark, Problem, and Method." ArXiv (2024). [paper] [code] -
Xingchen Zhang and Ping Ye and Henry Leung and Ke Gong and Gang Xiao.
"Object fusion tracking based on visible and infrared images: A comprehensive review." Information Fusion (2024). [paper] -
Mingzheng Feng and Jianbo Su.
"RGBT tracking: A comprehensive review." Information Fusion (2024). [paper] -
Zhang, Haiping and Yuan, Di and Shu, Xiu and Li, Zhihui and Liu, Qiao and Chang, Xiaojun and He, Zhenyu and Shi, Guangming.
"A Comprehensive Review of RGBT Tracking." IEEE TIM (2024). [paper] -
Mengmeng Wang, Teli Ma, Shuo Xin, Xiaojun Hou, Jiazheng Xing, Guang Dai, Jingdong Wang, Yong Liu.
"Visual Object Tracking across Diverse Data Modalities: A Review." ArXiv (2024). [paper] -
Zeshi Chen and Caiping Peng and Shuai Liu and Weiping Ding.
"Visual object tracking: Review and challenges." Applied Soft Computing (2025). [paper] -
Fereshteh Aghaee Meibodi, Shadi Alijani, Homayoun Najjaran.
"A Deep Dive into Generic Object Tracking: A Survey." ArXiv (2025). [paper] -
Zhangyong Tang, Tianyang Xu, Xuefeng Zhu, Hui Li, Shaochuan Zhao, Tao Zhou, Chunyang Cheng, Xiaojun Wu, Josef Kittler.
"Omni Survey for Multimodality Analysis in Visual Object Tracking." ArXiv (2025). [paper] [project]
Coming soon.
- TrackVLA++: Jiahang Liu, Yunpeng Qi, Jiazhao Zhang, Minghan Li, Shaoan Wang, Kui Wu, Hanjing Ye, Hong Zhang, Zhibo Chen, Fangwei Zhong, Zhizheng Zhang, He Wang.
"TrackVLA++: Unleashing Reasoning and Memory Capabilities in VLA Models for Embodied Visual Tracking." ArXiv (2025). [paper] [code] -
TrackVLA: Shaoan Wang, Jiazhao Zhang, Minghan Li, Jiahang Liu, Anqi Li, Kui Wu, Fangwei Zhong, Junzhi Yu, Zhizheng Zhang, He Wang.
"TrackVLA: Embodied Visual Tracking in the Wild." CoRL (2025). [paper] [project] [code] -
HIEVT: Kui Wu, Hao Chen, Churan Wang, Fakhri Karray, Zhoujun Li, Yizhou Wang, Fangwei Zhong.
"Hierarchical Instruction-aware Embodied Visual Tracking." ArXiv (2025). [paper] [code] -
EVT-Recovery-Assistant: Kui Wu, Shuhang Xu, Hao Chen, Churan Wang, Zhoujun Li, Yizhou Wang, Fangwei Zhong.
"VLM Can Be a Good Assistant: Enhancing Embodied Visual Tracking with Self-Improving Visual-Language Models." ArXiv (2025). [paper] [code]
- Fangwei Zhong, Kui Wu, Hai Ci, Churan Wang, Hao Chen.
"Empowering Embodied Visual Tracking with Visual Foundation Models and Offline RL." ECCV (2024). [paper] [code]
| Dataset | Pub. & Date | WebSite | Introduction |
|---|---|---|---|
| OTB99-L | CVPR-2017 | OTB99-L | 99 videos |
| LaSOT | CVPR-2019 | LaSOT | 1400 videos |
| LaSOT_EXT | IJCV-2021 | LaSOT_EXT | 150 videos |
| TNL2K | CVPR-2021 | TNL2K | 2000 videos |
| WebUAV-3M | TPAMI-2023 | WebUAV-3M | 4500 videos, 3.3 million frames, UAV tracking, vision-language-audio |
| MGIT | NeurIPS-2023 | MGIT | 150 long video sequences, 2.03 million frames, three semantic grains (i.e., action, activity, and story) |
| VastTrack | NeurIPS-2024 | VastTrack | 50,610 video sequences, 4.2 million frames, 2,115 classes |
| WebUOT-1M | NeurIPS-2024 | WebUOT-1M | The first million-scale underwater object tracking dataset, with 1,500 video sequences and 1.1 million frames |
| ElysiumTrack-1M | ECCV-2024 | ElysiumTrack-1M | A large-scale dataset that supports three tasks: single object tracking, reference single object tracking, and video reference expression generation, with 1.27 million videos |
| VLT-MI | arXiv-2024 | VLT-MI | A dataset for multi-round, multi-modal interaction, with 3,619 videos. |
| DTVLT | arXiv-2024 | DTVLT | A multi-modal diverse text benchmark for visual language tracking (RGBL Tracking). |
| SemTrack | ECCV-2024 | SemTrack | A large-scale dataset comprising 6.7 million frames from 6,961 videos, capturing the semantic trajectory of targets across 52 interaction classes and 115 object classes. |
| UW-COT220 | CVPR Workshop-2025 | UW-COT220 | The first multimodal underwater camouflaged object tracking dataset with 220 videos. |
| VL-SOT500 | Information Fusion-2025 | VL-SOT500 | The first large-scale multi-modal small object tracking dataset with two subsets, VL-SOT230 and VL-SOT270, designed for benchmarking generic and high-speed small object tracking, respectively. |
| TNLLT | arXiv-2025 | TNLLT | A large-scale long-term vision-language tracking benchmark dataset with 200 video sequences. |
- SOIBench: Yipei Wang, Shiyu Hu, Shukun Jia, Panxi Xu, Hongfei Ma, Yiping Ma, Jing Zhang, Xiaobo Lu, Xin Zhao.
"SOI is the Root of All Evil: Quantifying and Breaking Similar Object Interference in Single Object Tracking." AAAI (2026). [paper]
- UITrack: Wang, Jingchao and Wu, Zhijian and Zhang, Wenlong and Liu, Wenhui and Zhang, Jianwei and Huang, Dingjiang.
"Overcoming Feature Contamination by Unidirectional Information Modeling for Vision-Language Tracking." ICME (2025). [paper] [code] -
CMTrack: Tang, Yuyang and Ma, Yinchao and Yang, Dengqing and Xiao, Jie and Zhang, Tianzhu.
"State Space Models for Natural Language Tracking: Exploring Context-adaptive Language Cues." TCSVT (2025). [paper] -
MACT: Guo, Guomao and Geng, Gu and Tang, Jianing and Liu, Qiao and Yuan, Di.
"Multi-Scale Adaptive Cascaded Tracking for Vision-Language Integration." SPL (2025). [paper] -
MDCT: Zhongjian Huang, Lingling Li, Licheng Jiao, Jinyue Zhang, Long Sun, Xu Liu, Yuting Yang, Jiaxuan Zhao, Wenping Ma, Xiangrong Zhang.
"Multi-granularity Dynamic Conditional Transformer for Vision-Language Tracking." RoboSoft (2025). [paper] -
THM: Wei Xu, Gu Geng, Xinming Zhang, Di Yuan.
"Cross-Modal Alignment Enhancement for Vision–Language Tracking via Textual Heatmap Mapping." AI (2025). [paper] -
LGTrack: Jianbo Song, Hong Zhang, Yachun Feng, Hanyang Liu, Yifan Yang.
"Language-guided Visual Tracking: Comprehensive and Effective Multimodal Information Fusion." ACM Trans. Multimedia Comput. Commun. Appl. (2025). [paper] -
ReasoningTrack: Xiao Wang, Liye Jin, Xufeng Lou, Shiao Wang, Lan Chen, Bo Jiang, Zhipeng Zhang.
"ReasoningTrack: Chain-of-Thought Reasoning for Long-term Vision-Language Tracking." ArXiv (2025). [paper] [code] -
ATCTrack: X. Feng, S. Hu, X. Li, D. Zhang, M. Wu, J. Zhang, X. Chen, K. Huang.
"ATCTrack: Aligning Target-Context Cues with Dynamic Target States for Robust Vision-Language Tracking." ICCV (2025). [paper] [code] -
ATSTrack: Yihao Zhen, Qiang Wang, Yu Qiao, Liangqiong Qu, Huijie Fan.
"ATSTrack: Enhancing Visual-Language Tracking by Aligning Temporal and Spatial Scales." ArXiv (2025). [paper] -
R1-Track: Biao Wang, Wenwen Li.
"R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning." ArXiv (2025). [paper] [code] -
VLDF: Zhang, J., Yan, X., Zhang, H. et al.
"Vision-language discriminative fusion network for object tracking." The Journal of Supercomputing (2025). [paper] -
Mono3DVLT: Hongkai Wei, Yang Yang, Shijie Sun, Mingtao Feng, Xiangyu Song, Qi Lei, Hongli Hu, Rong Wang, Huansheng Song, Naveed Akhtar, Ajmal Mian.
"Mono3DVLT: Monocular-Video-Based 3D Visual Language Tracking." CVPR (2025). [paper] [code] -
JTD-UAV: Yifan Wang, Jian Zhao, Zhaoxin Fan, Xin Zhang, Xuecheng Wu, Yudian Zhang, Lei Jin, Xinyue Li, Gang Wang, Mengxi Jia, Ping Hu, Zheng Zhu, Xuelong Li.
"JTD-UAV: MLLM-Enhanced Joint Tracking and Description Framework for Anti-UAV Systems." CVPR (2025). [paper] -
CLDTracker: Mohamad Alansari, Sajid Javed, Iyyakutti Iyappan Ganapathi, Sara Alansari, Muzammal Naseer.
"CLDTracker: A Comprehensive Language Description for Visual Tracking." ArXiv (2025). [paper] [code] -
TMTR: Guocai Du and Peiyong Zhou and Nurbiya Yadikar and Alimjan Aysa and Kurban Ubul.
"Toward a dynamic tree-Mamba encoder for UAV tracking with vision-language." KBS (2025). [paper] -
MHITrack: Lei, Lei and Li, Xianxian.
"Multi-modal Hybrid Interaction Vision-language Tracking." TMM (2025). [paper] -
TCMLTrack: Du, G., Zhou, P., Yadikar, N. et al.
"Toward based on concentrated multi-scale linear attention real-time UAV tracking using joint natural language specification." Scientific Reports (2025). [paper] -
SATrack: Tang, Yuyang and Ma, Yinchao and Zhang, Tianzhu.
"Semantic-aware Network for Natural Language Tracking." TCSVT (2025). [paper] -
A-CLIP: Hong Zhu and Qingyang Lu and Lei Xue and Guanglin Yuan and Kaihua Zhang.
"Joint feature extraction and alignment in object tracking with vision-language model." Engineering Applications of Artificial Intelligence (2025). [paper] -
MAVLT: Liangtao Shi, Bineng Zhong, Qihua Liang, Xiantao Hu, Zhiyi Mo, Shuxiang Song.
"Mamba Adapter: Efficient Multi-Modal Fusion for Vision-Language Tracking." TCSVT (2025). [paper] [code] -
SAKTrack: Mao, Kaige and Hong, Xiaopeng and Fan, Xiaopeng and Zuo, Wangmeng.
"A Swiss Army Knife for Tracking by Natural Language Specification." TIP (2025). [paper] [code] -
ProVLT: Zong, Chengao and Zhao, Jie and Chen, Xin and Lu, Huchuan and Wang, Dong.
"Learning Language Prompt for Vision-Language Tracking." TCSVT (2025). [paper] -
COST: Chunhui Zhang, Li Liu, Jialin Gao, Xin Sun, Hao Wen, Xi Zhou, Shiming Ge, Yanfeng Wang.
"COST: Contrastive One-Stage Transformer for Vision-Language Small Object Tracking." Information Fusion (2025). [paper] [ResearchGate] [code] -
AITtrack: Basit Alawode, Sajid Javed.
"AITtrack: Attention-based Image-Text Alignment for Visual Tracking." IEEE Access (2025). [paper] [code] -
TrackingMeetsLMM: Ayesha Ishaq, Jean Lahoud, Fahad Shahbaz Khan, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer.
"Tracking Meets Large Multimodal Models for Driving Scenario Understanding." ArXiv (2025). [paper] [code] -
AVLTrack: Xue, Yuanliang and Zhong, Bineng and Jin, Guodong and Shen, Tao and Tan, Lining and Li, Ning and Zheng, Yaozong.
"AVLTrack: Dynamic Sparse Learning for Aerial Vision-Language Tracking." TCSVT (2025). [paper] [code] -
MambaVLT: Xinqi Liu, Li Zhou, Zikun Zhou, Jianqiu Chen, Zhenyu He.
"MambaVLT: Time-Evolving Multimodal State Space Model for Vision-Language Tracking." CVPR (2025). [paper] -
DUTrack: Xiaohai Li, Bineng Zhong, Qihua Liang, Zhiyi Mo, Jian Nong, Shuxiang Song.
"Dynamic Updates for Language Adaptation in Visual-Language Tracking." CVPR (2025). [paper] [code] -
SIEVL-Track: Li, Ning and Zhong, Bineng and Liang, Qihua and Mo, Zhiyi and Nong, Jian and Song, Shuxiang.
"SIEVL-Track: Exploring Semantic Information Enhancement for Visual-Language Object Tracking." TCSVT (2025). [paper] -
UW-COT220 & VL-SAM2: Chunhui Zhang, Li Liu, Guanjie Huang, Zhipeng Zhang, Hao Wen, Xi Zhou, Shiming Ge, Yanfeng Wang.
"Underwater Camouflaged Object Tracking Meets Vision-Language SAM2." CVPR Workshop (2025). [paper] [ResearchGate] [知乎] [project] -
MambaTrack: Chunhui Zhang, Li Liu, Hao Wen, Xi Zhou, Yanfeng Wang.
"MambaTrack: Exploiting Dual-Enhancement for Night UAV Tracking." ICASSP (2025). [paper] [code] -
CTVLT: X. Feng, D. Zhang, S. Hu, X. Li, M. Wu, J. Zhang, X. Chen, K. Huang.
"Enhancing Vision-Language Tracking by Effectively Converting Textual Cues into Visual Cues." ICASSP (2025). [paper] [code]
- JLPT: Weng, ZhiMin and Zhang, JinPu and Wang, YueHuan.
"Joint Language Prompt and Object Tracking." ICME (2024). [paper] -
CPIPTrack: Zhu, Hong and Lu, Qingyang and Xue, Lei and Zhang, Pingping and Yuan, Guanglin.
"Vision-Language Tracking With CLIP and Interactive Prompt Learning." TITS (2024). [paper] -
DMITrack: Zhiyi Mo, Guangtong Zhang, Jian Nong, Bineng Zhong, Zhi Li.
"Dual-stream Multi-modal Interactive Vision-language Tracking." MMAsia (2024). [paper] -
PJVLT: Liang, Yanjie and Wu, Qiangqiang and Cheng, Lin and Xia, Changqun and Li, Jia.
"Progressive Semantic-Visual Alignment and Refinement for Vision-Language Tracking." TCSVT (2024). [paper] -
MugTracker: Zhu, Hong and Zhang, Pingping and Xue, Lei and Yuan, Guanglin.
"Multi-modal Understanding and Generation for Object Tracking." TCSVT (2024). [paper] -
CogVLM-Track: Xuexin Liu, Zhuojun Zou, Jie Hao.
"Adaptive Text Feature Updating for Visual-Language Tracking." ICPR (2024). [paper] -
VLTVerse: Xuchen Li, Shiyu Hu, Xiaokun Feng, Dailing Zhang, Meiqi Wu, Jing Zhang, Kaiqi Huang.
"How Texts Help? A Fine-grained Evaluation to Reveal the Role of Language in Vision-Language Tracking." ArXiv (2024). [paper] [project] -
Li, Hengyou and Liu, Xinyan and Li, Guorong and Wang, Shuhui and Qing, Laiyun and Huang, Qingming.
"Boost Tracking by Natural Language With Prompt-Guided Grounding." TITS (2024). [paper] -
ChatTracker: Yiming Sun, Fan Yu, Shaoxiang Chen, Yu Zhang, Junwei Huang, Chenhui Li, Yang Li, Changbo Wang.
"ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model." NeurIPS (2024). [paper] -
SemTrack: Wang, Pengfei and Hui, Xiaofei and Wu, Jing and Yang, Zile and Ong, Kian Eng and Zhao, Xinge and Lu, Beijia and Huang, Dezhao and Ling, Evan and Chen, Weiling and Ma, Keng Teck and Hur, Minhoe and Liu, Jun.
"SemTrack: A Large-scale Dataset for Semantic Tracking in the Wild." ECCV (2024). [paper] [project] -
MemVLT: Xiaokun Feng, Xuchen Li, Shiyu Hu, Dailing Zhang, Meiqi Wu, Jing Zhang, Xiaotang Chen, Kaiqi Huang.
"MemVLT: Visual-Language Tracking with Adaptive Memory-based Prompts." NeurIPS (2024). [paper] [code] -
DTVLT: Xuchen Li, Shiyu Hu, Xiaokun Feng, Dailing Zhang, Meiqi Wu, Jing Zhang, Kaiqi Huang.
"DTVLT: A Multi-modal Diverse Text Benchmark for Visual Language Tracking Based on LLM." ArXiv (2024). [paper] [project] -
WebUOT-1M: Chunhui Zhang, Li Liu, Guanjie Huang, Hao Wen, Xi Zhou, Yanfeng Wang.
"WebUOT-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark." NeurIPS (2024). [paper] [project] -
ElysiumTrack-1M: Han Wang, Yanjie Wang, Yongjie Ye, Yuxiang Nie, Can Huang.
"Elysium: Exploring Object-level Perception in Videos via MLLM." ECCV (2024). [paper] [code] -
VLT-MI: Xuchen Li, Shiyu Hu, Xiaokun Feng, Dailing Zhang, Meiqi Wu, Jing Zhang, Kaiqi Huang.
"Visual Language Tracking with Multi-modal Interaction: A Robust Benchmark." ArXiv (2024). [paper] [project] -
VastTrack: Liang Peng, Junyuan Gao, Xinran Liu, Weihong Li, Shaohua Dong, Zhipeng Zhang, Heng Fan, Libo Zhang.
"VastTrack: Vast Category Visual Object Tracking." NeurIPS (2024). [paper] [project] -
DMTrack: Guangtong Zhang, Bineng Zhong, Qihua Liang, Zhiyi Mo, Shuxiang Song.
"Diffusion Mask-Driven Visual-language Tracking." IJCAI (2024). [paper] -
ATTracker: Jiawei Ge, Jiuxin Cao, Xuelin Zhu, Xinyu Zhang, Chang Liu, Kun Wang, Bo Liu.
"Consistencies are All You Need for Semi-supervised Vision-Language Tracking." ACM MM (2024). [paper] -
ALTracker: Zikai Song, Ying Tang, Run Luo, Lintao Ma, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang.
"Autogenic Language Embedding for Coherent Point Tracking." ACM MM (2024). [paper] [code] -
Elysium: Han Wang, Yanjie Wang, Yongjie Ye, Yuxiang Nie, Can Huang.
"Elysium: Exploring Object-level Perception in Videos via MLLM." ECCV (2024). [paper] [code] -
Tapall.ai: Mingqi Gao, Jingnan Luo, Jinyu Yang, Jungong Han, Feng Zheng.
"1st Place Solution for MeViS Track in CVPR 2024 PVUW Workshop: Motion Expression guided Video Segmentation." ArXiv (2024). [paper] [code] -
DTLLM-VLT: Xuchen Li, Xiaokun Feng, Shiyu Hu, Meiqi Wu, Dailing Zhang, Jing Zhang, Kaiqi Huang.
"DTLLM-VLT: Diverse Text Generation for Visual Language Tracking Based on LLM." CVPRW (2024). [paper] [project] -
UVLTrack: Yinchao Ma, Yuyang Tang, Wenfei Yang, Tianzhu Zhang, Jinpeng Zhang, Mengxue Kang.
"Unifying Visual and Vision-Language Tracking via Contrastive Learning." AAAI (2024). [paper] [code] -
QueryNLT: Yanyan Shao, Shuting He, Qi Ye, Yuchao Feng, Wenhan Luo, Jiming Chen.
"Context-Aware Integration of Language and Visual References for Natural Language Tracking." CVPR (2024). [paper] [code] -
OSDT: Guangtong Zhang, Bineng Zhong, Qihua Liang, Zhiyi Mo, Ning Li, Shuxiang Song.
"One-Stream Stepwise Decreasing for Vision-Language Tracking." TCSVT (2024). [paper] -
TTCTrack: Zhongjie Mao; Yucheng Wang; Xi Chen; Jia Yan.
"Textual Tokens Classification for Multi-Modal Alignment in Vision-Language Tracking." ICASSP (2024). [paper] -
OneTracker: Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue Guo, Kaixun Jiang, Yiting Cheng, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang.
"OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning." CVPR (2024). [paper] -
MMTrack: Zheng, Yaozong and Zhong, Bineng and Liang, Qihua and Li, Guorong and Ji, Rongrong and Li, Xianxian.
"Toward Unified Token Learning for Vision-Language Tracking." TCSVT (2024). [paper] [code] -
Ping Ye, Gang Xiao, Jun Liu.
"Multimodal Features Alignment for Vision–Language Object Tracking." Remote Sensing (2024). [paper] -
VLT_OST: Mingzhe Guo, Zhipeng Zhang, Liping Jing, Haibin Ling, Heng Fan.
"Divert More Attention to Vision-Language Object Tracking." TPAMI (2024). [paper] [code] -
SATracker: Jiawei Ge, Xiangmei Chen, Jiuxin Cao, Xuelin Zhu, Weijia Liu, Bo Liu.
"Beyond Visual Cues: Synchronously Exploring Target-Centric Semantics for Vision-Language Tracking." ArXiv (2024). [paper] -
VLFSE: Fuchao Yang, Mingkai Jiang, Qiaohong Hao, Xiaolei Zhao, Qinghe Feng.
"VLFSE: Enhancing visual tracking through visual language fusion and state update evaluator." Machine Learning with Applications (2024). [paper]
- WebUAV-3M: Chunhui Zhang, Guanjie Huang, Li Liu, Shan Huang, Yinan Yang, Xiang Wan, Shiming Ge, Dacheng Tao.
"WebUAV-3M: A Benchmark for Unveiling the Power of Million-Scale Deep UAV Tracking." TPAMI (2023). [paper] [project] -
All in One: Chunhui Zhang, Xin Sun, Li Liu, Yiqian Yang, Qiong Liu, Xi Zhou, Yanfeng Wang.
"All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment." ACM MM (2023). [paper] [code] -
CiteTracker: Xin Li, Yuqing Huang, Zhenyu He, Yaowei Wang, Huchuan Lu, Ming-Hsuan Yang.
"CiteTracker: Correlating Image and Text for Visual Tracking." ICCV (2023). [paper] [code] -
JointNLT: Li Zhou, Zikun Zhou, Kaige Mao, Zhenyu He.
"Joint Visual Grounding and Tracking with Natural Language Specification." CVPR (2023). [paper] [code] -
MGIT: Shiyu Hu, Dailin Zhang, Meiqi Wu, Xiaokun Feng, Xuchen Li, Xin Zhao, Kaiqi Huang.
"A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship." NeurIPS (2023). [paper] [code] -
DecoupleTNL: Ma, Ding and Wu, Xiangqian.
"Tracking by Natural Language Specification with Long Short-term Context Decoupling." ICCV (2023). [paper] -
Haojie Zhao, Xiao Wang, Dong Wang, Huchuan Lu, Xiang Ruan.
"Transformer vision-language tracking via proxy token guided cross-modal fusion." PRL (2023). [paper] -
OVLM: Zhang, Huanlong and Wang, Jingchao and Zhang, Jianwei and Zhang, Tianzhu and Zhong, Bineng.
"One-Stream Vision-Language Memory Network for Object Tracking." TMM (2023). [paper] [code] -
VLATrack: Zuo, Jixiang and Wu, Tao and Shi, Meiping and Liu, Xueyan and Zhao, Xijun.
"Multi-Modal Object Tracking with Vision-Language Adaptive Fusion and Alignment." RICAI (2023). [paper]
- VLT_TT: Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing.
"Divert More Attention to Vision-Language Tracking." NeurIPS (2022). [paper] [code] -
AdaRS: Li, Yihao and Yu, Jun and Cai, Zhongpeng and Pan, Yuwen.
"Cross-modal Target Retrieval for Tracking by Natural Language." CVPR Workshops (2022). [paper]
- TNL2K: Wang, Xiao and Shu, Xiujun and Zhang, Zhipeng and Jiang, Bo and Wang, Yaowei and Tian, Yonghong and Wu, Feng.
"Towards More Flexible and Accurate Object Tracking with Natural Language: Algorithms and Benchmark." CVPR (2021). [paper] [project] -
LaSOT_EXT: Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Harshit, Mingzhen Huang, Juehuan Liu, Yong Xu, Chunyuan Liao, Lin Yuan, Haibin Ling.
"LaSOT: A High-quality Large-scale Single Object Tracking Benchmark." IJCV (2021). [paper] [project] -
SNLT: Qi Feng, Vitaly Ablavsky, Qinxun Bai, Stan Sclaroff.
"Siamese Natural Language Tracker: Tracking by Natural Language Descriptions with Siamese Trackers." CVPR (2021). [paper] [code]
- LaSOT: Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, Haibin Ling.
"LaSOT: A High-quality Benchmark for Large-scale Single Object Tracking." CVPR (2019). [paper] [project]
- OTB99-L: Zhenyang Li, Ran Tao, Efstratios Gavves, Cees G. M. Snoek, Arnold W.M. Smeulders.
"Tracking by Natural Language Specification." CVPR (2017). [paper] [project]
| Dataset | Pub. & Date | WebSite | Introduction |
|---|---|---|---|
| FE108 | ICCV-2021 | FE108 | 108 event videos |
| COESOT | arXiv-2022 | COESOT | 1354 RGB-event video pairs |
| VisEvent | TC-2023 | VisEvent | 820 RGB-event video pairs |
| EventVOT | CVPR-2024 | EventVOT | 1141 event videos |
| CRSOT | arXiv-2024 | CRSOT | 1030 RGB-event video pairs |
| FELT | arXiv-2024 | FELT | 742 RGB-event video pairs |
| MEVDT | arXiv-2024 | MEVDT | 63 multimodal sequences with 13k images, 5M events, 10k object labels and 85 trajectories |
| FELT v2 | arXiv-2025 | FELT v2 | 1,044 long-term RGB-event video pairs |
- WTA: Taha Razzaq, Asim Iqbal.
"Multimodal Neuromorphic Event-Frame Fusion in Domain-Generalized Vision Transformer for Dynamic Object Tracking." ICCVW (2025). [paper] -
GUSEM: Oussama Abdul Hay, Sara Alansari, Mohamad Alansari, Yahya Zweiri.
"Comparing Representations for Event Camera-based Visual Object Tracking." ICCVW (2025). [paper] -
HAD: Yao Deng, Xian Zhong, Wenxuan Liu, Zhaofei Yu, Jingling Yuan, Tiejun Huang.
"HAD: Hierarchical Asymmetric Distillation to Bridge Spatio-Temporal Gaps in Event-Based Object Tracking." ArXiv (2025). [paper] -
UREPTrack: Min Lu.
"UREPTrack: Unified RGB-Event Visual Tracking via PoolFormer Backbone." ArXiv (2025). [paper] [code] -
ISTASTrack: Siying Liu, Zikai Wang, Hanle Zheng, Yifan Hu, Xilin Wang, Qingkai Yang, Jibin Wu, Hao Guo, Lei Deng.
"ISTASTrack: Bridging ANN and SNN via ISTA Adapter for RGB-Event Tracking." ArXiv (2025). [paper] [code] -
AMTTrack & FELT v2: Xiao Wang, Xufeng Lou, Shiao Wang, Ju Huang, Lan Chen, Bo Jiang.
"Long-Term Visual Object Tracking with Event Cameras: An Associative Memory Augmented Tracker and A Benchmark Dataset." ArXiv (2025). [paper] [code] -
TrackSSD-FEnet: Keqi Liu, Rong Xiao, Deng Xiong, Yongsheng Sang & Jiancheng Lv.
"Joint Frame and Event Object Tracking via Non-causal State Space Duality." ICIC (2025). [paper] -
Mamba-FETrack V2: Shiao Wang, Ju Huang, Qingchuan Ma, Jinfeng Gao, Chunyi Xu, Xiao Wang, Lan Chen, Bo Jiang.
"Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking." ArXiv (2025). [paper] [code] -
EMTrack: Xu, Xianda and Jing, Shilong and Zhang, Zeshu and Chen, Chao and Guo, Guangsha and Lv, Hengyi and Zhao, Yuchen and Gu, Jialin.
"EMTrack: Event-guide Multimodal Transformer for Challenging Single Object Tracking." TGRS (2025). [paper] [code] -
MamTrack: Chuanyu Sun, Jiqing Zhang, Yang Wang, Huilin Ge, Qianchen Xia, Baocai Yin, Xin Yang.
"Exploring Historical Information for RGBE Visual Tracking with Mamba." CVPR (2025). [paper] [code] -
SpikeFET: Jingjun Yang, Liangwei Fan, Jinpu Zhang, Xiangkai Lian, Hui Shen, Dewen Hu.
"Fully Spiking Neural Networks for Unified Frame-Event Object Tracking." NeurIPS (2025). [paper] -
SFTrack: Shiao Wang, Xiao Wang, Liye Jin, Bo Jiang, Lin Zhu, Lan Chen, Yonghong Tian, Bin Luo.
"Towards Low-Latency Event Stream-based Visual Object Tracking: A Slow-Fast Approach." ArXiv (2025). [paper] [code] -
FAEFTrack: Shang, Xilong and Zeng, Zhaoyuan and Li, Xiaopeng and Fan, Cien and Jin, Weizheng.
"Improving Object Tracking Performances with Frequency Learning for Event Cameras." IEEE Sensors Journal (2025). [paper] -
Qiang Chen, Xiao Wang, Haowen Wang, Bo Jiang, Lin Zhu, Dawei Zhang, Yonghong Tian, Jin Tang.
"Adversarial Attack for RGB-Event based Visual Object Tracking." ArXiv (2025). [paper] [code] -
HPL: Wang, Mianzhao and Shi, Fan and Cheng, Xu and Chen, Shengyong.
"Prior Knowledge-Driven Hybrid Prompter Learning for RGB-Event Tracking." TCSVT (2025). [paper] -
SNNPTrack: Ji, Yixi and Zhao, Qinghang and Liang, Yuping and Wu, Jinjian.
"SNNPTrack: Spiking Neural Network Based Prompt for High-Accuracy RGBE Tracking." ICASSP (2025). [paper] -
SDTrack: Yimeng Shan, Zhenbang Ren, Haodi Wu, Wenjie Wei, Rui-Jie Zhu, Shuai Wang, Dehao Zhang, Yichen Xiao, Jieyuan Zhang, Kexin Shi, Jingzhinan Wang, Jason K. Eshraghian, Haicheng Qu, Jiqing Zhang, Malu Zhang, Yang Yang.
"SDTrack: A Baseline for Event-based Tracking via Spiking Neural Networks." ArXiv (2025). [paper] -
HDETrack V2: Shiao Wang, Xiao Wang, Chao Wang, Liye Jin, Lin Zhu, Bo Jiang, Yonghong Tian, Jin Tang.
"Event Stream-based Visual Object Tracking: HDETrack V2 and A High-Definition Benchmark." ArXiv (2025). [paper] [code]
- CSAM: Tianlu Zhang, Kurt Debattista, Qiang Zhang, Guiguang Ding, Jungong Han.
"Revisiting motion information for RGB-Event tracking with MOT philosophy." NeurIPS (2024). [paper] -
GS-EVT: Tao Liu, Runze Yuan, Yi'ang Ju, Xun Xu, Jiaqi Yang, Xiangting Meng, Xavier Lagorce, Laurent Kneip.
"GS-EVT: Cross-Modal Event Camera Tracking based on Gaussian Splatting." ArXiv (2024). [paper] -
DS-MESA: Pengcheng Shao, Tianyang Xu, Xuefeng Zhu, Xiaojun Wu, Josef Kittler.
"Dynamic Subframe Splitting and Spatio-Temporal Motion Entangled Sparse Attention for RGB-E Tracking." ArXiv (2024). [paper] -
BlinkTrack: Yichen Shen, Yijin Li, Shuo Chen, Guanglin Li, Zhaoyang Huang, Hujun Bao, Zhaopeng Cui, Guofeng Zhang.
"BlinkTrack: Feature Tracking over 100 FPS via Events and Images." ArXiv (2024). [paper] -
FE-TAP: Jiaxiong Liu, Bo Wang, Zhen Tan, Jinpu Zhang, Hui Shen, Dewen Hu.
"Tracking Any Point with Frame-Event Fusion Network at High Frame Rate." ArXiv (2024). [paper] [code] -
MambaEVT: Xiao Wang, Chao Wang, Shiao Wang, Xixi Wang, Zhicheng Zhao, Lin Zhu, Bo Jiang.
"MambaEVT: Event Stream based Visual Object Tracking using State Space Model." ArXiv (2024). [paper] [code] -
eMoE-Tracker: Yucheng Chen, Lin Wang.
"eMoE-Tracker: Environmental MoE-based Transformer for Robust Event-guided Object Tracking." ArXiv (2024). [paper] [code] -
ED-DCFNet: Raz Ramon, Hadar Cohen-Duwek, Elishai Ezra Tsur.
"ED-DCFNet: An Unsupervised Encoder-decoder Neural Model for Event-driven Feature Extraction and Object Tracking." CVPRW (2024). [paper] [code] -
Mamba-FETrack: Ju Huang, Shiao Wang, Shuai Wang, Zhe Wu, Xiao Wang, Bo Jiang.
"Mamba-FETrack: Frame-Event Tracking via State Space Model." ArXiv (2024). [paper] [code] -
AMTTrack & FELT: Xiao Wang, Ju Huang, Shiao Wang, Chuanming Tang, Bo Jiang, Yonghong Tian, Jin Tang, Bin Luo.
"Long-term Frame-Event Visual Tracking: Benchmark Dataset and Baseline." ArXiv (2024). [paper] [code] -
TENet: Pengcheng Shao, Tianyang Xu, Zhangyong Tang, Linze Li, Xiao-Jun Wu, Josef Kittler.
"TENet: Targetness Entanglement Incorporating with Multi-Scale Pooling and Mutually-Guided Fusion for RGB-E Object Tracking." ArXiv (2024). [paper] [code] -
HDETrack: Xiao Wang, Shiao Wang, Chuanming Tang, Lin Zhu, Bo Jiang, Yonghong Tian, Jin Tang.
"Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline." CVPR (2024). [paper] [code] -
CRSOT: Yabin Zhu, Xiao Wang, Chenglong Li, Bo Jiang, Lin Zhu, Zhixiang Huang, Yonghong Tian, Jin Tang.
"CRSOT: Cross-Resolution Object Tracking using Unaligned Frame and Event Cameras." ArXiv (2024). [paper] [code] -
CDFI: Jiqing Zhang, Xin Yang, Yingkai Fu, Xiaopeng Wei, Baocai Yin, Bo Dong.
"Object Tracking by Jointly Exploiting Frame and Event Domain." ArXiv (2024). [paper] -
MMHT: Hongze Sun, Rui Liu, Wuque Cai, Jun Wang, Yue Wang, Huajin Tang, Yan Cui, Dezhong Yao, Daqing Guo.
"Reliable Object Tracking by Multimodal Hybrid Feature Extraction and Transformer-Based Fusion." ArXiv (2024). [paper]
-
Zhiyu Zhu, Junhui Hou, Dapeng Oliver Wu.
"Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers." ICCV (2023). [paper] [code] -
AFNet: Jiqing Zhang, Yuanchen Wang, Wenxi Liu, Meng Li, Jinpeng Bai, Baocai Yin, Xin Yang.
"Frame-Event Alignment and Fusion Network for High Frame Rate Tracking." CVPR (2023). [paper] [code] -
VisEvent: Xiao Wang, Jianing Li, Lin Zhu, Zhipeng Zhang, Zhe Chen, Xin Li, Yaowei Wang, Yonghong Tian, Feng Wu.
"VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows." TC (2023). [paper] [code]
-
Event-tracking: Zhiyu Zhu, Junhui Hou, Xianqiang Lyu.
"Learning Graph-embedded Key-event Back-tracing for Object Tracking in Event Clouds." NeurIPS (2022). [paper] [code] -
STNet: Jiqing Zhang, Bo Dong, Haiwei Zhang, Jianchuan Ding, Felix Heide, Baocai Yin, Xin Yang.
"Spiking Transformers for Event-based Single Object Tracking." CVPR (2022). [paper] [code] -
CEUTrack: Chuanming Tang, Xiao Wang, Ju Huang, Bo Jiang, Lin Zhu, Jianlin Zhang, Yaowei Wang, Yonghong Tian.
"Revisiting Color-Event based Tracking: A Unified Network, Dataset, and Metric." ArXiv (2022). [paper] [code]
- CFE: Jiqing Zhang, Kai Zhao, Bo Dong, Yingkai Fu, Yuxin Wang, Xin Yang, Baocai Yin.
"Multi-domain Collaborative Feature Representation for Robust Visual Object Tracking." The Visual Computer (2021). [paper]
| Dataset | Pub. & Date | WebSite | Introduction |
|---|---|---|---|
| PTB | ICCV-2013 | PTB | 100 sequences |
| STC | TC-2018 | STC | 36 sequences |
| CDTB | ICCV-2019 | CDTB | 80 sequences |
| VOT-RGBD 2019/2020/2021 | ICCVW-2019 | VOT-RGBD 2019 | VOT-RGBD 2019, 2020, and 2021 are based on CDTB |
| DepthTrack | ICCV-2021 | DepthTrack | 200 sequences |
| VOT-RGBD 2022 | ECCVW-2022 | VOT-RGBD 2022 | VOT-RGBD 2022 is based on CDTB and DepthTrack |
| RGBD1K | AAAI-2023 | RGBD1K | 1,050 sequences, 2.5M frames |
| DTTD | CVPRW-2023 | DTTD | 103 scenes, 55,691 frames |
| ARKitTrack | CVPR-2023 | ARKitTrack | 300 RGB-D sequences, 455 targets, 229.7K video frames |
-
RDT-TEF: Long Gao and Yuze Ke and Wanlin Zhao and Yang Zhang and Yan Jiang and Gang He and Yunsong Li.
"RGB-D visual object tracking with transformer-based multi-modal feature fusion." Knowledge-Based Systems (2025). [paper] -
HMAD: Boyue Xu, Yi Xu, Ruichao Hou, Jia Bei, Tongwei Ren, Gangshan Wu.
"RGB-D Tracking via Hierarchical Modality Aggregation and Distribution Network." ArXiv (2025). [paper]
-
DAMT: Yifan Pan, Tianyang Xu, Xue-Feng Zhu, Xiaoqing Luo, Xiao-Jun Wu, Josef Kittler.
"Learning Explicit Modulation Vectors for Disentangled Transformer Attention-Based RGB-D Visual Tracking." ICPR (2024). [paper] -
3DPT: Bocen Li, Yunzhi Zhuge, Shan Jiang, Lijun Wang, Yifan Wang, Huchuan Lu.
"3D Prompt Learning for RGB-D Tracking." ACCV (2024). [paper] -
UBPT: Ou, Zhou and Zhang, Dawei and Ying, Ge and Zheng, Zhonglong.
"UBPT: Unidirectional and Bidirectional Prompts for RGBD Tracking." IEEE Sensors Journal (2024). [paper] -
L2FIG-Tracker: Jintao Su, Ye Liu, Shitao Song.
"L2FIG-Tracker: L2-Norm Based Fusion with Illumination Guidance for RGB-D Object Tracking." PRCV (2024). [paper] -
Depth Attention: Yu Liu, Arif Mahmood, Muhammad Haris Khan.
"Depth Attention for Robust RGB Tracking." ACCV (2024). [paper] [code] -
DepthRefiner: Lai, Simiao and Wang, Dong and Lu, Huchuan.
"DepthRefiner: Adapting RGB Trackers to RGBD Scenes via Depth-Fused Refinement." ICME (2024). [paper] -
TABBTrack: Ge Ying and Dawei Zhang and Zhou Ou and Xiao Wang and Zhonglong Zheng.
"Temporal adaptive bidirectional bridging for RGB-D tracking." PR (2024). [paper] -
AMATrack: Ye, Ping and Xiao, Gang and Liu, Jun.
"AMATrack: A Unified Network With Asymmetric Multimodal Mixed Attention for RGBD Tracking." IEEE TIM (2024). [paper] -
SSLTrack: Xue-Feng Zhu, Tianyang Xu, Sara Atito, Muhammad Awais, Xiao-Jun Wu, Zhenhua Feng, Josef Kittler.
"Self-supervised learning for RGB-D object tracking." PR (2024). [paper] -
VADT: Zhang, Guangtong and Liang, Qihua and Mo, Zhiyi and Li, Ning and Zhong, Bineng.
"Visual Adapt for RGBD Tracking." ICASSP (2024). [paper] -
FECD: Xue-Feng Zhu, Tianyang Xu, Xiao-Jun Wu, Josef Kittler.
"Feature enhancement and coarse-to-fine detection for RGB-D tracking." PRL (2024). [paper] -
CDAAT: Xue-Feng Zhu, Tianyang Xu, Xiao-Jun Wu, Zhenhua Feng, Josef Kittler.
"Adaptive Colour-Depth Aware Attention for RGB-D Object Tracking." SPL (2024). [paper] [code]
-
SPT: Xue-Feng Zhu, Tianyang Xu, Zhangyong Tang, Zucheng Wu, Haodong Liu, Xiao Yang, Xiao-Jun Wu, Josef Kittler.
"RGBD1K: A Large-scale Dataset and Benchmark for RGB-D Object Tracking." AAAI (2023). [paper] [code] -
EMT: Yang, Jinyu and Gao, Shang and Li, Zhe and Zheng, Feng and Leonardis, Aleš.
"Resource-Efficient RGBD Aerial Tracking." CVPR (2023). [paper] [code]
-
Track-it-in-3D: Jinyu Yang, Zhongqun Zhang, Zhe Li, Hyung Jin Chang, Aleš Leonardis, Feng Zheng.
"Towards Generic 3D Tracking in RGBD Videos: Benchmark and Baseline." ECCV (2022). [paper] [code] -
DMTracker: Shang Gao, Jinyu Yang, Zhe Li, Feng Zheng, Aleš Leonardis, Jingkuan Song.
"Learning Dual-Fused Modality-Aware Representations for RGBD Tracking." ECCVW (2022). [paper]
-
DeT: Song Yan, Jinyu Yang, Jani Käpylä, Feng Zheng, Aleš Leonardis, Joni-Kristian Kämäräinen.
"DepthTrack: Unveiling the Power of RGBD Tracking." ICCV (2021). [paper] [code] -
TSDM: Pengyao Zhao, Quanli Liu, Wei Wang, Qiang Guo.
"TSDM: Tracking by SiamRPN++ with a Depth-refiner and a Mask-generator." ICPR (2021). [paper] [code] -
3s-RGBD: Feng Xiao, Qiuxia Wu, Han Huang.
"Single-scale siamese network based RGB-D object tracking with adaptive bounding boxes." Neurocomputing (2021). [paper]
-
DAL: Yanlin Qian, Alan Lukezic, Matej Kristan, Joni-Kristian Kämäräinen, Jiri Matas.
"DAL: A deep depth-aware long-term tracker." ICPR (2020). [paper] [code] -
RF-CFF: Yong Wang, Xian Wei, Hao Shen, Lu Ding, Jiuqing Wan.
"Robust fusion for RGB-D tracking using CNN features." Applied Soft Computing Journal (2020). [paper] -
SiamOC: Wenli Zhang, Kun Yang, Yitao Xin, Rui Meng.
"An Occlusion-Aware RGB-D Visual Object Tracking Method Based on Siamese Network." ICSP (2020). [paper] -
WCO: Weichun Liu, Xiaoan Tang, Chengling Zhao.
"Robust RGBD Tracking via Weighted Convolution Operators." Sensors (2020). [paper]
-
OTR: Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas.
"Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters." CVPR (2019). [paper] [code] -
H-FCN: Ming-xin Jiang, Chao Deng, Jing-song Shan, Yuan-yuan Wang, Yin-jie Jia, Xing Sun.
"Hierarchical multi-modal fusion FCN with attention model for RGB-D tracking." Information Fusion (2019). [paper] -
Kuai, Yangliu and Wen, Gongjian and Li, Dongdong and Xiao, Jingjing.
"Target-Aware Correlation Filter Tracking in RGBD Videos." IEEE Sensors Journal (2019). [paper] -
RGBD-OD: Yujun Xie, Yao Lu, Shuang Gu.
"RGB-D Object Tracking with Occlusion Detection." CIS (2019). [paper] -
3DMS: Alexander Gutev, Carl James Debono.
"Exploiting Depth Information to Increase Object Tracking Robustness." ICST (2019). [paper] -
CA3DMS: Ye Liu, Xiao-Yuan Jing, Jianhui Nie, Hao Gao, Jun Liu, Guo-Ping Jiang.
"Context-Aware Three-Dimensional Mean-Shift With Occlusion Handling for Robust Object Tracking in RGB-D Videos." TMM (2019). [paper] [code] -
Depth-CCF: Guanqun Li, Lei Huang, Peichang Zhang, Qiang Li, YongKai Huo.
"Depth Information Aided Constrained correlation Filter for Visual Tracking." GSKI (2019). [paper]
-
STC: Jingjing Xiao, Rustam Stolkin, Yuqing Gao, Aleš Leonardis.
"Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints." TC (2018). [paper] [code] -
Kart, Uğur and Kämäräinen, Joni-Kristian and Matas, Jiří.
"How to Make an RGBD Tracker ?." ECCVW (2018). [paper] [code] -
Jiaxu Leng, Ying Liu.
"Real-Time RGB-D Visual Tracking With Scale Estimation and Occlusion Handling." IEEE Access (2018). [paper] -
DM-DCF: Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas, Lixin Fan, Francesco Cricri.
"Depth Masked Discriminative Correlation Filter." ICPR (2018). [paper] -
OACPF: Yayu Zhai, Ping Song, Zonglei Mou, Xiaoxiao Chen, Xiongjun Liu.
"Occlusion-Aware Correlation Particle Filter Target Tracking Based on RGBD Data." Access (2018). [paper] -
RT-KCF: Han Zhang, Meng Cai, Jianxun Li.
"A Real-time RGB-D tracker based on KCF." CCDC (2018). [paper]
-
ODIOT: Wei-Long Zheng, Shan-Chun Shen, Bao-Liang Lu.
"Online Depth Image-Based Object Tracking with Sparse Representation and Object Detection." Neural Process Letters (2017). [paper] -
ROTSL: Zi-ang Ma, Zhi-yu Xiang.
"Robust Object Tracking with RGBD-based Sparse Learning." ITEE (2017). [paper]
-
DLS: Ning An, Xiao-Guang Zhao, Zeng-Guang Hou.
"Online RGB-D Tracking via Detection-Learning-Segmentation." ICPR (2016). [paper] -
DS-KCF_shape: Sion Hannuna, Massimo Camplani, Jake Hall, Majid Mirmehdi, Dima Damen, Tilo Burghardt, Adeline Paiement, Lili Tao.
"DS-KCF: A Real-time Tracker for RGB-D Data." RTIP (2016). [paper] [code] -
3D-T: Adel Bibi, Tianzhu Zhang, Bernard Ghanem.
"3D Part-Based Sparse Tracker with Automatic Synchronization and Registration." CVPR (2016). [paper] [code] -
OAPF: Kourosh Meshgi, Shin-ichi Maeda, Shigeyuki Oba, Henrik Skibbe, Yu-zhe Li, Shin Ishii.
"Occlusion Aware Particle Filter Tracker to Handle Complex and Persistent Occlusions." CVIU (2016). [paper]
-
CDG: Huizhang Shi, Changxin Gao, Nong Sang.
"Using Consistency of Depth Gradient to Improve Visual Tracking in RGB-D sequences." CAC (2015). [paper] -
DS-KCF: Massimo Camplani, Sion Hannuna, Majid Mirmehdi, Dima Damen, Adeline Paiement, Lili Tao, Tilo Burghardt.
"Real-time RGB-D Tracking with Depth Scaling Kernelised Correlation Filters and Occlusion Handling." BMVC (2015). [paper] [code] -
DOHR: Ping Ding, Yan Song.
"Robust Object Tracking Using Color and Depth Images with a Depth Based Occlusion Handling and Recovery." FSKD (2015). [paper] -
ISOD: Yan Chen, Yingju Shen, Xin Liu, Bineng Zhong.
"3D Object Tracking via Image Sets and Depth-Based Occlusion Detection." SP (2015). [paper] -
OL3DC: Bineng Zhong, Yingju Shen, Yan Chen, Weibo Xie, Zhen Cui, Hongbo Zhang, Duansheng Chen, Tian Wang, Xin Liu, Shujuan Peng, Jin Gou, Jixiang Du, Jing Wang, Wenming Zheng.
"Online Learning 3D Context for Robust Visual Tracking." Neurocomputing (2015). [paper]
- MCBT: Qi Wang, Jianwu Fang, Yuan Yuan.
"Multi-Cue Based Tracking." Neurocomputing (2014). [paper]
- PT: Shuran Song, Jianxiong Xiao.
"Tracking Revisited using RGBD Camera: Unified Benchmark and Baselines." ICCV (2013). [paper] [code]
-
Matteo Munaro, Filippo Basso, Emanuele Menegatti.
"Tracking people within groups with RGB-D data." IROS (2012). [paper] -
AMCT: Germán Martín García, Dominik Alexander Klein, Jörg Stückler, Simone Frintrop, Armin B. Cremers.
"Adaptive Multi-cue 3D Tracking of Arbitrary Objects." JDOS (2012). [paper]
| Dataset | Pub. & Date | WebSite | Introduction |
|---|---|---|---|
| GTOT | TIP-2016 | GTOT | 50 video pairs, 15K frames |
| RGBT210 | ACM MM-2017 | RGBT210 | 210 video pairs |
| RGBT234 | PR-2018 | RGBT234 | 234 video pairs, the extension of RGBT210 |
| LasHeR | TIP-2021 | LasHeR | 1224 video pairs, 730K frames |
| VTUAV | CVPR-2022 | VTUAV | Visible-thermal UAV tracking, 500 sequences, 1.7 million high-resolution frame pairs |
| MV-RGBT | arXiv-2024 | MV-RGBT | 122 video pairs, 89.9K frames |
| NOT-156 | TGRS-2025 | NOT-156 | 156 videos with low-light image and thermal infrared |
-
MPANet: Liu, Xiang and Li, Haiyan and Sheng, Victor and Ma, Yujun and Liang, Xiaoguo and Wang, Guanbo.
"Scale-Aware Attention and Multi-Modal Prompt Learning with Fusion Adapter for RGBT Tracking." TMM (2025). [paper] -
VFPTrack: Hongtao Yang, Bineng Zhong, Qihua Liang, Zhiruo Zhu, Yaozong Zheng, Ning Li.
"Robust RGB-T Tracking via Learnable Visual Fourier Prompt Fine-tuning and Modality Fusion Prompt Generation." TMM (2025). [paper] -
HMFF: Na Li, Kai Huang, Zihang Wang, Yuquan Gan, Jinglu He.
"Hierarchical multi-modal feature fusion for RGBT tracking." Signal, Image and Video Processing (2025). [paper] [code] -
MCINet: Zhao Gao and Dongming Zhou and Zhiyong Wu and Yisong Liu and Qingqing Shan.
"MCINet: Multimodal Context-Aware Network for RGBT Tracking." KBS (2025). [paper] [code] -
ACAttack: Xinyu Xiang, Qinglong Yan, Hao Zhang, Jiayi Ma.
"ACAttack: Adaptive Cross Attacking RGB-T Tracker via Multi-Modal Response Decoupling." CVPR (2025). [paper] [code] -
Bayer-Thermal: Augustin Borne, Christophe Hennequin, Stéphane Bazeille, Philippe De Faria, Franz Quint, Sébastien Changey, and Christophe Cudel.
"Multimodal object tracking using raw visible and thermal infrared data." SPIE (2025). [paper] -
FMTrack: Xue, Yuanliang and Jin, Guodong and Zhong, Bineng and Shen, Tao and Tan, Lining and Xue, Chaocan and Zheng, Yaozong.
"FMTrack: Frequency-aware Interaction and Multi-Expert Fusion for RGB-T Tracking." TCSVT (2025). [paper] [code] -
Ma, Shaoyang and Zhang, Kai and Yang, Yao and Liu, Qiyan and Chen, Gang.
"Vision-Inspired Transformer-Based Thermal Infrared Target Tracking Framework for Internet of Things." IEEE Internet of Things Journal (2025). [paper] -
MTNet: Ruichao Hou, Boyue Xu, Tongwei Ren, Gangshan Wu.
"MTNet: Learning modality-aware representation with transformer for RGBT tracking." ArXiv (2025). [paper] -
EHDA: Qiao Li, Kanlun Tan, Qiao Liu, Di Yuan, Xin Li, Yunpeng Liu.
"Efficient Hierarchical Domain Adaptive Thermal Infrared Tracking." ICASSP (2025). [paper] -
Hoang, Quynh T. X. and Duong, Soan T. M. and Bui, Ly and Tran, Duy Q.
"A Confidence-Based Sampling Strategy for Dense Temporal Token Learning in Thermal Infrared Object Tracking." ICIP (2025). [paper] -
HFDAE: Awad, Mohamed and Elliethy, Ahmed and Ahmad, M. Omair and Swamy, M. N. S..
"Adaptive Hierarchical Feature Difference Auto-Encoder for Robust RGB-T Object Tracking." ICIP (2025). [paper] [code] -
RAMR: Zhao Gao and Dongming Zhou and Yisong Liu and Qingqing Shan.
"RAMR: A Role-Adaptive Modality Recalibration Network for RGBT Tracking." Expert Systems with Applications (2025). [paper] [code] -
MFJA: Xue, Hu and Zhu, Hao and Ran, Zhidan and Tang, Xianlun and Qi, Guanqiu and Zhu, Zhiqin and Kuok, Sin-Chi and Leung, Henry.
"Feature Fusion and Enhancement for Lightweight Visible-Thermal Infrared Tracking via Multiple Adapters." TCSVT (2025). [paper] [code] -
CST Anti-UAV: Bin Xie, Congxuan Zhang, Fagan Wang, Peng Liu, Feng Lu, Zhen Chen, Weiming Hu.
"CST Anti-UAV: A Thermal Infrared Benchmark for Tiny UAV Tracking in Complex Scenes." ICCVW (2025). [paper] -
TIPTrack: Kaixiang Yan, Wenhua Qian.
"TIPTrack: time-series information prompt network for RGBT tracking." ESWA (2025). [paper] -
MAP: Guyue Hu, Zhanghuan Wang, Chenglong Li, Duzhi Yuan, Bin He, Jin Tang.
"Missingness-aware prompting for modality-missing RGBT tracking." J. King Saud Univ. Comput. Inf. Sci (2025). [paper] [code] -
MRTTrack: Pujian Lai and Dong Gao and Shilei Wang and Gong Cheng.
"Mining Representative Tokens via Transformer-based Multi-modal Interaction for RGB-T Tracking." PR (2025). [paper] [code] -
Jianming Chen and Dingjian Li and Xiangjin Zeng and Yaman Jing and Zhenbo Ren and Jianglei Di and Yuwen Qin.
"Cross-modal information interaction of binocular predictive networks for RGBT tracking." Digital Signal Processing (2025). [paper] [code] -
DIDPT: Muyang Li, Xiwen Ren, Guangwen Luo, Haofei Zhang, Ruqian Hao, Juanxiu Liu.
"DIDPT: Dense Interaction Deep Prompt RGBT Tracking." IEEE Sensors Journal (2025). [paper] -
LRPD: Qingkuo Hu, Yichen Li, Wenbin Yu.
"Exploiting Multimodal Prompt Learning and Distillation for RGB-T Tracking." ICMR (2025). [paper] -
MCIT: Yu Qin and Jianming Zhang and Shimeng Fan and Zikang Liu and Jin Wang.
"MCIT: Multi-level cross-modal interactive transformer for RGBT tracking." Neurocomputing (2025). [paper] [code] -
mmMobileViT: Mahdi Falaki, Maria A. Amer.
"Lightweight RGB-T Tracking with Mobile Vision Transformers." ArXiv (2025). [paper] -
Hui Zhao, Lei Zhang.
"Dual-stream siamese network for RGB-T dual-modal fusion object tracking on UAV." The Journal of Supercomputing (2025). [paper] -
MGNet: Jianming Zhang and Jing Yang and Yu Qin and Zhu Xiao and Jin Wang.
"MGNet: RGBT tracking via cross-modality cross-region mutual guidance." Neural Networks (2025). [paper] -
MGTrack: Ma, Shaoyang and Yang, Yao and Zhang, Kai and Chen, Gang.
"Transformer-Based Memory Guided Thermal Infrared Target Tracking Framework for Traffic Assistance." TITS (2025). [paper] -
STTrack: Yuan, Di and Zhang, Haiping and Liu, Qiao and Chang, Xiaojun and He, Zhenyu.
"Transformer-based RGBT Tracking with Spatio-Temporal Information Fusion." IEEE Sensors Journal (2025). [paper] -
SiamMLGR: Peng Gao and Shi-Min Li and Fei Wang and Hamido Fujita and Hanan Aljuaid and Ru-Yue Yuan.
"Learning multi-level graph attentional representation for thermal infrared object tracking." Engineering Applications of Artificial Intelligence (2025). [paper] -
GDSTrack: Shenglan Li, Rui Yao, Yong Zhou, Hancheng Zhu, Kunyang Sun, Bing Liu, Zhiwen Shao, Jiaqi Zhao.
"Modality-Guided Dynamic Graph Fusion and Temporal Diffusion for Self-Supervised RGB-T Tracking." IJCAI (2025). [paper] [code] -
SMMT: Shang Zhang, Huanbin Zhang, Dali Feng, Yujie Cui, Ruoyan Xiong, Cen He.
"SMMT: Siamese Motion Mamba with Self-attention for Thermal Infrared Target Tracking." ArXiv (2025). [paper] -
DMD: Hu, Yufan and Shao, Zekai and Fan, Bin and Liu, Hongmin.
"Dual-level Modality De-biasing for RGB-T Tracking." TIP (2025). [paper] -
AETrack: Zhu, Zhiruo and Zhong, Bineng and Liang, Qihua and Yang, Hongtao and Zheng, Yaozong and Li, Ning.
"Adaptive Expert Decision for RGB-T Tracking." TCSVT (2025). [paper] -
MFNet: Fanghua Hong, Wanyu Wang, Andong Lu, Lei Liu, Qunjing Wang.
"Efficient RGBT Tracking via Multi-Path Mamba Fusion Network." SPL (2025). [paper] -
SMTT: Shang Zhang, HuiPan Guan, XiaoBo Ding, Ruoyan Xiong, Yue Zhang.
"SMTT: Novel Structured Multi-task Tracking with Graph-Regularized Sparse Representation for Robust Thermal Infrared Target Tracking." ArXiv (2025). [paper] -
STARS: Shang Zhang, Xiaobo Ding, Huanbin Zhang, Ruoyan Xiong, Yue Zhang.
"STARS: Sparse Learning Correlation Filter with Spatio-temporal Regularization and Super-resolution Reconstruction for Thermal Infrared Target Tracking." ArXiv (2025). [paper] -
RAMCT: Shang Zhang, Yuke Hou, Guoqiang Gong, Ruoyan Xiong, Yue Zhang.
"RAMCT: Novel Region-adaptive Multi-channel Tracker with Iterative Tikhonov Regularization for Thermal Infrared Tracking." ArXiv (2025). [paper] -
DCFG: Ruoyan Xiong, Yuke Hou, Princess Retor Torboh, Hui He, Huanbin Zhang, Yue Zhang, Yanpin Wang, Huipan Guan, Shang Zhang.
"DCFG: Diverse Cross-Channel Fine-Grained Feature Learning and Progressive Fusion Siamese Tracker for Thermal Infrared Target Tracking." ArXiv (2025). [paper] -
FETA: Shiguo Chen and Linzhi Xu and Xiangyang Li and Chunna Tian.
"Frequency-space enhanced and temporal adaptative RGBT object tracking." Neurocomputing (2025). [paper] -
AINet: Andong Lu, Wanyu Wang, Chenglong Li, Jin Tang, Bin Luo.
"RGBT Tracking via All-layer Multimodal Interactions with Progressive Fusion Mamba." AAAI (2025). [paper] -
CAFormer: Yun Xiao, Jiacong Zhao, Andong Lu, Chenglong Li, Bing Yin, Yin Lin, Cong Liu.
"Cross-modulated Attention Transformer for RGBT Tracking." AAAI (2025). [paper] [code] -
TVTracker: Gao, Fang and Wu, Wenjie and Jin, Yan and Tang, Jingfeng and Zheng, Hanbo and Ma, Shengheng and Yu, Jun.
"TVTracker: Target-Adaptive Text-Guided Visual Fusion for Multi-Modal RGB-T Tracking." IEEE Internet of Things Journal (2025). [paper] -
TAAT: Zhangyong Tang and Tianyang Xu and Xiao-Jun Wu and Josef Kittler.
"Temporal aggregation for real-time RGBT tracking via fast decision-level fusion." Pattern Recognition Letters (2025). [paper] [code] -
FFTR: Liao, Donghai and Shu, Xiu and Li, Zhihui and Liu, Qiao and Yuan, Di and Chang, Xiaojun and He, Zhenyu.
"Fine-grained Feature and Template Reconstruction for TIR Object Tracking." TCSVT (2025). [paper] -
NOT-156: Sun, Chen and Wang, Xinyu and Fan, Shenghua and Dai, Xiaobing and Wan, Yuting and Jiang, Xiao and Zhu, Zengliang and Zhong, Yanfei.
"NOT-156: Night Object Tracking using Low-light and Thermal Infrared: From Multi-modal Common-aperture Camera to Benchmark Datasets." TGRS (2025). [paper] [code] -
SHT: Gao, Zhao and Zhou, Dongming and Cao, Jinde and Liu, Yisong and Shan, Qingqing.
"Enhanced RGBT Tracking Network With Semantic Generation and Historical Context." IEEE TIM (2025). [paper] -
IAMTrack: Huiwei Shi, Xiaodong Mu, Hao He, Chengliang Zhong, Bo Zhang, Peng Zhao.
"IAMTrack: interframe appearance and modality tokens propagation with temporal modeling for RGBT tracking." Applied Intelligence (2025). [paper] -
TPF: "Breaking Shallow Limits: Task-Driven Pixel Fusion for Gap-free RGBT Tracking." ArXiv (2025). [paper] -
TUFNet: Yisong Liu, Zhao Gao, Yang Cao, Dongming Zhou.
"Two-stage Unidirectional Fusion Network for RGBT tracking." KBS (2025). [paper] -
MAT: He Wang, Tianyang Xu, Zhangyong Tang, Xiao-Jun Wu, Josef Kittler.
"Multi-modal adapter for RGB-T tracking." Information Fusion (2025). [paper] [code] -
BTMTrack: Zhongxuan Zhang, Bi Zeng, Xinyu Ni, Yimin Du.
"BTMTrack: Robust RGB-T Tracking via Dual-template Bridging and Temporal-Modal Candidate Elimination." ArXiv (2025). [paper]
-
STMT: Sun, Dengdi and Pan, Yajie and Lu, Andong and Li, Chenglong and Luo, Bin.
"Transformer RGBT Tracking With Spatio-Temporal Multimodal Tokens." TCSVT (2024). [paper] [code] -
TGTrack: Chen, Liang and Zhong, Bineng and Liang, Qihua and Zheng, Yaozong and Mo, Zhiyi and Song, Shuxiang.
"Top-Down Cross-Modal Guidance for Robust RGB-T Tracking." TCSVT (2024). [paper] -
MCTrack: Hu, Xiantao and Zhong, Bineng and Liang, Qihua and Zhang, Shengping and Li, Ning and Li, Xianxian.
"Toward Modalities Correlation for RGB-T Tracking." TCSVT (2024). [paper] -
LSAR: Liu, Jun and Luo, Zhongqiang and Xiong, Xingzhong.
"Online Learning Samples and Adaptive Recovery for Robust RGB-T Tracking." TCSVT (2024). [paper] -
SiamTFA: Zhang, Jianming and Qin, Yu and Fan, Shimeng and Xiao, Zhu and Zhang, Jin.
"SiamTFA: Siamese Triple-Stream Feature Aggregation Network for Efficient RGBT Tracking." TITS (2024). [paper] [code] -
RFFM: Zeng, D., Luo, H., Li, J., Gao, P.
"RGB-T Tracking via Region Filtering-Fusion and Prompt Learning." ICAUS (2024). [paper] -
DKDTrack: Fanghua Hong, Mai Wen, Andong Lu, Qunjing Wang.
"DKDTrack: dual-granularity knowledge distillation for RGBT tracking." ICGIP (2024). [paper] -
Fanghua Hong, Jinhu Wang, Andong Lu, Qunjing Wang.
"Augmentative fusion network for robust RGBT tracking." ICGIP (2024). [paper] -
CAFF: Zihang Feng, et al.
"A content-aware correlation filter with multi-feature fusion for RGB-T tracking." Journal of Systems Engineering and Electronics (2024). [paper] -
DDFNet: Chenglong Li, Tao Wang, Zhaodong Ding, Yun Xiao, Jin Tang.
"Dynamic Disentangled Fusion Network for RGBT Tracking." ArXiv (2024). [paper] -
TMTB: Yimin Du, Bi Zeng, Qingmao Wei, Boquan Zhang & Huiting Hu.
"Transformer-Mamba-Based Trident-Branch RGB-T Tracker." PRICAI (2024). [paper] -
Shuixin Pan and Haopeng Wang and Dilong Li and Yueqiang Zhang and Bahubali Shiragapur and Xiaolin Liu and Qifeng Yu.
"A Lightweight Robust RGB-T Object Tracker Based on Jitter Factor and Associated Kalman Filter." Information Fusion (2024). [paper] -
SiamSCR: Liu, Yisong and Zhou, Dongming and Cao, Jinde and Yan, Kaixiang and Geng, Lizhi.
"Specific and Collaborative Representations Siamese Network for RGBT Tracking." IEEE Sensors Journal (2024). [paper] -
Jianming Zhang, Jing Yang, Zikang Liu, Jin Wang.
"RGBT tracking via frequency-aware feature enhancement and unidirectional mixed attention." Neurocomputing (2024). [paper] -
Jie Yu, Tianyang Xu, Xuefeng Zhu, Xiao-Jun Wu.
"Local Point Matching for Collaborative Image Registration and RGBT Anti-UAV Tracking." PRCV (2024). [paper] [code] -
FHAT: Lei Lei, Xianxian Li.
"RGB-T tracking with frequency hybrid awareness." Image and Vision Computing (2024). [paper] -
ACENet: Zhengzheng Tu, Le Gu, Danying Lin, Zhicheng Zhao.
"ACENet: Adaptive Context Enhancement Network for RGB-T Video Object Detection." PRCV (2024). [paper] [code] -
MMSTC: Zhang, Tianlu and Jiao, Qiang and Zhang, Qiang and Han, Jungong.
"Exploring Multi-Modal Spatial–Temporal Contexts for High-Performance RGB-T Tracking." TIP (2024). [paper] -
CKD: Andong Lu, Jiacong Zhao, Chenglong Li, Yun Xiao, Bin Luo.
"Breaking Modality Gap in RGBT Tracking: Coupled Knowledge Distillation." ACM MM (2024). [paper] [code] -
TBSI: Li, Bo and Peng, Fengguang and Hui, Tianrui and Wei, Xiaoming and Wei, Xiaolin and Zhang, Lijun and Shi, Hang and Liu, Si.
"RGB-T Tracking with Template-Bridged Search Interaction and Target-Preserved Template Updating." TPAMI (2024). [paper] [code] -
CFBT: Zhirong Zeng, Xiaotao Liu, Meng Sun, Hongyu Wang, Jing Liu.
"Cross Fusion RGB-T Tracking with Bi-directional Adapter." ArXiv (2024). [paper] -
MambaVT: Simiao Lai, Chang Liu, Jiawen Zhu, Ben Kang, Yang Liu, Dong Wang, Huchuan Lu.
"MambaVT: Spatio-Temporal Contextual Modeling for robust RGB-T Tracking." ArXiv (2024). [paper] -
SiamSEA: Zihan Zhuang, Mingfeng Yin, Qi Gao, Yong Lin, Xing Hong.
"SiamSEA: Semantic-aware Enhancement and Associative-attention Dual-Modal Siamese Network for Robust RGBT Tracking." IEEE Access (2024). [paper] -
VLCTrack: Wang, Jiahao and Liu, Fang and Jiao, Licheng and Gao, Yingjia and Wang, Hao and Li, Shuo and Li, Lingling and Chen, Puhua and Liu, Xu.
"Visual and Language Collaborative Learning for RGBT Object Tracking." TCSVT (2024). [paper] -
Li, Kai, Lihua Cai, Guangjian He, and Xun Gong.
"MATI: Multimodal Adaptive Tracking Integrator for Robust Visual Object Tracking." Sensors (2024). [paper] -
PDAT: Qiao Li, Kanlun Tan, Qiao Liu, Di Yuan, Xin Li, Yunpeng Liu.
"Progressive Domain Adaptation for Thermal Infrared Object Tracking." ArXiv (2024). [paper] -
ReFocus: Lai, Simiao and Liu, Chang and Wang, Dong and Lu, Huchuan.
"Refocus the Attention for Parameter-Efficient Thermal Infrared Object Tracking." TNNLS (2024). [paper] -
MELT: Zhangyong Tang, Tianyang Xu, Xiao-Jun Wu, and Josef Kittler.
"Multi-Level Fusion for Robust RGBT Tracking via Enhanced Thermal Representation." ACM TOMM (2024). [paper] [code] -
NLMTrack: Miao Yan, Ping Zhang, Haofei Zhang, Ruqian Hao, Juanxiu Liu, Xiaoyang Wang, Lin Liu.
"Enhancing Thermal Infrared Tracking with Natural Language Modeling and Coordinate Sequence Generation." ArXiv (2024). [paper] [code] -
Yang Luo, Xiqing Guo, Hao Li.
"From Two-Stream to One-Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation." ArXiv (2024). [paper] -
Zhao, Qian, Jun Liu, Junjia Wang, and Xingzhong Xiong.
"Real-Time RGBT Target Tracking Based on Attention Mechanism." Electronics (2024). [paper] -
MIGTD: Yujue Cai, Xiubao Sui, Guohua Gu, Qian Chen.
"Multi-modal interaction with token division strategy for RGB-T tracking." PR (2024). [paper] -
GMMT: Zhangyong Tang, Tianyang Xu, Xuefeng Zhu, Xiao-Jun Wu, Josef Kittler.
"Generative-based Fusion Mechanism for Multi-Modal Tracking." AAAI (2024). [paper] [code] -
BAT: Bing Cao, Junliang Guo, Pengfei Zhu, Qinghua Hu.
"Bi-directional Adapter for Multi-modal Tracking." AAAI (2024). [paper] [code] -
ProFormer: Yabin Zhu, Chenglong Li, Xiao Wang, Jin Tang, Zhixiang Huang.
"RGBT Tracking via Progressive Fusion Transformer with Dynamically Guided Learning." TCSVT (2024). [paper] -
QueryTrack: Fan, Huijie and Yu, Zhencheng and Wang, Qiang and Fan, Baojie and Tang, Yandong.
"QueryTrack: Joint-Modality Query Fusion Network for RGBT Tracking." TIP (2024). [paper] -
CAT++: Liu, Lei and Li, Chenglong and Xiao, Yun and Ruan, Rui and Fan, Minghao.
"RGBT Tracking via Challenge-Based Appearance Disentanglement and Interaction." TIP (2024). [paper] -
TATrack: Hongyu Wang, Xiaotao Liu, Yifan Li, Meng Sun, Dian Yuan, Jing Liu.
"Temporal Adaptive RGBT Tracking with Modality Prompt." ArXiv (2024). [paper] -
MArMOT: Chenglong Li, Tianhao Zhu, Lei Liu, Xiaonan Si, Zilin Fan, Sulan Zhai.
"Cross-Modal Object Tracking: Modality-Aware Representations and A Unified Benchmark." ArXiv (2024). [paper] -
AMNet: Zhang, Tianlu and He, Xiaoyi and Jiao, Qiang and Zhang, Qiang and Han, Jungong.
"AMNet: Learning to Align Multi-modality for RGB-T Tracking." TCSVT (2024). [paper] -
AFter: Andong Lu, Wanyu Wang, Chenglong Li, Jin Tang, Bin Luo.
"AFter: Attention-based Fusion Router for RGBT Tracking." ArXiv (2024). [paper] [code] -
CSTNet: Yunfeng Li, Bo Wang, Ye Li, Zhiwen Yu, Liang Wang.
"Transformer-based RGB-T Tracking with Channel and Spatial Feature Fusion." ArXiv (2024). [paper] [code] -
-
TBSI: Hui, Tianrui and Xun, Zizheng and Peng, Fengguang and Huang, Junshi and Wei, Xiaoming and Wei, Xiaolin and Dai, Jiao and Han, Jizhong and Liu, Si.
"Bridging Search Region Interaction with Template for RGB-T Tracking." CVPR (2023). [paper] [code] -
DFNet: Jingchao Peng, Haitao Zhao, Zhengwei Hu.
"Dynamic Fusion Network for RGBT Tracking." TITS (2023). [paper] [code] -
CMD: Zhang, Tianlu and Guo, Hongyuan and Jiao, Qiang and Zhang, Qiang and Han, Jungong.
"Efficient RGB-T Tracking via Cross-Modality Distillation." CVPR (2023). [paper] -
DFAT: Zhangyong Tang, Tianyang Xu, Hui Li, Xiao-Jun Wu, XueFeng Zhu, Josef Kittler.
"Exploring fusion strategies for accurate RGBT visual object tracking." Information Fusion (2023). [paper] [code] -
QAT: Lei Liu, Chenglong Li, Yun Xiao, Jin Tang.
"Quality-Aware RGBT Tracking via Supervised Reliability Learning and Weighted Residual Guidance." ACM MM (2023). [paper] -
GuideFuse: Zhang, Zeyang and Li, Hui and Xu, Tianyang and Wu, Xiao-Jun and Fu, Yu.
"GuideFuse: A Novel Guided Auto-Encoder Fusion Network for Infrared and Visible Images." TIM (2023). [paper] -
MPLT: Yang Luo, Xiqing Guo, Hui Feng, Lei Ao.
"RGB-T Tracking via Multi-Modal Mutual Prompt Learning." ArXiv (2023). [paper] [code]
-
HMFT: Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiang Ruan.
"Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline." CVPR (2022). [paper] [code] -
MFGNet: Xiao Wang, Xiujun Shu, Shiliang Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu.
"MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking." TMM (2022). [paper] [code] -
MBAFNet: Li, Yadong and Lai, Huicheng and Wang, Liejun and Jia, Zhenhong.
"Multibranch Adaptive Fusion Network for RGBT Tracking." IEEE Sensors Journal (2022). [paper] -
AGMINet: Mei, Jiatian and Liu, Yanyu and Wang, Changcheng and Zhou, Dongming and Nie, Rencan and Cao, Jinde.
"Asymmetric Global–Local Mutual Integration Network for RGBT Tracking." TIM (2022). [paper] -
APFNet: Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, Jin Tang.
"Attribute-Based Progressive Fusion Network for RGBT Tracking." AAAI (2022). [paper] [code] -
DMCNet: Lu, Andong and Qian, Cun and Li, Chenglong and Tang, Jin and Wang, Liang.
"Duality-Gated Mutual Condition Network for RGBT Tracking." TNNLS (2022). [paper] -
TFNet: Zhu, Yabin and Li, Chenglong and Tang, Jin and Luo, Bin and Wang, Liang.
"RGBT Tracking by Trident Fusion Network." TCSVT (2022). [paper] -
Mingzheng Feng, Jianbo Su .
"Learning reliable modal weight with transformer for robust RGBT tracking." KBS (2022). [paper]
-
JMMAC: Zhang, Pengyu and Zhao, Jie and Bo, Chunjuan and Wang, Dong and Lu, Huchuan and Yang, Xiaoyun.
"Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking." TIP (2021). [paper] [code] -
ADRNet: Pengyu Zhang, Dong Wang, Huchuan Lu, Xiaoyun Yang.
"Learning Adaptive Attribute-Driven Representation for Real-Time RGB-T Tracking." IJCV (2021). [paper] [code] -
SiamCDA: Zhang, Tianlu and Liu, Xueru and Zhang, Qiang and Han, Jungong.
"SiamCDA: Complementarity-and distractor-aware RGB-T tracking based on Siamese network." TCSVT (2021). [paper] [code] -
Wang, Yong and Wei, Xian and Tang, Xuan and Shen, Hao and Zhang, Huanlong.
"Adaptive Fusion CNN Features for RGBT Object Tracking." TITS (2021). [paper] -
M5L: Zhengzheng Tu, Chun Lin, Chenglong Li, Jin Tang, Bin Luo.
"M5L: Multi-Modal Multi-Margin Metric Learning for RGBT Tracking." TIP (2021). [paper] -
CBPNet: Qin Xu, Yiming Mei, Jinpei Liu, and Chenglong Li.
"Multimodal Cross-Layer Bilinear Pooling for RGBT Tracking." TMM (2021). [paper] -
MANet++: Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, Bin Luo.
"RGBT Tracking via Multi-Adapter Network with Hierarchical Divergence Loss." TIP (2021). [paper] -
CMR: Li, Chenglong and Xiang, Zhiqiang and Tang, Jin and Luo, Bin and Wang, Futian.
"RGBT Tracking via Noise-Robust Cross-Modal Ranking." TNNLS (2021). [paper] -
GCMP: Rui Yang, Xiao Wang, Chenglong Li, Jinmin Hu, Jin Tang.
"RGBT tracking via cross-modality message passing." Neurocomputing (2021). [paper] -
HDINet: Mei, Jiatian and Zhou, Dongming and Cao, Jinde and Nie, Rencan and Guo, Yanbu.
"HDINet: Hierarchical Dual-Sensor Interaction Network for RGBT Tracking." IEEE Sensors Journal (2021). [paper]
-
CMPP: Chaoqun Wang, Chunyan Xu, Zhen Cui, Ling Zhou, Tong Zhang, Xiaoya Zhang, Jian Yang.
"Cross-Modal Pattern-Propagation for RGB-T Tracking."CVPR (2020). [paper] -
CAT: Chenglong Li, Lei Liu, Andong Lu, Qing Ji, Jin Tang.
"Challenge-Aware RGBT Tracking." ECCV (2020). [paper] -
FANet: Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang .
"FANet: Quality-Aware Feature Aggregation Network for Robust RGB-T Tracking." TIV (2020). [paper]
-
mfDiMP: Lichao Zhang, Martin Danelljan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan.
"Multi-Modal Fusion for End-to-End RGB-T Tracking." ICCVW (2019). [paper] [code] -
DAPNet: Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang, Xiao Wang.
"Dense Feature Aggregation and Pruning for RGBT Tracking." ACM MM (2019). [paper] -
DAFNet: Yuan Gao, Chenglong Li, Yabin Zhu, Jin Tang, Tao He, Futian Wang.
"Deep Adaptive Fusion Network for High Performance RGBT Tracking." ICCVW (2019). [paper] [code] -
MANet: Chenglong Li, Andong Lu, Aihua Zheng, Zhengzheng Tu, Jin Tang.
"Multi-Adapter RGBT Tracking." ICCV (2019). [paper] [code]
| Dataset | Pub. & Date | WebSite | Introduction |
|---|---|---|---|
| WebUAV-3M | TPAMI-2023 | WebUAV-3M | 4500 videos, 3.3 million frames, UAV tracking, Vision-language-audio |
| UniMod1K | IJCV-2024 | UniMod1K | 1050 video pairs, 2.5 million frames, Vision-depth-language |
| QuadTrack600 | arXiv-2025 | QuadTrack600 | 600 video pairs, 348.7K frames, RGB-TIR-Language-Event |
| UniBench300 | ACM MM-2025 | UniBench300 | A unified benchmark with 100 RGBT, 100 RGBD, and 100 RGBE sequences, 368.1K frames |
| RGBDT500 | arXiv-2025 | RGBDT500 | A multi-modal tracking dataset containing 500 videos with synchronised frames across RGB, depth, and thermal infrared modalities |
- SMSTracker: Sixian Chan, Zedong Li, Wenhao Li, Shijian Lu, Chunhua Shen, Xiaoqin Zhang.
"SMSTracker: Tri-path Score Mask Sigma Fusion for Multi-Modal Tracking." ICCV (2025). [paper] [code]
- FA3T: Jiahao Wang, Fang Liu, Licheng Jiao, Hao Wang, Shuo Li, Lingling Li, Puhua Chen, Xu Liu, Xinyi Wang.
"FA3T: Feature-Aware Adversarial Attacks for Multi-modal Tracking." ACM MM (2025). [paper]
- UniSOT: Ma, Yinchao and Tang, Yuyang and Yang, Wenfei and Zhang, Tianzhu and Zhou, Xu and Wu, Feng.
"UniSOT: a Unified Framework for Multi-Modality Single Object Tracking." TPAMI (2025). [paper]
- RDTTrack: Xue-Feng Zhu, Tianyang Xu, Yifan Pan, Jinjie Gu, Xi Li, Jiwen Lu, Xiao-Jun Wu, Josef Kittler.
"Collaborating Vision, Depth, and Thermal Signals for Multi-Modal Tracking: Dataset and Algorithm." ArXiv (2025). [paper]
- MFATrack: Ziyu Li, Na You, Tanqing Sun, Mingjia Wang, Xianjun Zhang, Yuping Feng.
"MFATrack: multi-modal fusion tracking network with adapter tunning." Signal, Image and Video Processing (2025). [paper]
- SRTrack: Zhiwen Chen, Jinjian Wu, Zhiyu Zhu, Yifan Zhang, Guangming Shi, Junhui Hou.
"Optimizing Multi-Modal Trackers via Sensitivity-aware Regularized Tuning." ArXiv (2025). [paper] [code]
- UniBench300: Zhangyong Tang, Tianyang Xu, Xuefeng Zhu, Chunyang Cheng, Tao Zhou, Xiaojun Wu, Josef Kittler.
"Serial Over Parallel: Learning Continual Unification for Multi-Modal Visual Object Tracking and Benchmarking." ACM MM (2025). [paper] [code]
- DMTrack: Weihong Li, Shaohua Dong, Haonan Lu, Yanhao Zhang, Heng Fan, Libo Zhang.
"DMTrack: Spatio-Temporal Multimodal Tracking via Dual-Adapter." ArXiv (2025). [paper]
- UM-ODTrack: Yaozong Zheng, Bineng Zhong, Qihua Liang, Shengping Zhang, Guorong Li, Xianxian Li, Rongrong Ji.
"Towards Universal Modal Tracking with Online Dense Temporal Token Learning." TPAMI (2025). [paper] [code]
- MPVT: Xie, Jianyu, Yan Fu, Junlin Zhou, Tianxiang He, Xiaopeng Wang, Yuke Fang, and Duanbing Chen.
"MPVT: An Efficient Multi-Modal Prompt Vision Tracker for Visual Target Tracking." Applied Sciences (2025). [paper]
- XTrack: Yuedong Tan, Zongwei Wu, Yuqian Fu, Zhuyun Zhou, Guolei Sun, Eduard Zamfir, Chao Ma, Danda Pani Paudel, Luc Van Gool, Radu Timofte.
"XTrack: Multimodal Training Boosts RGB-X Video Object Trackers." ICCV (2025). [paper] [code]
- FlexTrack: Yuedong Tan, Jiawei Shao, Eduard Zamfir, Ruanjun Li, Zhaochong An, Chao Ma, Danda Paudel, Luc Van Gool, Radu Timofte, Zongwei Wu.
"What You Have is What You Track: Adaptive and Robust Multimodal Tracking." ICCV (2025). [paper] [code]
- VMDA: Boyue Xu, Ruichao Hou, Tongwei Ren, Gangshan Wu.
"Visual and Memory Dual Adapter for Multi-Modal Object Tracking." ArXiv (2025). [paper] [code]
- CSTrack: X. Feng, D. Zhang, S. Hu, X. Li, M. Wu, J. Zhang, X. Chen, K. Huang.
"CSTrack: Enhancing RGB-X Tracking via Compact Spatiotemporal Features." ICML (2025). [paper] [code]
- Diff-MM: Shiyu Xuan, Zechao Li, Jinhui Tang.
"Diff-MM: Exploring Pre-trained Text-to-Image Generation Model for Unified Multi-modal Object Tracking." ArXiv (2025). [paper]
- M3Track: Tang, Zhangyong and Xu, Tianyang and Wu, Xiao-jun and Kittler, Josef.
"M3Track: Meta-Prompt for Multi-Modal Tracking." SPL (2025). [paper] [code]
- TransCMD: Tianlu Zhang, Qiang Zhang, Kurt Debattista, Jungong Han.
"Cross-Modality Distillation for Multi-modal Tracking." TPAMI (2025). [paper] [code]
- QuadFusion: Andong Lu, Mai Wen, Jinhu Wang, Yuanzhi Guo, Chenglong Li, Jin Tang, Bin Luo.
"Towards General Multimodal Visual Tracking." ArXiv (2025). [paper]
- UASTrack: He Wang, Tianyang Xu, Zhangyong Tang, Xiao-Jun Wu, Josef Kittler.
"UASTrack: A Unified Adaptive Selection Framework with Modality-Customization in Single Object Tracking." ArXiv (2025). [paper] [code]
- LightFC-X: Yunfeng Li, Bo Wang, Ye Li.
"LightFC-X: Lightweight Convolutional Tracker for RGB-X Tracking." ArXiv (2025). [paper] [code]
- APTrack: Xiantao Hu, Bineng Zhong, Qihua Liang, Zhiyi Mo, Liangtao Shi, Ying Tai, Jian Yang.
"Adaptive Perception for Unified Visual Multi-modal Object Tracking." ArXiv (2025). [paper]
- SUTrack: Xin Chen, Ben Kang, Wanting Geng, Jiawen Zhu, Yi Liu, Dong Wang, Huchuan Lu.
"SUTrack: Towards Simple and Unified Single Object Tracking." AAAI (2025). [paper] [code]
- STTrack: Xiantao Hu, Ying Tai, Xu Zhao, Chen Zhao, Zhenyu Zhang, Jun Li, Bineng Zhong, Jian Yang.
"Exploiting Multimodal Spatial-temporal Patterns for Video Object Tracking." AAAI (2025). [paper] [code]
- EMTrack: Liu, Chang and Guan, Ziqi and Lai, Simiao and Liu, Yang and Lu, Huchuan and Wang, Dong.
"EMTrack: Efficient Multimodal Object Tracking." TCSVT (2024). [paper]
- VPLMMT: Simiao Lai, Yuntao Wei, Dong Wang, Huchuan Lu.
"Visual Prompt with Larger Model for Multi-modal Tracking." ICPR (2024). [paper]
- MixRGBX: Meng Sun and Xiaotao Liu and Hongyu Wang and Jing Liu.
"MixRGBX: Universal multi-modal tracking with symmetric mixed attention." Neurocomputing (2024). [paper]
- XTrack: Yuedong Tan, Zongwei Wu, Yuqian Fu, Zhuyun Zhou, Guolei Sun, Chao Ma, Danda Pani Paudel, Luc Van Gool, Radu Timofte.
"Towards a Generalist and Blind RGB-X Tracker." ArXiv (2024). [paper] [code]
- OneTracker: Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue Guo, Kaixun Jiang, Yiting Chen, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang.
"OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning." CVPR (2024). [paper]
- SDSTrack: Xiaojun Hou, Jiazheng Xing, Yijie Qian, Yaowei Guo, Shuo Xin, Junhao Chen, Kai Tang, Mengmeng Wang, Zhengkai Jiang, Liang Liu, Yong Liu.
"SDSTrack: Self-Distillation Symmetric Adapter Learning for Multi-Modal Visual Object Tracking." CVPR (2024). [paper] [code]
- Un-Track: Zongwei Wu, Jilai Zheng, Xiangxuan Ren, Florin-Alexandru Vasluianu, Chao Ma, Danda Pani Paudel, Luc Van Gool, Radu Timofte.
"Single-Model and Any-Modality for Video Object Tracking." CVPR (2024). [paper] [code]
- ELTrack: Alansari, Mohamad and Alnuaimi, Khaled and Alansari, Sara and Werghi, Naoufel and Javed, Sajid.
"ELTrack: Correlating Events and Language for Visual Tracking." ArXiv (2024). [paper] [code]
- KSTrack: He, Yuhang and Ma, Zhiheng and Wei, Xing and Gong, Yihong.
"Knowledge Synergy Learning for Multi-Modal Tracking." TCSVT (2024). [paper]
- SeqTrackv2: Xin Chen, Ben Kang, Jiawen Zhu, Dong Wang, Houwen Peng, Huchuan Lu.
"Unified Sequence-to-Sequence Learning for Single- and Multi-Modal Visual Object Tracking." ArXiv (2024). [paper] [code]
- ViPT: Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, Huchuan Lu.
"Visual Prompt Multi-Modal Tracking." CVPR (2023). [paper] [code]
- ProTrack: Jinyu Yang, Zhe Li, Feng Zheng, Aleš Leonardis, Jingkuan Song.
"Prompting for Multi-Modal Tracking." ACM MM (2022). [paper]
Coming soon.
- MMOT: Tianhao Li, Tingfa Xu, Ying Wang, Haolin Qin, Xu Lin, Jianan Li.
"MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking." ArXiv (2025). [paper] [code]
- MSITrack: Tao Feng, Tingfa Xu, Haolin Qin, Tianhao Li, Shuaihao Han, Xuyang Zou, Zhan Lv, Jianan Li.
"MSITrack: A Challenging Benchmark for Multispectral Single Object Tracking." ArXiv (2025). [paper] [code]
- HyMamba: Long Gao, Yunhe Zhang, Yan Jiang, Weiying Xie, Yunsong Li.
"Hyperspectral Mamba for Hyperspectral Object Tracking." ArXiv (2025). [paper] [code]
- VSS: Pengfei Wei, Liu Qiao, Zhenyu He, Di Yuan.
"A Multi-Stream Visual-Spectral-Spatial Adaptive Hyperspectral Object Tracking." ICMR (2025). [paper]
- HyA-T: Long Gao and Yunhe Zhang and Langkun Chen and Yan Jiang and Gang He and Weiying Xie and Yunsong Li.
"Domain Adapter for Visual Object Tracking based on Hyperspectral Video." Pattern Recognition (2025). [paper]
- HyperTrack: Tan, Yuedong and Sun, Wenfang and Li, Jingyuan and Hou, Shuwei and Li, Xiaobo and Wang, Zhe and Song, Beibei.
"HyperTrack: A Unified Network for Hyperspectral Video Object Tracking." TCSVT (2025). [paper] [code]
- SUIT: Fengchao Xiong, Zhenxing Wu, Sen Jia, Yuntao Qian.
"SUIT: Spatial-Spectral Union-Intersection Interaction Network for Hyperspectral Object Tracking." ArXiv (2025). [paper] [code]
- SpectralTrack: Chen, Yuzeng and Yuan, Qiangqiang and Xie, Hong and Tang, Yuqi and Xiao, Yi and He, Jiang and Guan, Renxiang and Liu, Xinwang and Zhang, Liangpei.
"Hyperspectral Video Tracking With Spectral–Spatial Fusion and Memory Enhancement." TIP (2025). [paper] [code]
- UBSTrack: Islam, Mohammad Aminul and Zhou, Jun and Xing, Wangzhi and Gao, Yongsheng and Paliwal, Kuldip K.
"UBSTrack: Unified Band Selection and Multimodel Ensemble for Hyperspectral Object Tracking." TGRS (2025). [paper] [code]
- HOPL: Zhang, Lu and Yao, Rui and Zhang, Yuhong and Zhou, Yong and Hu, Fuyuan and Zhao, Jiaqi and Shao, Zhiwen.
"Historical Object-Aware Prompt Learning for Universal Hyperspectral Object Tracking." TOMM (2025). [paper] [code]
- Trans-DAT: Wu, Yinan and Jiao, Licheng and Liu, Xu and Liu, Fang and Yang, Shuyuan and Li, Lingling.
"Domain Adaptation-Aware Transformer for Hyperspectral Object Tracking." TCSVT (2024). [paper] [code]
- BihoT: Hanzheng Wang, Wei Li, Xiang-Gen Xia, Qian Du.
"BihoT: A Large-Scale Dataset and Benchmark for Hyperspectral Camouflaged Object Tracking." ArXiv (2024). [paper]
- HOT2020: Fengchao Xiong, Jun Zhou, Yuntao Qian.
"Material Based Object Tracking in Hyperspectral Videos." TIP (2020). [paper] [code]
- VUOT & VTUTrack: Qinghua Song, Xiaolei Wang.
"Efficient Transformer Network for Visible and Ultraviolet Object Tracking." CVM (2025). [paper] [dataset]
- SonarT165: Yunfeng Li, Bo Wang, Jiahao Wan, Xueyi Wu, Ye Li.
"SonarT165: A Large-scale Benchmark and STFTrack Framework for Acoustic Object Tracking." ArXiv (2025). [paper] [code]
- GSOT3D: Yifan Jiao, Yunhao Li, Junhua Ding, Qing Yang, Song Fu, Heng Fan, Libo Zhang.
"GSOT3D: Towards Generic 3D Single Object Tracking in the Wild." ArXiv (2024). [paper] [code]
- SCANet: Yunfeng Li, Bo Wang, Jiuran Sun, Xueyi Wu, Ye Li.
"RGB-Sonar Tracking Benchmark and Spatial Cross-Attention Transformer Tracker." TCSVT (2024). [paper] [code]
- Awesome-MultiModal-Visual-Object-Tracking
- Awesome-Visual-Language-Tracking
- Vision-Language_Tracking_Paper_List
- VisEvent_SOT_Benchmark
- RGBD-tracking-review
- Datasets-and-benchmark-code
- RGBT-Tracking-Results-Datasets-and-Methods
- Multimodal-Tracking-Survey
- Hyperspectral-object-tracking-paperlist
This project is released under the MIT license. Please see the LICENSE file for more information.