<div align=center>
<img src="./assets/track-anything-logo.jpg"/>
</div>
<br/>
<div align=center>
<a href="https://arxiv.org/abs/2304.11968">
<img src="https://img.shields.io/badge/%F0%9F%93%96-Arxiv_2304.11968-red.svg?style=flat-square">
</a>
<a href="https://huggingface.co/spaces/VIPLab/Track-Anything?duplicate=true">
<img src="https://img.shields.io/badge/%F0%9F%A4%97-Hugging_Face_Space-informational.svg?style=flat-square">
</a>
<a href="./doc/tutorials.md">
<img src="https://img.shields.io/badge/%F0%9F%97%BA-Tutorials_in_Steps-2bb7b3.svg?style=flat-square">
</a>
<a href="https://zhengfenglab.com/">
<img src="https://img.shields.io/badge/%F0%9F%9A%80-SUSTech_VIP_Lab-ed6c00.svg?style=flat-square">
</a>
</div>

<br>
> **Note:** :fire: If you are interested in **human mesh generation from videos** (beyond video segmentation), please check out **[SAM-Body4D](https://github.com/gaomingqi/sam-body4d)**.

***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. Built upon [Segment Anything](https://github.com/facebookresearch/segment-anything), it can track and segment anything specified via user clicks only. During tracking, users can flexibly change the objects they want to track, or correct the region of interest if any ambiguity arises. These characteristics make ***Track-Anything*** suitable for:
- Video object tracking and segmentation with shot changes.
- Visualized development and data annotation for video object tracking and segmentation.
- Object-centric downstream video tasks, such as video inpainting and editing.
<div align=center>
<img src="./assets/avengers.gif" width="81%"/>
</div>
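
To make the click-to-track workflow above concrete, here is a minimal sketch of its first step using the official `segment-anything` package: one positive click on the first frame becomes a binary mask. This is an illustration only, not Track-Anything's actual code; `propagate_mask` in the usage comment is a hypothetical stand-in for the XMem-based propagation that Track-Anything performs across the remaining frames.

```python
# Minimal sketch: one click -> first-frame mask (assumes the official
# `segment-anything` package and a downloaded SAM checkpoint).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def mask_from_click(frame: np.ndarray, x: int, y: int,
                    checkpoint: str = "sam_vit_h_4b8939.pth") -> np.ndarray:
    """Turn one positive click on `frame` (HxWx3, RGB, uint8) into a binary mask."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(frame)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),   # pixel location of the click
        point_labels=np.array([1]),        # 1 = positive (foreground) click
        multimask_output=True,
    )
    return masks[int(np.argmax(scores))]   # keep the highest-scoring mask

# first_mask = mask_from_click(frames[0], x=320, y=240)
# video_masks = propagate_mask(frames, first_mask)  # hypothetical XMem step
```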
## :rocket: Updates
- 2023/05/02: We uploaded step-by-step tutorials :world_map:. Check [HERE](./doc/tutorials.md) for more details.
- 2023/04/29: We improved inpainting by decoupling GPU memory usage from video length. Track-Anything can now inpaint videos of any length! :smiley_cat: Check [HERE](https://github.com/gaomingqi/Track-Anything/issues/4#issuecomment-1528198165) for our GPU memory requirements.
- 2023/04/25: We are delighted to introduce [Caption-Anything](https://github.com/ttengwang/Caption-Anything) :writing_hand:, an inventive project from our lab that combines the capabilities of Segment Anything, Visual Captioning, and ChatGPT.
- 2023/04/20: We deployed the [DEMO](https://huggingface.co/spaces/VIPLab/Track-Anything?duplicate=true) on Hugging Face :hugs:!
- 2023/04/14: We made Track-Anything public!
## :world_map: Video Tutorials ([Track-Anything Tutorials in Steps](./doc/tutorials.md))
https://user-images.githubusercontent.com/30309970/234902447-a4c59718-fcfe-443a-bd18-2f3f775cfc13.mp4

---
### :joystick: Example - Multiple Object Tracking and Segmentation (with [XMem](https://github.com/hkchengrex/XMem))
https://user-images.githubusercontent.com/39208339/233035206-0a151004-6461-4deb-b782-d1dbfe691493.mp4

---
### :joystick: Example - Video Object Tracking and Segmentation with Shot Changes (with [XMem](https://github.com/hkchengrex/XMem))
https://user-images.githubusercontent.com/30309970/232848349-f5e29e71-2ea4-4529-ac9a-94b9ca1e7055.mp4

---
### :joystick: Example - Video Inpainting (with [E2FGVI](https://github.com/MCG-NKU/E2FGVI))
https://user-images.githubusercontent.com/28050374/232959816-07f2826f-d267-4dda-8ae5-a5132173b8f4.mp4
## :computer: Get Started
#### Linux & Windows
```shell
# Clone the repository:
git clone https://github.com/gaomingqi/Track-Anything.git
cd Track-Anything
# Install dependencies:
pip install -r requirements.txt
# Run the Track-Anything gradio demo.
python app.py --device cuda:0
# python app.py --device cuda:0 --sam_model_type vit_b # for lower memory usage
```
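
Before launching, it can help to confirm that PyTorch actually sees a GPU for the `--device` flag. The check below is plain PyTorch, not part of Track-Anything:

```python
# Sanity check: confirm PyTorch can see a CUDA device before running app.py.
import torch

if torch.cuda.is_available():
    print(f"CUDA available: {torch.cuda.get_device_name(0)} -> try --device cuda:0")
else:
    print("No CUDA device visible to PyTorch; check your driver/PyTorch install.")
```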
## :book: Citation
If you find this work useful for your research or applications, please cite using this BibTeX:
```bibtex
@misc{yang2023track,
title={Track Anything: Segment Anything Meets Videos},
author={Jinyu Yang and Mingqi Gao and Zhe Li and Shang Gao and Fangjing Wang and Feng Zheng},
year={2023},
eprint={2304.11968},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## :clap: Acknowledgements
The project is based on [Segment Anything](https://github.com/facebookresearch/segment-anything), [XMem](https://github.com/hkchengrex/XMem), and [E2FGVI](https://github.com/MCG-NKU/E2FGVI). Thanks to the authors for their efforts.