mirror of
https://github.com/gaomingqi/Track-Anything.git
synced 2025-12-16 16:37:58 +01:00
update readme
This commit is contained in:
README.md | 25
@@ -1,11 +1,25 @@
 # Track-Anything
 
-**Track-Anything** is an Efficient Development Toolkit for Video Object Tracking and Segmentation, based on [Segment Anything](https://github.com/facebookresearch/segment-anything) and [XMem](https://github.com/hkchengrex/XMem).
+***Track-Anything*** is a flexible and interactive tool for video object tracking and segmentation. Developed upon [Segment Anything](https://github.com/facebookresearch/segment-anything) and [XMem](https://github.com/hkchengrex/XMem), it can track and segment anything specified via user clicks only. During tracking, users can flexibly change the objects they want to track or correct the region of interest if any ambiguities arise. These characteristics make ***Track-Anything*** suitable for:
+
+- Video object tracking and segmentation with shot changes.
+- Data annotation for video object tracking and segmentation.
+- Object-centric downstream video tasks, such as video inpainting and editing.
 
 ## Demo
 
-https://user-images.githubusercontent.com/28050374/232322963-140b44a1-0b65-409a-b3fa-ce9f780aa40e.MP4
+one gif/video
+
+### Video Object Tracking and Segmentation with Shot Changes
+
+one gif/video
+
+### Video Inpainting (with [E2FGVI](https://github.com/MCG-NKU/E2FGVI))
+
+one gif/video
+
+### Video Editing
+
+one gif/video
 
 ## Get Started
 
 #### Linux
@@ -17,10 +31,13 @@ cd Track-Anything
 
 # Install dependencies:
 pip install -r requirements.txt
 
-# Install dependencies if using inpainting:
+# Install dependencies for inpainting:
 pip install -U openmim
 mim install mmcv
 
+# Install dependencies for editing
+pip install madgrad
+
 # Run the Track-Anything gradio demo.
 python app.py --device cuda:0 --sam_model_type vit_h --port 12212
 ```
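The `--port` flag above sets the port the gradio demo binds to. As a small stdlib sketch (not part of the repo; `port_is_free` is a hypothetical helper), you can check that a port is available before launching:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. the port is taken.
        return s.connect_ex((host, port)) != 0

# e.g. verify the demo's default before running app.py --port 12212
print(port_is_free(12212))
```

If the check fails, pick another `--port` value rather than letting the demo crash on bind.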
@@ -145,12 +145,12 @@ if __name__ == '__main__':
 # ----------------------------------------------
 # 1/3: set checkpoint and device
 checkpoint = '/ssd1/gaomingqi/checkpoints/E2FGVI-HQ-CVPR22.pth'
-device = 'cuda:2'
+device = 'cuda:6'
 # 2/3: initialise inpainter
 base_inpainter = BaseInpainter(checkpoint, device)
 # 3/3: inpainting (frames: numpy array, T, H, W, 3; masks: numpy array, T, H, W)
 # ratio: (0, 1], ratio for down sample, default value is 1
-inpainted_frames = base_inpainter.inpaint(frames, masks, ratio=0.5) # numpy array, T, H, W, 3
+inpainted_frames = base_inpainter.inpaint(frames, masks, ratio=1) # numpy array, T, H, W, 3
 # ----------------------------------------------
 # end
 # ----------------------------------------------
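The comments in the snippet above document the array layout `BaseInpainter.inpaint` expects. A minimal sketch of preparing inputs in that layout, using dummy data with made-up sizes (a real call additionally needs the E2FGVI checkpoint and real video frames and masks):

```python
import numpy as np

# Hypothetical clip: 8 RGB frames of 240x432 (values here are placeholders).
T, H, W = 8, 240, 432
frames = np.zeros((T, H, W, 3), dtype=np.uint8)  # video frames, shape (T, H, W, 3)
masks = np.zeros((T, H, W), dtype=np.uint8)      # per-frame object masks, shape (T, H, W)
masks[:, 100:140, 200:260] = 1                   # mark the region to remove in every frame

# ratio in (0, 1] downsamples before inpainting to save memory; ratio=1 keeps
# full resolution. With a real checkpoint you would then run:
#   inpainted_frames = base_inpainter.inpaint(frames, masks, ratio=1)
# and get back a numpy array of shape (T, H, W, 3).
assert frames.shape == (T, H, W, 3) and masks.shape == (T, H, W)
```

Lowering `ratio` trades output resolution for GPU memory, which matters for long or high-resolution clips.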