CoTracker
vision
nikkar committed
Commit 2c85b33 · 1 Parent(s): d066182

Update README.md

Files changed (1)
  1. README.md +12 -8
README.md CHANGED
@@ -20,17 +20,19 @@ CoTracker can track:
 
 ## How to use
 Here is how to use this model in the **offline mode**:
+```pip install imageio[ffmpeg]```, then:
 ```python
 import torch
 # Download the video
 url = 'https://github.com/facebookresearch/co-tracker/blob/main/assets/apple.mp4'
-# pip install imageio[ffmpeg]
+
 import imageio.v3 as iio
 frames = iio.imread(url, plugin="FFMPEG") # plugin="pyav"
-video = torch.tensor(frames).permute(0, 3, 1, 2)[None].float().to(device) # B T C H W
 
-grid_size = 10
 device = 'cuda'
+grid_size = 10
+video = torch.tensor(frames).permute(0, 3, 1, 2)[None].float().to(device) # B T C H W
+
 # Run Offline CoTracker:
 cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2").to(device)
 pred_tracks, pred_visibility = cotracker(video, grid_size=grid_size) # B T N 2, B T N 1
@@ -38,14 +40,16 @@ pred_tracks, pred_visibility = cotracker(video, grid_size=grid_size) # B T N 2,
 and in the **online mode**:
 ```python
 cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2_online").to(device)
+
 # Run Online CoTracker, the same model with a different API:
+# Initialize online processing
+cotracker(video_chunk=video, is_first_step=True, grid_size=grid_size)
+
+# Process the video
 for ind in range(0, video.shape[1] - cotracker.step, cotracker.step):
     pred_tracks, pred_visibility = cotracker(
-        video_chunk=video[:, ind : ind + cotracker.step * 2],
-        is_first_step=(ind == 0),
-        grid_size=grid_size
-    ) # B T N 2, B T N 1
-
+        video_chunk=video[:, ind : ind + cotracker.step * 2]
+    ) # B T N 2, B T N 1
 ```
 Online processing is more memory-efficient and allows for the processing of longer videos or videos in real-time.
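
The memory point in that last sentence comes from the online model only ever looking at the current window of frames. Below is a minimal sketch of that idea; the `cotracker2_online` entry point, `cotracker.step`, and the `video_chunk` / `is_first_step` / `grid_size` arguments are taken from the snippet in the diff, while streaming frames with `imageio.v3.imiter` and keeping a rolling `window_frames` buffer are illustrative assumptions, not part of this README:

```python
import numpy as np
import torch
import imageio.v3 as iio

device = 'cuda'
grid_size = 10
url = 'https://github.com/facebookresearch/co-tracker/blob/main/assets/apple.mp4'

cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2_online").to(device)

window_frames = []      # rolling buffer; only the last 2 * step frames are kept
is_first_step = True    # the first call initializes the tracks, as in the README
pred_tracks = pred_visibility = None

# Stream frames one by one instead of loading the whole clip into memory (assumption)
for i, frame in enumerate(iio.imiter(url, plugin="FFMPEG")):
    window_frames.append(frame)
    if i % cotracker.step == 0 and i != 0:
        # Stack the most recent window into a B T C H W float tensor
        chunk = (
            torch.tensor(np.stack(window_frames[-cotracker.step * 2:]))
            .permute(0, 3, 1, 2)[None]
            .float()
            .to(device)
        )
        pred_tracks, pred_visibility = cotracker(
            video_chunk=chunk,
            is_first_step=is_first_step,
            grid_size=grid_size,
        )  # B T N 2, B T N 1
        is_first_step = False
        # Drop frames the next window no longer needs
        window_frames = window_frames[-cotracker.step * 2:]
```

With a bounded buffer like this, peak memory scales with the window size (2 * `cotracker.step` frames) rather than with the length of the video, which is what makes long clips or real-time streams feasible.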