stevenbucaille committed on
Commit
d1495a7
1 Parent(s): 8a65862

Update README.md

Files changed (1)
  1. README.md +141 -1
README.md CHANGED
@@ -4,4 +4,144 @@ tags:
  - image-matching
  inference: false
  pipeline_tag: keypoint-detection
- ---
+ ---
+
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+ Licensed under the MIT License; you may not use this file except in compliance with
+ the License.
+
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations under the License.
+
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
+ rendered properly in your Markdown viewer.
+
+
+ -->
+
+ # SuperPoint
+
+ ## Overview
+
+ The SuperPoint model was proposed
+ in [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629) by Daniel
+ DeTone, Tomasz Malisiewicz and Andrew Rabinovich.
+
+ This model is the result of self-supervised training of a fully-convolutional network for interest point detection and
+ description. The model is able to detect interest points that are repeatable under homographic transformations and to
+ provide a descriptor for each point. On its own the model is of limited use, but it can serve as a feature
+ extractor for other tasks such as homography estimation and image matching.
+
+ The abstract from the paper is the following:
+
+ *This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a
+ large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our
+ fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and
+ associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography
+ approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g.,
+ synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able
+ to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other
+ traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches
+ when compared to LIFT, SIFT and ORB.*
+
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/superpoint_architecture.png"
+ alt="drawing" width="500"/>
+
+ <small> SuperPoint overview. Taken from the <a href="https://arxiv.org/abs/1712.07629v4">original paper.</a> </small>
+
+ ## Usage tips
+
+ Here is a quick example of using the model to detect interest points in an image:
+
+ ```python
+ from transformers import AutoImageProcessor, SuperPointForKeypointDetection
+ import torch
+ from PIL import Image
+ import requests
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
+ model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")
+
+ inputs = processor(image, return_tensors="pt")
+ outputs = model(**inputs)
+ ```
+
+ The outputs contain the list of keypoint coordinates, along with their respective scores and descriptors (256-dimensional vectors).
+
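+ For a quick sanity check, here is a minimal sketch of inspecting the raw output tensors (the attribute names follow
+ the model's output class; the shapes in the comments are indicative):
+
+ ```python
+ # Indicative sketch: the raw outputs are batched tensors (padded when several images are passed).
+ print(outputs.keypoints.shape)    # (batch_size, num_keypoints, 2) keypoint coordinates
+ print(outputs.scores.shape)       # (batch_size, num_keypoints) keypoint detection scores
+ print(outputs.descriptors.shape)  # (batch_size, num_keypoints, 256) keypoint descriptors
+ print(outputs.mask.shape)         # (batch_size, num_keypoints) 1 for valid entries, 0 for padding
+ ```
+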
+ You can also feed multiple images to the model. Since SuperPoint outputs a dynamic number of keypoints per image,
+ you will need to use the mask attribute to retrieve the information belonging to each image; the post-processing step
+ shown below does this for you:
+
+ ```python
+ from transformers import AutoImageProcessor, SuperPointForKeypointDetection
+ import torch
+ from PIL import Image
+ import requests
+
+ url_image_1 = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image_1 = Image.open(requests.get(url_image_1, stream=True).raw)
+ url_image_2 = "http://images.cocodataset.org/test-stuff2017/000000000568.jpg"
+ image_2 = Image.open(requests.get(url_image_2, stream=True).raw)
+
+ images = [image_1, image_2]
+
+ processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
+ model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")
+
+ inputs = processor(images, return_tensors="pt")
+ outputs = model(**inputs)
+ image_sizes = [(image.size[1], image.size[0]) for image in images]
+ outputs = processor.post_process_keypoint_detection(outputs, image_sizes)
+
+ for output in outputs:
+     keypoints = output["keypoints"]
+     scores = output["scores"]
+     descriptors = output["descriptors"]
+ ```
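+
+ If you would rather work with the raw outputs than post-process them, a minimal sketch of using the mask attribute to
+ keep only the valid entries of each image could look like this (note that, depending on the version, the raw keypoints
+ may be normalized coordinates; `post_process_keypoint_detection` takes care of rescaling them to pixel positions):
+
+ ```python
+ # Minimal sketch: filter the padded raw outputs with the mask attribute.
+ raw_outputs = model(**inputs)
+ for i in range(len(images)):
+     valid = raw_outputs.mask[i].bool()
+     image_keypoints = raw_outputs.keypoints[i][valid]      # (num_valid_keypoints, 2)
+     image_scores = raw_outputs.scores[i][valid]            # (num_valid_keypoints,)
+     image_descriptors = raw_outputs.descriptors[i][valid]  # (num_valid_keypoints, 256)
+ ```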
+
+ You can then plot the keypoints over the corresponding image to visualize the result. Here, `keypoints` and `scores`
+ hold the post-processed values of the last image in the batch, `image_2`:
+ ```python
+ import matplotlib.pyplot as plt
+
+ plt.axis("off")
+ plt.imshow(image_2)
+ plt.scatter(
+     keypoints[:, 0],
+     keypoints[:, 1],
+     c=scores * 100,
+     s=scores * 50,
+     alpha=0.8
+ )
+ plt.savefig("output_image.png")
+ ```
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/ZtFmphEhx8tcbEQqOolyE.png)
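+
+ As mentioned in the overview, SuperPoint is mostly used as a feature extractor for downstream tasks such as image
+ matching. As a purely illustrative sketch (plain `torch`, mutual nearest neighbours on descriptor distance; this is not
+ an official matcher, and learned matchers such as SuperGlue usually perform better), you could match the keypoints of
+ the two post-processed images like this:
+
+ ```python
+ # Illustrative sketch: mutual nearest-neighbour matching of SuperPoint descriptors.
+ descriptors_1 = outputs[0]["descriptors"]  # (num_keypoints_1, 256)
+ descriptors_2 = outputs[1]["descriptors"]  # (num_keypoints_2, 256)
+
+ distances = torch.cdist(descriptors_1, descriptors_2)  # pairwise descriptor distances
+ nearest_1to2 = distances.argmin(dim=1)  # best match in image 2 for each keypoint of image 1
+ nearest_2to1 = distances.argmin(dim=0)  # best match in image 1 for each keypoint of image 2
+
+ # Keep only mutual nearest neighbours; each row is (index in image 1, index in image 2).
+ indices_1 = torch.arange(descriptors_1.shape[0])
+ mutual = nearest_2to1[nearest_1to2] == indices_1
+ matches = torch.stack([indices_1[mutual], nearest_1to2[mutual]], dim=1)
+ ```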
+
+ This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
+ The original code can be found [here](https://github.com/magicleap/SuperPointPretrainedNetwork).
+
+ ## Resources
+
+ A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SuperPoint. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
+
+ - A notebook showcasing inference and visualization with SuperPoint can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SuperPoint/Inference_with_SuperPoint_to_detect_interest_points_in_an_image.ipynb). 🌎
+
+ ## SuperPointConfig
+
+ [[autodoc]] SuperPointConfig
+
+ ## SuperPointImageProcessor
+
+ [[autodoc]] SuperPointImageProcessor
+
+ - preprocess
+ - post_process_keypoint_detection
+
+ ## SuperPointForKeypointDetection
+
+ [[autodoc]] SuperPointForKeypointDetection
+
+ - forward