Spaces: Running on Zero
JianyuanWang committed
Commit 092d5ee · Parent(s): 7d141d7

upload apple example
Files changed:
- .gitattributes +1 -0
- app.py +3 -2
- images_to_videos.py +1 -1
- vggsfm_code/examples/apple/.DS_Store +0 -0
- vggsfm_code/examples/apple/images/frame000001.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000010.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000019.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000028.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000037.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000046.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000055.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000064.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000073.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000082.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000091.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000100.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000109.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000118.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000127.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000136.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000145.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000154.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000163.jpg +0 -0
- vggsfm_code/examples/apple/images/frame000172.jpg +0 -0
- vggsfm_code/examples/videos/apple_video.mp4 +3 -0
.gitattributes CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 *.JPG filter=lfs diff=lfs merge=lfs -text
 *.mp4 filter=lfs diff=lfs merge=lfs -text
+vggsfm_code/examples/ filter=lfs diff=lfs merge=lfs -text
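The attribute patterns above decide which files Git LFS intercepts at commit time. As a rough stdlib-only illustration (real gitattributes matching has extra rules that Python's `fnmatch` does not model, and the helper name below is hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical helper: approximate a .gitattributes glob check against
# the filename component, as with patterns like "*.mp4" above.
# Real gitattributes matching also handles directory-scoped patterns,
# negation, and attribute macros, which fnmatch does not.
def tracked_by_lfs(path, patterns=("*.JPG", "*.mp4", "*tfevents*")):
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) for p in patterns)

print(tracked_by_lfs("vggsfm_code/examples/videos/apple_video.mp4"))  # True
print(tracked_by_lfs("app.py"))  # False
```

This is why the new `apple_video.mp4` below is committed as a small LFS pointer rather than as raw video bytes.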
app.py CHANGED
@@ -203,8 +203,9 @@ with gr.Blocks() as demo:
 <li>upload the images (.jpg, .png, etc.), or </li>
 <li>upload a video (.mp4, .mov, etc.) </li>
 </ul>
-<p>
+<p>If both images and videos are uploaded, the demo will only reconstruct the uploaded images. By default, we extract <strong> 1 image frame per second from the input video </strong>. To prevent crashes on the Hugging Face space, we currently limit reconstruction to the first 20 image frames. </p>
 <p>SfM methods are designed for <strong> rigid/static reconstruction </strong>. When dealing with dynamic/moving inputs, these methods may still work by focusing on the rigid parts of the scene. However, to ensure high-quality results, it is better to minimize the presence of moving objects in the input data. </p>
+<p>The reconstruction should typically take up to 90 seconds. If it takes longer, the input data is likely not well-conditioned. </p>
 <p>If you meet any problem, feel free to create an issue in our <a href="https://github.com/facebookresearch/vggsfm" target="_blank">GitHub Repo</a> ⭐</p>
 <p>(Please note that running reconstruction on Hugging Face space is slower than on a local machine.) </p>
 </div>
@@ -216,7 +217,7 @@ with gr.Blocks() as demo:
 input_images = gr.File(file_count="multiple", label="Input Images", interactive=True)
 num_query_images = gr.Slider(minimum=1, maximum=10, step=1, value=5, label="Number of query images (key frames)",
 info="More query images usually lead to better reconstruction at lower speeds. If the viewpoint differences between your images are minimal, you can set this value to 1. ")
-num_query_points = gr.Slider(minimum=512, maximum=
+num_query_points = gr.Slider(minimum=512, maximum=3072, step=1, value=1024, label="Number of query points",
 info="More query points usually lead to denser reconstruction at lower speeds.")
 
 with gr.Column(scale=3):
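The new help text in app.py states the video policy: one extracted frame per second, capped at the first 20 frames. A stdlib-only sketch of that sampling rule (the function name and exact rounding are assumptions for illustration, not code from the demo):

```python
def sample_frame_indices(total_frames: int, video_fps: float,
                         sample_fps: float = 1.0, max_frames: int = 20) -> list[int]:
    """Pick frame indices at roughly `sample_fps` frames per second,
    keeping at most `max_frames` of them (the cap described in the help text)."""
    step = max(1, round(video_fps / sample_fps))
    return list(range(0, total_frames, step))[:max_frames]

# A 10-second clip at 30 fps yields 10 sampled frames (every 30th frame);
# anything longer than 20 seconds is truncated to 20 frames.
print(sample_frame_indices(300, 30))       # [0, 30, 60, ..., 270]
print(len(sample_frame_indices(3000, 30)))  # 20
```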
images_to_videos.py CHANGED
@@ -2,7 +2,7 @@ import cv2
 import os
 
 # Parameters
-name = "
+name = "apple"
 folder_path = f'vggsfm_code/examples/{name}/images' # Update with the path to your images
 video_path = f'vggsfm_code/examples/videos/{name}_video.mp4'
 fps = 1 # frames per second
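Only the parameter block of images_to_videos.py appears in the diff. A plausible completion, assuming the script writes the sorted frames out with OpenCV's `VideoWriter` (the loop below is a sketch, not the file's actual body):

```python
import cv2
import os

# Parameters (from the diff above)
name = "apple"
folder_path = f'vggsfm_code/examples/{name}/images'  # Update with the path to your images
video_path = f'vggsfm_code/examples/videos/{name}_video.mp4'
fps = 1  # frames per second

# Collect frames in name order (frame000001.jpg, frame000010.jpg, ...);
# zero-padded names keep lexicographic and temporal order aligned.
images = sorted(f for f in os.listdir(folder_path)
                if f.lower().endswith(('.jpg', '.png')))

# The output video must match the frame dimensions of the input images.
first = cv2.imread(os.path.join(folder_path, images[0]))
height, width = first.shape[:2]

# 'mp4v' is a widely available codec for .mp4 output.
writer = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc(*'mp4v'),
                         fps, (width, height))
for fname in images:
    writer.write(cv2.imread(os.path.join(folder_path, fname)))
writer.release()
```

Running this over the 20 `frame000001.jpg` … `frame000172.jpg` images would produce a 20-second clip at 1 fps, matching the `apple_video.mp4` added below.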
vggsfm_code/examples/apple/.DS_Store ADDED
Binary file (6.15 kB)
vggsfm_code/examples/apple/images/frame000001.jpg ADDED
vggsfm_code/examples/apple/images/frame000010.jpg ADDED
vggsfm_code/examples/apple/images/frame000019.jpg ADDED
vggsfm_code/examples/apple/images/frame000028.jpg ADDED
vggsfm_code/examples/apple/images/frame000037.jpg ADDED
vggsfm_code/examples/apple/images/frame000046.jpg ADDED
vggsfm_code/examples/apple/images/frame000055.jpg ADDED
vggsfm_code/examples/apple/images/frame000064.jpg ADDED
vggsfm_code/examples/apple/images/frame000073.jpg ADDED
vggsfm_code/examples/apple/images/frame000082.jpg ADDED
vggsfm_code/examples/apple/images/frame000091.jpg ADDED
vggsfm_code/examples/apple/images/frame000100.jpg ADDED
vggsfm_code/examples/apple/images/frame000109.jpg ADDED
vggsfm_code/examples/apple/images/frame000118.jpg ADDED
vggsfm_code/examples/apple/images/frame000127.jpg ADDED
vggsfm_code/examples/apple/images/frame000136.jpg ADDED
vggsfm_code/examples/apple/images/frame000145.jpg ADDED
vggsfm_code/examples/apple/images/frame000154.jpg ADDED
vggsfm_code/examples/apple/images/frame000163.jpg ADDED
vggsfm_code/examples/apple/images/frame000172.jpg ADDED
vggsfm_code/examples/videos/apple_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53a7f05247a574e0f77926345bb68a3b3c9044adcd6c6432c25f7e2ccc38b38b
+size 1846808
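Those three added lines are a Git LFS pointer file: the repository stores this small text stub while the actual ~1.8 MB video lives in LFS storage, keyed by the SHA-256 of its content. A stub in this format can be reproduced from any byte string with the standard library (the function name here is illustrative):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build the text stub Git LFS commits in place of a large file:
    a spec-version line, the SHA-256 of the content, and its size in bytes."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

print(lfs_pointer(b"hello"))
```

For the committed video, `oid` is the SHA-256 of the full mp4 content and `size 1846808` is its byte count.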