Dividend9853 committed
Commit
db1c682
1 Parent(s): 7ac1f40

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .gitattributes +22 -0
  2. depth-anything/depth_anything-2024.1.22.0-py2.py3-none-any.whl +0 -0
  3. depth-anything/depth_anything-2024.1.22.0-py2.py3-none-any.whl.metadata +171 -0
  4. depth-anything/depth_anything-2024.6.15.0-py2.py3-none-any.whl +0 -0
  5. depth-anything/depth_anything-2024.6.15.0-py2.py3-none-any.whl.metadata +288 -0
  6. depth-anything/index.html +24 -0
  7. dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl +3 -0
  8. dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl.metadata +35 -0
  9. dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl +3 -0
  10. dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl.metadata +35 -0
  11. dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl +3 -0
  12. dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl.metadata +35 -0
  13. dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl +3 -0
  14. dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl.metadata +35 -0
  15. dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl +3 -0
  16. dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl.metadata +32 -0
  17. dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl +3 -0
  18. dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl.metadata +26 -0
  19. dlib/index.html +40 -0
  20. handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl +3 -0
  21. handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl.metadata +16 -0
  22. handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl +3 -0
  23. handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl.metadata +16 -0
  24. handrefinerportable/index.html +24 -0
  25. index.html +49 -0
  26. insightface/index.html +32 -0
  27. insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl +0 -0
  28. insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl.metadata +176 -0
  29. insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl +0 -0
  30. insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl.metadata +176 -0
  31. insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl +0 -0
  32. insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl.metadata +176 -0
  33. insightface/insightface-0.7.3-cp39-cp39-win_amd64.whl +0 -0
  34. insightface/insightface-0.7.3-cp39-cp39-win_amd64.whl.metadata +176 -0
  35. intel-extension-for-pytorch/index.html +36 -0
  36. intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl +3 -0
  37. intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl.metadata +108 -0
  38. intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl +3 -0
  39. intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl.metadata +108 -0
  40. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl +3 -0
  41. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl.metadata +118 -0
  42. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl +3 -0
  43. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl.metadata +118 -0
  44. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl +3 -0
  45. intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl.metadata +132 -0
  46. torch/index.html +36 -0
  47. torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl +3 -0
  48. torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl.metadata +483 -0
  49. torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl +3 -0
  50. torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl.metadata +483 -0
.gitattributes CHANGED
@@ -33,3 +33,25 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
37
+ dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
38
+ dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
39
+ dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
40
+ dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
41
+ dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
42
+ handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
43
+ handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl filter=lfs diff=lfs merge=lfs -text
44
+ intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
45
+ intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
46
+ intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
47
+ intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl filter=lfs diff=lfs merge=lfs -text
48
+ intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
49
+ torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
50
+ torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl filter=lfs diff=lfs merge=lfs -text
51
+ torch/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
52
+ torch/torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
53
+ torch/torch-2.1.0a0+git7bcf7da-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
54
+ torchaudio/torchaudio-2.1.0+6ea1133-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
55
+ torchaudio/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
56
+ torchaudio/torchaudio-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
57
+ xformers/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
depth-anything/depth_anything-2024.1.22.0-py2.py3-none-any.whl ADDED
Binary file (13.1 kB).
 
depth-anything/depth_anything-2024.1.22.0-py2.py3-none-any.whl.metadata ADDED
@@ -0,0 +1,171 @@
1
+ Metadata-Version: 2.1
2
+ Name: depth_anything
3
+ Version: 2024.1.22.0
4
+ Project-URL: Documentation, https://github.com/LiheYoung/Depth-Anything
5
+ Project-URL: Issues, https://github.com/LiheYoung/Depth-Anything/issues
6
+ Project-URL: Source, https://github.com/LiheYoung/Depth-Anything
7
+ License-File: LICENSE
8
+ Requires-Dist: opencv-python
9
+ Requires-Dist: torch
10
+ Requires-Dist: torchvision
11
+ Description-Content-Type: text/markdown
12
+
13
+ <div align="center">
14
+ <h2>Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data</h2>
15
+
16
+ [**Lihe Yang**](https://liheyoung.github.io/)<sup>1</sup> · [**Bingyi Kang**](https://scholar.google.com/citations?user=NmHgX-wAAAAJ)<sup>2+</sup> · [**Zilong Huang**](http://speedinghzl.github.io/)<sup>2</sup> · [**Xiaogang Xu**](https://xiaogang00.github.io/)<sup>3,4</sup> · [**Jiashi Feng**](https://sites.google.com/site/jshfeng/)<sup>2</sup> · [**Hengshuang Zhao**](https://hszhao.github.io/)<sup>1+</sup>
17
+
18
+ <sup>1</sup>The University of Hong Kong · <sup>2</sup>TikTok · <sup>3</sup>Zhejiang Lab · <sup>4</sup>Zhejiang University
19
+
20
+ <sup>+</sup>corresponding authors
21
+
22
+ <a href="https://arxiv.org/abs/2401.10891"><img src='https://img.shields.io/badge/arXiv-Depth Anything-red' alt='Paper PDF'></a>
23
+ <a href='https://depth-anything.github.io'><img src='https://img.shields.io/badge/Project_Page-Depth Anything-green' alt='Project Page'></a>
24
+ <a href='https://huggingface.co/spaces/LiheYoung/Depth-Anything'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
25
+ </div>
26
+
27
+ This work presents Depth Anything, a highly practical solution for robust monocular depth estimation by training on a combination of 1.5M labeled images and **62M+ unlabeled images**.
28
+
29
+ ![teaser](assets/teaser.png)
30
+
31
+ ## News
32
+
33
+ * **2024-01-22:** Paper, project page, code, models, and demo are released.
34
+
35
+
36
+ ## Features of Depth Anything
37
+
38
+ - **Relative depth estimation**:
39
+
40
+ Our foundation models listed [here](https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints) can provide relative depth estimation for any given image robustly. Please refer [here](#running) for details.
41
+
42
+ - **Metric depth estimation**
43
+
44
+ We fine-tune our Depth Anything model with metric depth information from NYUv2 or KITTI. It offers strong capabilities of both in-domain and zero-shot metric depth estimation. Please refer [here](./metric_depth) for details.
45
+
46
+
47
+ - **Better depth-conditioned ControlNet**
48
+
49
+ We re-train **a better depth-conditioned ControlNet** based on Depth Anything. It offers more precise synthesis than the previous MiDaS-based ControlNet. Please refer [here](./controlnet/) for details.
50
+
51
+ - **Downstream high-level scene understanding**
52
+
53
+ The Depth Anything encoder can be fine-tuned to downstream high-level perception tasks, *e.g.*, semantic segmentation, 86.2 mIoU on Cityscapes and 59.4 mIoU on ADE20K. Please refer [here](./semseg/) for details.
54
+
55
+
56
+ ## Performance
57
+
58
+ Here we compare our Depth Anything with the previously best MiDaS v3.1 BEiT<sub>L-512</sub> model.
59
+
60
+ Please note that the latest MiDaS is also trained on KITTI and NYUv2, while ours is not.
61
+
62
+ | Method | Params | KITTI || NYUv2 || Sintel || DDAD || ETH3D || DIODE ||
63
+ |-|-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
64
+ | | | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ |
65
+ | MiDaS | 345.0M | 0.127 | 0.850 | 0.048 | *0.980* | 0.587 | 0.699 | 0.251 | 0.766 | 0.139 | 0.867 | 0.075 | 0.942 |
66
+ | **Ours-S** | 24.8M | 0.080 | 0.936 | 0.053 | 0.972 | 0.464 | 0.739 | 0.247 | 0.768 | 0.127 | **0.885** | 0.076 | 0.939 |
67
+ | **Ours-B** | 97.5M | *0.080* | *0.939* | *0.046* | 0.979 | **0.432** | *0.756* | *0.232* | *0.786* | **0.126** | *0.884* | *0.069* | *0.946* |
68
+ | **Ours-L** | 335.3M | **0.076** | **0.947** | **0.043** | **0.981** | *0.458* | **0.760** | **0.230** | **0.789** | *0.127* | 0.882 | **0.066** | **0.952** |
69
+
70
+ We highlight the **best** and *second best* results in **bold** and *italic* respectively (**better results**: AbsRel $\downarrow$ , $\delta_1 \uparrow$).
71
+
72
+ ## Pre-trained models
73
+
74
+ We provide three models of varying scales for robust relative depth estimation:
75
+
76
+ - Depth-Anything-ViT-Small (24.8M)
77
+
78
+ - Depth-Anything-ViT-Base (97.5M)
79
+
80
+ - Depth-Anything-ViT-Large (335.3M)
81
+
82
+ Download our pre-trained models [here](https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints), and put them under the ``checkpoints`` directory.
83
+
84
+ ## Usage
85
+
86
+ ### Installation
87
+
88
+ The setup is very simple. Just make sure ``torch``, ``torchvision``, and ``cv2`` are available in your environment.
89
+
90
+ ```bash
91
+ git clone https://github.com/LiheYoung/Depth-Anything
92
+ cd Depth-Anything
93
+ pip install -r requirements.txt
94
+ ```
95
+
96
+ ### Running
97
+
98
+ ```bash
99
+ python run.py --encoder <vits | vitb | vitl> --load-from <pretrained-model> --img-path <img-directory | single-img | txt-file> --outdir <outdir> --localhub
100
+ ```
101
+ For the ``img-path``, you can either 1) point it to an image directory storing all the images of interest, 2) point it to a single image, or 3) point it to a text file storing all image paths.
102
+
103
+ For example:
104
+ ```bash
105
+ python run.py --encoder vitl --load-from checkpoints/depth_anything_vitl14.pth --img-path demo_images --outdir depth_visualization --localhub
106
+ ```
107
+
108
+
109
+ ### Gradio demo
110
+
111
+ To use our gradio demo locally:
112
+
113
+ ```bash
114
+ python app.py
115
+ ```
116
+
117
+ You can also try our [online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything).
118
+
119
+ ### Import Depth Anything to your project
120
+
121
+ If you want to use Depth Anything in your own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
122
+
123
+ <details>
124
+ <summary>Code snippet (note the difference between our data pre-processing and that of MiDaS)</summary>
125
+
126
+ ```python
127
+ from depth_anything.dpt import DPT_DINOv2
128
+ from depth_anything.util.transform import Resize, NormalizeImage, PrepareForNet
129
+
130
+ import cv2
131
+ import torch
+ from torchvision.transforms import Compose
132
+
133
+ depth_anything = DPT_DINOv2(encoder='vitl', features=256, out_channels=[256, 512, 1024, 1024], localhub=True)
134
+ depth_anything.load_state_dict(torch.load('checkpoints/depth_anything_vitl14.pth'))
135
+
136
+ transform = Compose([
137
+ Resize(
138
+ width=518,
139
+ height=518,
140
+ resize_target=False,
141
+ keep_aspect_ratio=True,
142
+ ensure_multiple_of=14,
143
+ resize_method='lower_bound',
144
+ image_interpolation_method=cv2.INTER_CUBIC,
145
+ ),
146
+ NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
147
+ PrepareForNet(),
148
+ ])
149
+
150
+ image = cv2.cvtColor(cv2.imread('your image path'), cv2.COLOR_BGR2RGB) / 255.0
151
+ image = transform({'image': image})['image']
152
+ image = torch.from_numpy(image).unsqueeze(0)
153
+
154
+ # depth shape: 1xHxW
155
+ depth = depth_anything(image)
156
+ ```
157
+ </details>
158
+
159
+
160
+ ## Citation
161
+
162
+ If you find this project useful, please consider citing:
163
+
164
+ ```bibtex
165
+ @article{depthanything,
166
+ title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
167
+ author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
168
+ journal={arXiv:2401.10891},
169
+ year={2024}
170
+ }
171
+ ```
depth-anything/depth_anything-2024.6.15.0-py2.py3-none-any.whl ADDED
Binary file (15.7 kB).
 
depth-anything/depth_anything-2024.6.15.0-py2.py3-none-any.whl.metadata ADDED
@@ -0,0 +1,288 @@
1
+ Metadata-Version: 2.3
2
+ Name: depth_anything
3
+ Version: 2024.6.15.0
4
+ Project-URL: Documentation, https://github.com/LiheYoung/Depth-Anything
5
+ Project-URL: Issues, https://github.com/LiheYoung/Depth-Anything/issues
6
+ Project-URL: Source, https://github.com/LiheYoung/Depth-Anything
7
+ License-File: LICENSE
8
+ Requires-Dist: opencv-python
9
+ Requires-Dist: torch
10
+ Requires-Dist: torchvision
11
+ Description-Content-Type: text/markdown
12
+
13
+ <div align="center">
14
+ <h2>Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data</h2>
15
+
16
+ [**Lihe Yang**](https://liheyoung.github.io/)<sup>1</sup> · [**Bingyi Kang**](https://scholar.google.com/citations?user=NmHgX-wAAAAJ)<sup>2&dagger;</sup> · [**Zilong Huang**](http://speedinghzl.github.io/)<sup>2</sup> · [**Xiaogang Xu**](https://xiaogang00.github.io/)<sup>3,4</sup> · [**Jiashi Feng**](https://sites.google.com/site/jshfeng/)<sup>2</sup> · [**Hengshuang Zhao**](https://hszhao.github.io/)<sup>1*</sup>
17
+
18
+ <sup>1</sup>HKU&emsp;&emsp;&emsp;&emsp;<sup>2</sup>TikTok&emsp;&emsp;&emsp;&emsp;<sup>3</sup>CUHK&emsp;&emsp;&emsp;&emsp;<sup>4</sup>ZJU
19
+
20
+ &dagger;project lead&emsp;*corresponding author
21
+
22
+ **CVPR 2024**
23
+
24
+ <a href="https://arxiv.org/abs/2401.10891"><img src='https://img.shields.io/badge/arXiv-Depth Anything-red' alt='Paper PDF'></a>
25
+ <a href='https://depth-anything.github.io'><img src='https://img.shields.io/badge/Project_Page-Depth Anything-green' alt='Project Page'></a>
26
+ <a href='https://huggingface.co/spaces/LiheYoung/Depth-Anything'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
27
+ <a href='https://huggingface.co/papers/2401.10891'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-yellow'></a>
28
+ </div>
29
+
30
+ This work presents Depth Anything, a highly practical solution for robust monocular depth estimation by training on a combination of 1.5M labeled images and **62M+ unlabeled images**.
31
+
32
+ ![teaser](assets/teaser.png)
33
+
34
+ <div align="center">
35
+ <a href="https://huggingface.co/spaces/depth-anything/Depth-Anything-V2/blob/main/README_Github.md"><b>Try our latest Depth Anything V2 models!</b></a><br>
36
+ (Due to the issue with our V2 Github repositories, we temporarily upload the content to Huggingface space)
37
+ </div>
38
+
39
+ ## News
40
+
41
+ * **2024-06-14:** [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) is released.
42
+ * **2024-02-27:** Depth Anything is accepted by CVPR 2024.
43
+ * **2024-02-05:** [Depth Anything Gallery](./gallery.md) is released. Thanks to all the users!
44
+ * **2024-02-02:** Depth Anything serves as the default depth processor for [InstantID](https://github.com/InstantID/InstantID) and [InvokeAI](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.6.1).
45
+ * **2024-01-25:** Support [video depth visualization](./run_video.py). An [online demo for video](https://huggingface.co/spaces/JohanDL/Depth-Anything-Video) is also available.
46
+ * **2024-01-23:** The new ControlNet based on Depth Anything is integrated into [ControlNet WebUI](https://github.com/Mikubill/sd-webui-controlnet) and [ComfyUI's ControlNet](https://github.com/Fannovel16/comfyui_controlnet_aux).
47
+ * **2024-01-23:** Depth Anything [ONNX](https://github.com/fabio-sim/Depth-Anything-ONNX) and [TensorRT](https://github.com/spacewalk01/depth-anything-tensorrt) versions are supported.
48
+ * **2024-01-22:** Paper, project page, code, models, and demo ([HuggingFace](https://huggingface.co/spaces/LiheYoung/Depth-Anything), [OpenXLab](https://openxlab.org.cn/apps/detail/yyfan/depth_anything)) are released.
49
+
50
+
51
+ ## Features of Depth Anything
52
+
53
+ ***If you need other features, please first check the [existing community support](#community-support).***
54
+
55
+ - **Relative depth estimation**:
56
+
57
+ Our foundation models listed [here](https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints) can provide relative depth estimation for any given image robustly. Please refer [here](#running) for details.
58
+
59
+ - **Metric depth estimation**
60
+
61
+ We fine-tune our Depth Anything model with metric depth information from NYUv2 or KITTI. It offers strong capabilities of both in-domain and zero-shot metric depth estimation. Please refer [here](./metric_depth) for details.
62
+
63
+
64
+ - **Better depth-conditioned ControlNet**
65
+
66
+ We re-train **a better depth-conditioned ControlNet** based on Depth Anything. It offers more precise synthesis than the previous MiDaS-based ControlNet. Please refer [here](./controlnet/) for details. You can also use our new ControlNet based on Depth Anything in [ControlNet WebUI](https://github.com/Mikubill/sd-webui-controlnet) or [ComfyUI's ControlNet](https://github.com/Fannovel16/comfyui_controlnet_aux).
67
+
68
+ - **Downstream high-level scene understanding**
69
+
70
+ The Depth Anything encoder can be fine-tuned to downstream high-level perception tasks, *e.g.*, semantic segmentation, 86.2 mIoU on Cityscapes and 59.4 mIoU on ADE20K. Please refer [here](./semseg/) for details.
71
+
72
+
73
+ ## Performance
74
+
75
+ Here we compare our Depth Anything with the previously best MiDaS v3.1 BEiT<sub>L-512</sub> model.
76
+
77
+ Please note that the latest MiDaS is also trained on KITTI and NYUv2, while ours is not.
78
+
79
+ | Method | Params | KITTI || NYUv2 || Sintel || DDAD || ETH3D || DIODE ||
80
+ |-|-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
81
+ | | | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ | AbsRel | $\delta_1$ |
82
+ | MiDaS | 345.0M | 0.127 | 0.850 | 0.048 | *0.980* | 0.587 | 0.699 | 0.251 | 0.766 | 0.139 | 0.867 | 0.075 | 0.942 |
83
+ | **Ours-S** | 24.8M | 0.080 | 0.936 | 0.053 | 0.972 | 0.464 | 0.739 | 0.247 | 0.768 | 0.127 | **0.885** | 0.076 | 0.939 |
84
+ | **Ours-B** | 97.5M | *0.080* | *0.939* | *0.046* | 0.979 | **0.432** | *0.756* | *0.232* | *0.786* | **0.126** | *0.884* | *0.069* | *0.946* |
85
+ | **Ours-L** | 335.3M | **0.076** | **0.947** | **0.043** | **0.981** | *0.458* | **0.760** | **0.230** | **0.789** | *0.127* | 0.882 | **0.066** | **0.952** |
86
+
87
+ We highlight the **best** and *second best* results in **bold** and *italic* respectively (**better results**: AbsRel $\downarrow$ , $\delta_1 \uparrow$).
88
+
89
+ ## Pre-trained models
90
+
91
+ We provide three models of varying scales for robust relative depth estimation:
92
+
93
+ | Model | Params | Inference Time on V100 (ms) | A100 | RTX4090 ([TensorRT](https://github.com/spacewalk01/depth-anything-tensorrt)) |
94
+ |:-|-:|:-:|:-:|:-:|
95
+ | Depth-Anything-Small | 24.8M | 12 | 8 | 3 |
96
+ | Depth-Anything-Base | 97.5M | 13 | 9 | 6 |
97
+ | Depth-Anything-Large | 335.3M | 20 | 13 | 12 |
98
+
99
+ Note that the V100 and A100 inference time (*without TensorRT*) is computed by excluding the pre-processing and post-processing stages, whereas the last column RTX4090 (*with TensorRT*) is computed by including these two stages (please refer to [Depth-Anything-TensorRT](https://github.com/spacewalk01/depth-anything-tensorrt)).
100
+
101
+ You can easily load our pre-trained models by:
102
+ ```python
103
+ from depth_anything.dpt import DepthAnything
104
+
105
+ encoder = 'vits' # can also be 'vitb' or 'vitl'
106
+ depth_anything = DepthAnything.from_pretrained('LiheYoung/depth_anything_{:}14'.format(encoder))
107
+ ```
108
+
109
+ Depth Anything is also supported in [``transformers``](https://github.com/huggingface/transformers). You can use it for depth prediction within [3 lines of code](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) (credit to [@niels](https://huggingface.co/nielsr)).
110
+
111
+ ### *No network connection, cannot load these models?*
112
+
113
+ <details>
114
+ <summary>Click here for solutions</summary>
115
+
116
+ - First, manually download the three checkpoints: [depth-anything-large](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitl14.pth), [depth-anything-base](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vitb14.pth), and [depth-anything-small](https://huggingface.co/spaces/LiheYoung/Depth-Anything/blob/main/checkpoints/depth_anything_vits14.pth).
117
+
118
+ - Second, upload the folder containing the checkpoints to your remote server.
119
+
120
+ - Lastly, load the model locally:
121
+ ```python
122
+ import torch
+ from depth_anything.dpt import DepthAnything
123
+
124
+ model_configs = {
125
+ 'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]},
126
+ 'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
127
+ 'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]}
128
+ }
129
+
130
+ encoder = 'vitl' # or 'vitb', 'vits'
131
+ depth_anything = DepthAnything(model_configs[encoder])
132
+ depth_anything.load_state_dict(torch.load(f'./checkpoints/depth_anything_{encoder}14.pth'))
133
+ ```
134
+ Note that when loading the model locally like this, you do not need to install the ``huggingface_hub`` package. In that case, feel free to delete this [line](https://github.com/LiheYoung/Depth-Anything/blob/e7ef4b4b7a0afd8a05ce9564f04c1e5b68268516/depth_anything/dpt.py#L5) and the ``PyTorchModelHubMixin`` in this [line](https://github.com/LiheYoung/Depth-Anything/blob/e7ef4b4b7a0afd8a05ce9564f04c1e5b68268516/depth_anything/dpt.py#L169).
135
+ </details>
136
+
137
+
138
+ ## Usage
139
+
140
+ ### Installation
141
+
142
+ ```bash
143
+ git clone https://github.com/LiheYoung/Depth-Anything
144
+ cd Depth-Anything
145
+ pip install -r requirements.txt
146
+ ```
147
+
148
+ ### Running
149
+
150
+ ```bash
151
+ python run.py --encoder <vits | vitb | vitl> --img-path <img-directory | single-img | txt-file> --outdir <outdir> [--pred-only] [--grayscale]
152
+ ```
153
+ Arguments:
154
+ - ``--img-path``: you can either 1) point it to an image directory storing all the images of interest, 2) point it to a single image, or 3) point it to a text file storing all image paths.
155
+ - ``--pred-only`` is set to save the predicted depth map only. Without it, by default, we visualize both the image and its depth map side by side.
156
+ - ``--grayscale`` is set to save the grayscale depth map. Without it, by default, we apply a color palette to the depth map.
157
+
158
+ For example:
159
+ ```bash
160
+ python run.py --encoder vitl --img-path assets/examples --outdir depth_vis
161
+ ```
162
+
163
+ **If you want to use Depth Anything on videos:**
164
+ ```bash
165
+ python run_video.py --encoder vitl --video-path assets/examples_video --outdir video_depth_vis
166
+ ```
167
+
168
+ ### Gradio demo <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a>
169
+
170
+ To use our gradio demo locally:
171
+
172
+ ```bash
173
+ python app.py
174
+ ```
175
+
176
+ You can also try our [online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything).
177
+
178
+ ### Import Depth Anything to your project
179
+
180
+ If you want to use Depth Anything in your own project, you can simply follow [``run.py``](run.py) to load our models and define data pre-processing.
181
+
182
+ <details>
183
+ <summary>Code snippet (note the difference between our data pre-processing and that of MiDaS)</summary>
184
+
185
+ ```python
186
+ from depth_anything.dpt import DepthAnything
187
+ from depth_anything.util.transform import Resize, NormalizeImage, PrepareForNet
188
+
189
+ import cv2
190
+ import torch
191
+ from torchvision.transforms import Compose
192
+
193
+ encoder = 'vits' # can also be 'vitb' or 'vitl'
194
+ depth_anything = DepthAnything.from_pretrained('LiheYoung/depth_anything_{:}14'.format(encoder)).eval()
195
+
196
+ transform = Compose([
197
+ Resize(
198
+ width=518,
199
+ height=518,
200
+ resize_target=False,
201
+ keep_aspect_ratio=True,
202
+ ensure_multiple_of=14,
203
+ resize_method='lower_bound',
204
+ image_interpolation_method=cv2.INTER_CUBIC,
205
+ ),
206
+ NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
207
+ PrepareForNet(),
208
+ ])
209
+
210
+ image = cv2.cvtColor(cv2.imread('your image path'), cv2.COLOR_BGR2RGB) / 255.0
211
+ image = transform({'image': image})['image']
212
+ image = torch.from_numpy(image).unsqueeze(0)
213
+
214
+ # depth shape: 1xHxW
215
+ depth = depth_anything(image)
216
+ ```
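+
+ If you then want to save the prediction as an image, you can resize and normalize it roughly the way ``run.py`` does (a minimal sketch, not part of the original snippet; ``orig_h`` and ``orig_w`` are placeholders for your input image's height and width):
+
+ ```python
+ import torch.nn.functional as F
+
+ # Resize the 1xHxW prediction back to the input resolution, then map it to 0-255 for saving.
+ depth = F.interpolate(depth[None], (orig_h, orig_w), mode='bilinear', align_corners=False)[0, 0]
+ depth = (depth - depth.min()) / (depth.max() - depth.min()) * 255.0
+ cv2.imwrite('depth_vis.png', depth.detach().cpu().numpy().astype('uint8'))
+ ```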
217
+ </details>
218
+
219
+ ### Do not want to define image pre-processing or download model definition files?
220
+
221
+ Easily use Depth Anything through [``transformers``](https://github.com/huggingface/transformers) within 3 lines of code! Please refer to [these instructions](https://huggingface.co/docs/transformers/main/model_doc/depth_anything) (credit to [@niels](https://huggingface.co/nielsr)).
222
+
223
+ **Note:** If you encounter ``KeyError: 'depth_anything'``, please install the latest [``transformers``](https://github.com/huggingface/transformers) from source:
224
+ ```bash
225
+ pip install git+https://github.com/huggingface/transformers.git
226
+ ```
227
+ <details>
228
+ <summary>Click here for a brief demo:</summary>
229
+
230
+ ```python
231
+ from transformers import pipeline
232
+ from PIL import Image
233
+
234
+ image = Image.open('Your-image-path')
235
+ pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
236
+ depth = pipe(image)["depth"]
237
+ ```
238
+ </details>
239
+
240
+ ## Community Support
241
+
242
+ **We sincerely appreciate all the extensions built on our Depth Anything from the community. Thank you a lot!**
243
+
244
+ Here we list the extensions we have found:
245
+ - Depth Anything TensorRT:
246
+ - https://github.com/spacewalk01/depth-anything-tensorrt
247
+ - https://github.com/thinvy/DepthAnythingTensorrtDeploy
248
+ - https://github.com/daniel89710/trt-depth-anything
249
+ - Depth Anything ONNX: https://github.com/fabio-sim/Depth-Anything-ONNX
250
+ - Depth Anything in Transformers.js (3D visualization): https://huggingface.co/spaces/Xenova/depth-anything-web
251
+ - Depth Anything for video (online demo): https://huggingface.co/spaces/JohanDL/Depth-Anything-Video
252
+ - Depth Anything in ControlNet WebUI: https://github.com/Mikubill/sd-webui-controlnet
253
+ - Depth Anything in ComfyUI's ControlNet: https://github.com/Fannovel16/comfyui_controlnet_aux
254
+ - Depth Anything in X-AnyLabeling: https://github.com/CVHub520/X-AnyLabeling
255
+ - Depth Anything in OpenXLab: https://openxlab.org.cn/apps/detail/yyfan/depth_anything
256
+ - Depth Anything in OpenVINO: https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/280-depth-anything
257
+ - Depth Anything ROS:
258
+ - https://github.com/scepter914/DepthAnything-ROS
259
+ - https://github.com/polatztrk/depth_anything_ros
260
+ - Depth Anything Android:
261
+ - https://github.com/FeiGeChuanShu/ncnn-android-depth_anything
262
+ - https://github.com/shubham0204/Depth-Anything-Android
263
+ - Depth Anything in TouchDesigner: https://github.com/olegchomp/TDDepthAnything
264
+ - LearnOpenCV research article on Depth Anything: https://learnopencv.com/depth-anything
265
+ - Learn more about the DPT architecture we used: https://github.com/heyoeyo/muggled_dpt
266
+
267
+
268
+ If you have an amazing project that supports or improves (*e.g.*, speeds up) Depth Anything, please feel free to open an issue. We will add it here.
269
+
270
+
271
+ ## Acknowledgement
272
+
273
+ We would like to express our deepest gratitude to [AK(@_akhaliq)](https://twitter.com/_akhaliq) and the awesome HuggingFace team ([@niels](https://huggingface.co/nielsr), [@hysts](https://huggingface.co/hysts), and [@yuvraj](https://huggingface.co/ysharma)) for helping improve the online demo and build the HF models.
274
+
275
+ Besides, we thank the [MagicEdit](https://magic-edit.github.io/) team for providing some video examples for video depth estimation, and [Tiancheng Shen](https://scholar.google.com/citations?user=iRY1YVoAAAAJ) for evaluating the depth maps with MagicEdit.
276
+
277
+ ## Citation
278
+
279
+ If you find this project useful, please consider citing:
280
+
281
+ ```bibtex
282
+ @inproceedings{depthanything,
283
+ title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
284
+ author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
285
+ booktitle={CVPR},
286
+ year={2024}
287
+ }
288
+ ```
depth-anything/index.html ADDED
@@ -0,0 +1,24 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for depth-anything
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for depth-anything
14
+ </h1>
15
+ <a href="/depth-anything/depth_anything-2024.6.15.0-py2.py3-none-any.whl#sha256=4993e6e385c2bd919bd0ea2a0c786d79310ef5bb56d7a7ad5022a1e77ec0be00" data-dist-info-metadata="sha256=e50a7155ae2fd8dc8efff94149df45344d980aaeafb493b1f670a062505d59db">
16
+ depth_anything-2024.6.15.0-py2.py3-none-any.whl
17
+ </a>
18
+ <br />
19
+ <a href="/depth-anything/depth_anything-2024.1.22.0-py2.py3-none-any.whl#sha256=26c1d38b8c3c306b4a2197d725a4b989ff65f7ebcf4fb5a96a1b6db7fbd56780" data-dist-info-metadata="sha256=98297d5a7c1e6feec7230e577431486a45660a665dd17a90d3699850dc3a5bae">
20
+ depth_anything-2024.1.22.0-py2.py3-none-any.whl
21
+ </a>
22
+ <br />
23
+ </body>
24
+ </html>
dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f2181b6724669fb0147c9ebc326d72b94de41bced1812eff5df4b133b5d0b575
3
+ size 2960298
dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,35 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.22.99
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Platform: UNKNOWN
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Science/Research
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Operating System :: MacOS :: MacOS X
15
+ Classifier: Operating System :: POSIX
16
+ Classifier: Operating System :: POSIX :: Linux
17
+ Classifier: Operating System :: Microsoft
18
+ Classifier: Operating System :: Microsoft :: Windows
19
+ Classifier: Programming Language :: C++
20
+ Classifier: Programming Language :: Python
21
+ Classifier: Programming Language :: Python :: 2
22
+ Classifier: Programming Language :: Python :: 2.6
23
+ Classifier: Programming Language :: Python :: 2.7
24
+ Classifier: Programming Language :: Python :: 3
25
+ Classifier: Programming Language :: Python :: 3.4
26
+ Classifier: Programming Language :: Python :: 3.5
27
+ Classifier: Programming Language :: Python :: 3.6
28
+ Classifier: Topic :: Scientific/Engineering
29
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
30
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
31
+ Classifier: Topic :: Software Development
32
+ License-File: LICENSE.txt
33
+
34
+ See http://dlib.net for documentation.
35
+
dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf28f39a48d812870193e3bf2c44cacc92a7d2b94a17a7a1eb8d0ef3f3d02988
3
+ size 2930358
dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl.metadata ADDED
@@ -0,0 +1,35 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.22.99
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Platform: UNKNOWN
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Science/Research
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Operating System :: MacOS :: MacOS X
15
+ Classifier: Operating System :: POSIX
16
+ Classifier: Operating System :: POSIX :: Linux
17
+ Classifier: Operating System :: Microsoft
18
+ Classifier: Operating System :: Microsoft :: Windows
19
+ Classifier: Programming Language :: C++
20
+ Classifier: Programming Language :: Python
21
+ Classifier: Programming Language :: Python :: 2
22
+ Classifier: Programming Language :: Python :: 2.6
23
+ Classifier: Programming Language :: Python :: 2.7
24
+ Classifier: Programming Language :: Python :: 3
25
+ Classifier: Programming Language :: Python :: 3.4
26
+ Classifier: Programming Language :: Python :: 3.5
27
+ Classifier: Programming Language :: Python :: 3.6
28
+ Classifier: Topic :: Scientific/Engineering
29
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
30
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
31
+ Classifier: Topic :: Software Development
32
+ License-File: LICENSE.txt
33
+
34
+ See http://dlib.net for documentation.
35
+
dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b02321413b8765159ae552967ab9781a981324b2a389924bdb8c4ccab227f160
3
+ size 2959754
dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl.metadata ADDED
@@ -0,0 +1,35 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.22.99
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Platform: UNKNOWN
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Science/Research
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Operating System :: MacOS :: MacOS X
15
+ Classifier: Operating System :: POSIX
16
+ Classifier: Operating System :: POSIX :: Linux
17
+ Classifier: Operating System :: Microsoft
18
+ Classifier: Operating System :: Microsoft :: Windows
19
+ Classifier: Programming Language :: C++
20
+ Classifier: Programming Language :: Python
21
+ Classifier: Programming Language :: Python :: 2
22
+ Classifier: Programming Language :: Python :: 2.6
23
+ Classifier: Programming Language :: Python :: 2.7
24
+ Classifier: Programming Language :: Python :: 3
25
+ Classifier: Programming Language :: Python :: 3.4
26
+ Classifier: Programming Language :: Python :: 3.5
27
+ Classifier: Programming Language :: Python :: 3.6
28
+ Classifier: Topic :: Scientific/Engineering
29
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
30
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
31
+ Classifier: Topic :: Software Development
32
+
33
+ See http://dlib.net for documentation.
34
+
35
+
dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0fd828405c77af2df1ff6a09964a8ba7f14838c538d9017046804f353e9b4bc2
3
+ size 2960600
dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl.metadata ADDED
@@ -0,0 +1,35 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.22.99
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Platform: UNKNOWN
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Science/Research
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Operating System :: MacOS :: MacOS X
15
+ Classifier: Operating System :: POSIX
16
+ Classifier: Operating System :: POSIX :: Linux
17
+ Classifier: Operating System :: Microsoft
18
+ Classifier: Operating System :: Microsoft :: Windows
19
+ Classifier: Programming Language :: C++
20
+ Classifier: Programming Language :: Python
21
+ Classifier: Programming Language :: Python :: 2
22
+ Classifier: Programming Language :: Python :: 2.6
23
+ Classifier: Programming Language :: Python :: 2.7
24
+ Classifier: Programming Language :: Python :: 3
25
+ Classifier: Programming Language :: Python :: 3.4
26
+ Classifier: Programming Language :: Python :: 3.5
27
+ Classifier: Programming Language :: Python :: 3.6
28
+ Classifier: Topic :: Scientific/Engineering
29
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
30
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
31
+ Classifier: Topic :: Software Development
32
+
33
+ See http://dlib.net for documentation.
34
+
35
+
dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f1a5ee167975d7952b28e0ce4495f1d9a77644761cf5720fb66d7c6188ae496
3
+ size 2825619
dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl.metadata ADDED
@@ -0,0 +1,32 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.24.1
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Classifier: Development Status :: 5 - Production/Stable
11
+ Classifier: Intended Audience :: Science/Research
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Operating System :: MacOS :: MacOS X
14
+ Classifier: Operating System :: POSIX
15
+ Classifier: Operating System :: POSIX :: Linux
16
+ Classifier: Operating System :: Microsoft
17
+ Classifier: Operating System :: Microsoft :: Windows
18
+ Classifier: Programming Language :: C++
19
+ Classifier: Programming Language :: Python
20
+ Classifier: Programming Language :: Python :: 2
21
+ Classifier: Programming Language :: Python :: 2.6
22
+ Classifier: Programming Language :: Python :: 2.7
23
+ Classifier: Programming Language :: Python :: 3
24
+ Classifier: Programming Language :: Python :: 3.4
25
+ Classifier: Programming Language :: Python :: 3.5
26
+ Classifier: Programming Language :: Python :: 3.6
27
+ Classifier: Topic :: Scientific/Engineering
28
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
29
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
30
+ Classifier: Topic :: Software Development
31
+
32
+ See http://dlib.net for documentation.
dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20c62e606ca4c9961305f7be3d03990380d3e6c17f8d27798996e97a73271862
3
+ size 2869640
dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl.metadata ADDED
@@ -0,0 +1,26 @@
1
+ Metadata-Version: 2.1
2
+ Name: dlib
3
+ Version: 19.24.99
4
+ Summary: A toolkit for making real world machine learning and data analysis applications
5
+ Home-page: https://github.com/davisking/dlib
6
+ Author: Davis King
7
+ Author-email: [email protected]
8
+ License: Boost Software License
9
+ Keywords: dlib,Computer Vision,Machine Learning
10
+ Classifier: Development Status :: 5 - Production/Stable
11
+ Classifier: Intended Audience :: Science/Research
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Operating System :: MacOS :: MacOS X
14
+ Classifier: Operating System :: POSIX
15
+ Classifier: Operating System :: POSIX :: Linux
16
+ Classifier: Operating System :: Microsoft
17
+ Classifier: Operating System :: Microsoft :: Windows
18
+ Classifier: Programming Language :: C++
19
+ Classifier: Programming Language :: Python
20
+ Classifier: Topic :: Scientific/Engineering
21
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
22
+ Classifier: Topic :: Scientific/Engineering :: Image Recognition
23
+ Classifier: Topic :: Software Development
24
+ License-File: LICENSE.txt
25
+
26
+ See http://dlib.net for documentation.
dlib/index.html ADDED
@@ -0,0 +1,40 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for dlib
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for dlib
14
+ </h1>
15
+ <a href="/dlib/dlib-19.24.99-cp312-cp312-win_amd64.whl#sha256=20c62e606ca4c9961305f7be3d03990380d3e6c17f8d27798996e97a73271862" data-dist-info-metadata="sha256=f6f749500c666242781eeba745d11d947fc5b885c39854daad38a399892722e5">
16
+ dlib-19.24.99-cp312-cp312-win_amd64.whl
17
+ </a>
18
+ <br />
19
+ <a href="/dlib/dlib-19.24.1-cp311-cp311-win_amd64.whl#sha256=6f1a5ee167975d7952b28e0ce4495f1d9a77644761cf5720fb66d7c6188ae496" data-dist-info-metadata="sha256=504febcc2e270b24bc233ae0444690ef90a7476e4edb21c628654915e16f3f15">
20
+ dlib-19.24.1-cp311-cp311-win_amd64.whl
21
+ </a>
22
+ <br />
23
+ <a href="/dlib/dlib-19.22.99-cp310-cp310-win_amd64.whl#sha256=f2181b6724669fb0147c9ebc326d72b94de41bced1812eff5df4b133b5d0b575" data-dist-info-metadata="sha256=dfd94fa9eda993909c449cca88a11faf025b505a344d1e6d44c1dd4d069dbd21">
24
+ dlib-19.22.99-cp310-cp310-win_amd64.whl
25
+ </a>
26
+ <br />
27
+ <a href="/dlib/dlib-19.22.99-cp39-cp39-win_amd64.whl#sha256=0fd828405c77af2df1ff6a09964a8ba7f14838c538d9017046804f353e9b4bc2" data-dist-info-metadata="sha256=bb56385e2c7f24fc7e08720b940b027d8c461d6dc260c37a2b8a4738c11774a9">
28
+ dlib-19.22.99-cp39-cp39-win_amd64.whl
29
+ </a>
30
+ <br />
31
+ <a href="/dlib/dlib-19.22.99-cp38-cp38-win_amd64.whl#sha256=b02321413b8765159ae552967ab9781a981324b2a389924bdb8c4ccab227f160" data-dist-info-metadata="sha256=bb56385e2c7f24fc7e08720b940b027d8c461d6dc260c37a2b8a4738c11774a9">
32
+ dlib-19.22.99-cp38-cp38-win_amd64.whl
33
+ </a>
34
+ <br />
35
+ <a href="/dlib/dlib-19.22.99-cp37-cp37m-win_amd64.whl#sha256=bf28f39a48d812870193e3bf2c44cacc92a7d2b94a17a7a1eb8d0ef3f3d02988" data-dist-info-metadata="sha256=dfd94fa9eda993909c449cca88a11faf025b505a344d1e6d44c1dd4d069dbd21">
36
+ dlib-19.22.99-cp37-cp37m-win_amd64.whl
37
+ </a>
38
+ <br />
39
+ </body>
40
+ </html>
handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9425f4d59d149727f5b2467a8e0a7369caa1183cc5dbd855845d99462650cf4
3
+ size 13084072
handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl.metadata ADDED
@@ -0,0 +1,16 @@
1
+ Metadata-Version: 2.1
2
+ Name: handrefinerportable
3
+ Version: 2024.1.18.0
4
+ Project-URL: Documentation, https://github.com/huchenlei/HandRefinerPortable
5
+ Project-URL: Issues, https://github.com/huchenlei/HandRefinerPortable/issues
6
+ Project-URL: Source, https://github.com/huchenlei/HandRefinerPortable
7
+ Requires-Dist: mediapipe
8
+ Requires-Dist: rtree
9
+ Requires-Dist: trimesh[easy]
10
+ Description-Content-Type: text/markdown
11
+
12
+ # HandRefinerPortable
13
+
14
+ This is a convenience package used by
15
+ [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
16
+ to package the dependencies and model used by the hand refiner preprocessor.
handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e6c702905919f4c49bcb2db7b20d334e8458a7555cd57630600584ec38ca6a9
3
+ size 13084081
handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl.metadata ADDED
@@ -0,0 +1,16 @@
1
+ Metadata-Version: 2.1
2
+ Name: handrefinerportable
3
+ Version: 2024.2.12.0
4
+ Project-URL: Documentation, https://github.com/huchenlei/HandRefinerPortable
5
+ Project-URL: Issues, https://github.com/huchenlei/HandRefinerPortable/issues
6
+ Project-URL: Source, https://github.com/huchenlei/HandRefinerPortable
7
+ Requires-Dist: mediapipe
8
+ Requires-Dist: rtree
9
+ Requires-Dist: trimesh[easy]
10
+ Description-Content-Type: text/markdown
11
+
12
+ # HandRefinerPortable
13
+
14
+ This is a convenience package used by
15
+ [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
16
+ to package the dependencies and model used by the hand refiner preprocessor.
handrefinerportable/index.html ADDED
@@ -0,0 +1,24 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for handrefinerportable
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for handrefinerportable
14
+ </h1>
15
+ <a href="/handrefinerportable/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl#sha256=1e6c702905919f4c49bcb2db7b20d334e8458a7555cd57630600584ec38ca6a9" data-dist-info-metadata="sha256=82a4d40d40c9adfc7ea39e910c1283e06e5a7ed0d96ba7c114c0b3279613795a">
16
+ handrefinerportable-2024.2.12.0-py2.py3-none-any.whl
17
+ </a>
18
+ <br />
19
+ <a href="/handrefinerportable/handrefinerportable-2024.1.18.0-py2.py3-none-any.whl#sha256=d9425f4d59d149727f5b2467a8e0a7369caa1183cc5dbd855845d99462650cf4" data-dist-info-metadata="sha256=aca6bd05baf6c17071d386f3cd94382e40222df764d9567b3b430509ea49ed90">
20
+ handrefinerportable-2024.1.18.0-py2.py3-none-any.whl
21
+ </a>
22
+ <br />
23
+ </body>
24
+ </html>
index.html ADDED
@@ -0,0 +1,49 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Simple Package Repository
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <a href="/depth-anything/">
13
+ depth_anything
14
+ </a>
15
+ <br />
16
+ <a href="/dlib/">
17
+ dlib
18
+ </a>
19
+ <br />
20
+ <a href="/handrefinerportable/">
21
+ handrefinerportable
22
+ </a>
23
+ <br />
24
+ <a href="/insightface/">
25
+ insightface
26
+ </a>
27
+ <br />
28
+ <a href="/intel-extension-for-pytorch/">
29
+ intel-extension-for-pytorch
30
+ </a>
31
+ <br />
32
+ <a href="/torch/">
33
+ torch
34
+ </a>
35
+ <br />
36
+ <a href="/torchaudio/">
37
+ torchaudio
38
+ </a>
39
+ <br />
40
+ <a href="/torchvision/">
41
+ torchvision
42
+ </a>
43
+ <br />
44
+ <a href="/xformers/">
45
+ xformers
46
+ </a>
47
+ <br />
48
+ </body>
49
+ </html>
insightface/index.html ADDED
@@ -0,0 +1,32 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for insightface
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for insightface
14
+ </h1>
15
+ <a href="/insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl#sha256=4e58a504433ba5a500d48328689e7d6c69873165653ded7553ce804beb8723db" data-dist-info-metadata="sha256=396cc8fa66cb633f8a8ad7a879e8d31741ec3a9230aa7d188ac4ef3a9d0bbcdd">
16
+ insightface-0.7.3-cp312-cp312-win_amd64.whl
17
+ </a>
18
+ <br />
19
+ <a href="/insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl#sha256=ea9b96de0f3cada1c031e3c566d2dfcb2ff45b0b7cca88e27f583ed9ef386561" data-dist-info-metadata="sha256=396cc8fa66cb633f8a8ad7a879e8d31741ec3a9230aa7d188ac4ef3a9d0bbcdd">
20
+ insightface-0.7.3-cp311-cp311-win_amd64.whl
21
+ </a>
22
+ <br />
23
+ <a href="/insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl#sha256=47aa0571b2aadd8545d4bc7615dfbc374c10180c283b7ac65058fcb41ed4df86" data-dist-info-metadata="sha256=396cc8fa66cb633f8a8ad7a879e8d31741ec3a9230aa7d188ac4ef3a9d0bbcdd">
24
+ insightface-0.7.3-cp310-cp310-win_amd64.whl
25
+ </a>
26
+ <br />
27
+ <a href="/insightface/insightface-0.7.3-cp39-cp39-win_amd64.whl#sha256=dd688a1a83a0977bff59d7d21a6fb07ccba59bcc189f27a3ee62510562eef3ef" data-dist-info-metadata="sha256=396cc8fa66cb633f8a8ad7a879e8d31741ec3a9230aa7d188ac4ef3a9d0bbcdd">
28
+ insightface-0.7.3-cp39-cp39-win_amd64.whl
29
+ </a>
30
+ <br />
31
+ </body>
32
+ </html>
insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl ADDED
Binary file (842 kB).
 
insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,176 @@
1
+ Metadata-Version: 2.1
2
+ Name: insightface
3
+ Version: 0.7.3
4
+ Summary: InsightFace Python Library
5
+ Home-page: https://github.com/deepinsight/insightface
6
+ Author: InsightFace Contributors
7
+ Author-email: [email protected]
8
+ License: MIT
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy
11
+ Requires-Dist: onnx
12
+ Requires-Dist: tqdm
13
+ Requires-Dist: requests
14
+ Requires-Dist: matplotlib
15
+ Requires-Dist: Pillow
16
+ Requires-Dist: scipy
17
+ Requires-Dist: scikit-learn
18
+ Requires-Dist: scikit-image
19
+ Requires-Dist: easydict
20
+ Requires-Dist: cython
21
+ Requires-Dist: albumentations
22
+ Requires-Dist: prettytable
23
+
24
+ # InsightFace Python Library
25
+
26
+ ## License
27
+
28
+ The code of the InsightFace Python Library is released under the MIT License. There is no limitation on either academic or commercial usage.
29
+
30
+ **The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
31
+
32
+ ## Install
33
+
34
+ ### Install Inference Backend
35
+
36
+ For ``insightface<=0.1.5``, we use MXNet as inference backend.
37
+
38
+ Starting from insightface>=0.2, we use onnxruntime as inference backend.
39
+
40
+ You have to install ``onnxruntime-gpu`` manually to enable GPU inference, or install ``onnxruntime`` for CPU-only inference.
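+
+ As a quick check of which backend you actually have (a minimal sketch; it only assumes one of the two ``onnxruntime`` packages is installed), you can list the available execution providers before creating an app:
+
+ ```
+ import onnxruntime
+
+ # 'CUDAExecutionProvider' appears only when onnxruntime-gpu and a working CUDA setup are present.
+ print(onnxruntime.get_available_providers())
+ ```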
41
+
42
+ ## Change Log
43
+
44
+ ### [0.7.1] - 2022-12-14
45
+
46
+ #### Changed
47
+
48
+ - Change model downloading provider to cloudfront.
49
+
50
+ ### [0.7] - 2022-11-28
51
+
52
+ #### Added
53
+
54
+ - Add face swapping model and example.
55
+
56
+ #### Changed
57
+
58
+ - Set default ORT provider to CUDA and CPU.
59
+
60
+ ### [0.6] - 2022-01-29
61
+
62
+ #### Added
63
+
64
+ - Add pose estimation in face-analysis app.
65
+
66
+ #### Changed
67
+
68
+ - Change model automated downloading url, to ucloud.
69
+
70
+
71
+ ## Quick Example
72
+
73
+ ```
74
+ import cv2
75
+ import numpy as np
76
+ import insightface
77
+ from insightface.app import FaceAnalysis
78
+ from insightface.data import get_image as ins_get_image
79
+
80
+ app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
81
+ app.prepare(ctx_id=0, det_size=(640, 640))
82
+ img = ins_get_image('t1')
83
+ faces = app.get(img)
84
+ rimg = app.draw_on(img, faces)
85
+ cv2.imwrite("./t1_output.jpg", rimg)
86
+ ```
87
+
88
+ This quick example will detect faces from the ``t1.jpg`` image and draw detection results on it.
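+
+ Each element of ``faces`` is a face object whose fields you can read directly (a hedged sketch; the exact attribute set depends on which analysis models are enabled and may differ between versions):
+
+ ```
+ for face in faces:
+     # Detection outputs: bounding box, 5-point landmarks and confidence score.
+     print(face.bbox, face.kps, face.det_score)
+     # Recognition embedding and age are only filled in when those models are loaded.
+     print(getattr(face, 'age', None), face.embedding.shape)
+ ```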
89
+
90
+
91
+
92
+ ## Model Zoo
93
+
94
+ In the latest version of the insightface library, we provide the following model packs:
95
+
96
+ Name in **bold** is the default model pack. **Auto** means we can download the model pack through the python library directly.
97
+
98
+ Once you have manually downloaded a zip model pack, unzip it under `~/.insightface/models/` before you call the program.
99
+
100
+ | Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size | Link | Auto |
101
+ | ------------- | --------------- | -------------------- | ------------ | ---------- | ---------- | ------------------------------------------------------------ | ------------- |
102
+ | antelopev2 | SCRFD-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB | [link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view?usp=sharing) | N |
103
+ | **buffalo_l** | SCRFD-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB | [link](https://drive.google.com/file/d/1qXsQJ8ZT42_xSmWIYy85IcidpiZudOCB/view?usp=sharing) | Y |
104
+ | buffalo_m | SCRFD-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB | [link](https://drive.google.com/file/d/1net68yNxF33NNV6WP7k56FS6V53tq-64/view?usp=sharing) | N |
105
+ | buffalo_s | SCRFD-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB | [link](https://drive.google.com/file/d/1pKIusApEfoHKDjeBTXYB3yOQ0EtTonNE/view?usp=sharing) | N |
106
+ | buffalo_sc | SCRFD-500MF | MBF@WebFace600K | - | - | 16MB | [link](https://drive.google.com/file/d/19I-MZdctYKmVf3nu5Da3HS6KH5LBfdzG/view?usp=sharing) | N |
107
+
108
+
109
+
110
+ Recognition Accuracy:
111
+
112
+ | Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
113
+ | :-------- | ------ | ------- | --------- | ----------- | ---------- | ----- | ------ | -------- | --------- |
114
+ | buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
115
+ | buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
116
+
117
+ *buffalo_m has the same accuracy as buffalo_l.*
118
+
119
+ *buffalo_sc has the same accuracy as buffalo_s.*
120
+
121
+
122
+
123
+ **Note that these models are available for non-commercial research purposes only.**
124
+
125
+
126
+
127
+ For insightface>=0.3.3, models will be downloaded automatically once we initialize an ``app = FaceAnalysis()`` instance.
128
+
129
+ For insightface==0.3.2, you must first download the model package by command:
130
+
131
+ ```
132
+ insightface-cli model.download buffalo_l
133
+ ```
134
+
135
+ ## Use Your Own Licensed Model
136
+
137
+ You can simply create a new model directory under ``~/.insightface/models/`` and replace the pretrained models we provide with your own models. And then call ``app = FaceAnalysis(name='your_model_zoo')`` to load these models.
138
+
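+ For example, if you unpack your own ONNX models into ``~/.insightface/models/your_model_zoo/`` (``your_model_zoo`` being a placeholder directory name), loading them looks the same as loading a bundled pack:
+
+ ```python
+ from insightface.app import FaceAnalysis
+
+ # 'your_model_zoo' is the placeholder directory created above
+ app = FaceAnalysis(name='your_model_zoo')
+ app.prepare(ctx_id=0, det_size=(640, 640))
+ ```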
139
+ ## Call Models
140
+
141
+ The latest insightface library only supports ONNX models. Once you have trained detection or recognition models with PyTorch, MXNet or any other framework, you can convert them to the ONNX format and then call them with the insightface library.
142
+
143
+ ### Call Detection Models
144
+
145
+ ```
146
+ import cv2
147
+ import numpy as np
148
+ import insightface
149
+ from insightface.app import FaceAnalysis
150
+ from insightface.data import get_image as ins_get_image
151
+
152
+ # Method-1, use FaceAnalysis
153
+ app = FaceAnalysis(allowed_modules=['detection']) # enable detection model only
154
+ app.prepare(ctx_id=0, det_size=(640, 640))
155
+
156
+ # Method-2, load model directly
157
+ detector = insightface.model_zoo.get_model('your_detection_model.onnx')
158
+ detector.prepare(ctx_id=0, input_size=(640, 640))
159
+
160
+ ```
161
+
162
+ ### Call Recognition Models
163
+
164
+ ```
165
+ import cv2
166
+ import numpy as np
167
+ import insightface
168
+ from insightface.app import FaceAnalysis
169
+ from insightface.data import get_image as ins_get_image
170
+
171
+ handler = insightface.model_zoo.get_model('your_recognition_model.onnx')
172
+ handler.prepare(ctx_id=0)
173
+
174
+ ```
175
+
176
+
insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl ADDED
Binary file (872 kB). View file
 
insightface/insightface-0.7.3-cp311-cp311-win_amd64.whl.metadata ADDED
@@ -0,0 +1,176 @@
1
+ Metadata-Version: 2.1
2
+ Name: insightface
3
+ Version: 0.7.3
4
+ Summary: InsightFace Python Library
5
+ Home-page: https://github.com/deepinsight/insightface
6
+ Author: InsightFace Contributors
7
+ Author-email: [email protected]
8
+ License: MIT
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy
11
+ Requires-Dist: onnx
12
+ Requires-Dist: tqdm
13
+ Requires-Dist: requests
14
+ Requires-Dist: matplotlib
15
+ Requires-Dist: Pillow
16
+ Requires-Dist: scipy
17
+ Requires-Dist: scikit-learn
18
+ Requires-Dist: scikit-image
19
+ Requires-Dist: easydict
20
+ Requires-Dist: cython
21
+ Requires-Dist: albumentations
22
+ Requires-Dist: prettytable
23
+
24
+ # InsightFace Python Library
25
+
26
+ ## License
27
+
28
+ The code of InsightFace Python Library is released under the MIT License. There is no limitation for both academic and commercial usage.
29
+
30
+ **The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
31
+
32
+ ## Install
33
+
34
+ ### Install Inference Backend
35
+
36
+ For ``insightface<=0.1.5``, we use MXNet as inference backend.
37
+
38
+ Starting from insightface>=0.2, we use onnxruntime as inference backend.
39
+
40
+ You have to install ``onnxruntime-gpu`` manually to enable GPU inference, or install ``onnxruntime`` to use CPU-only inference.
41
+
42
+ ## Change Log
43
+
44
+ ### [0.7.1] - 2022-12-14
45
+
46
+ #### Changed
47
+
48
+ - Change model downloading provider to cloudfront.
49
+
50
+ ### [0.7] - 2022-11-28
51
+
52
+ #### Added
53
+
54
+ - Add face swapping model and example.
55
+
56
+ #### Changed
57
+
58
+ - Set default ORT provider to CUDA and CPU.
59
+
60
+ ### [0.6] - 2022-01-29
61
+
62
+ #### Added
63
+
64
+ - Add pose estimation in face-analysis app.
65
+
66
+ #### Changed
67
+
68
+ - Change the automated model download URL to ucloud.
69
+
70
+
71
+ ## Quick Example
72
+
73
+ ```
74
+ import cv2
75
+ import numpy as np
76
+ import insightface
77
+ from insightface.app import FaceAnalysis
78
+ from insightface.data import get_image as ins_get_image
79
+
80
+ app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
81
+ app.prepare(ctx_id=0, det_size=(640, 640))
82
+ img = ins_get_image('t1')
83
+ faces = app.get(img)
84
+ rimg = app.draw_on(img, faces)
85
+ cv2.imwrite("./t1_output.jpg", rimg)
86
+ ```
87
+
88
+ This quick example will detect faces from the ``t1.jpg`` image and draw detection results on it.
89
+
90
+
91
+
92
+ ## Model Zoo
93
+
94
+ In the latest version of the insightface library, we provide the following model packs:
95
+
96
+ Name in **bold** is the default model pack. **Auto** means we can download the model pack through the python library directly.
97
+
98
+ Once you have manually downloaded a zip model pack, unzip it under `~/.insightface/models/` before you run the program.
99
+
100
+ | Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size | Link | Auto |
101
+ | ------------- | --------------- | -------------------- | ------------ | ---------- | ---------- | ------------------------------------------------------------ | ------------- |
102
+ | antelopev2 | SCRFD-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB | [link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view?usp=sharing) | N |
103
+ | **buffalo_l** | SCRFD-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB | [link](https://drive.google.com/file/d/1qXsQJ8ZT42_xSmWIYy85IcidpiZudOCB/view?usp=sharing) | Y |
104
+ | buffalo_m | SCRFD-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB | [link](https://drive.google.com/file/d/1net68yNxF33NNV6WP7k56FS6V53tq-64/view?usp=sharing) | N |
105
+ | buffalo_s | SCRFD-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB | [link](https://drive.google.com/file/d/1pKIusApEfoHKDjeBTXYB3yOQ0EtTonNE/view?usp=sharing) | N |
106
+ | buffalo_sc | SCRFD-500MF | MBF@WebFace600K | - | - | 16MB | [link](https://drive.google.com/file/d/19I-MZdctYKmVf3nu5Da3HS6KH5LBfdzG/view?usp=sharing) | N |
107
+
108
+
109
+
110
+ Recognition Accuracy:
111
+
112
+ | Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
113
+ | :-------- | ------ | ------- | --------- | ----------- | ---------- | ----- | ------ | -------- | --------- |
114
+ | buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
115
+ | buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
116
+
117
+ *buffalo_m has the same accuracy as buffalo_l.*
118
+
119
+ *buffalo_sc has the same accuracy as buffalo_s.*
120
+
121
+
122
+
123
+ **Note that these models are available for non-commercial research purposes only.**
124
+
125
+
126
+
127
+ For insightface>=0.3.3, models will be downloaded automatically once we initialize an ``app = FaceAnalysis()`` instance.
128
+
129
+ For insightface==0.3.2, you must first download the model package by command:
130
+
131
+ ```
132
+ insightface-cli model.download buffalo_l
133
+ ```
134
+
135
+ ## Use Your Own Licensed Model
136
+
137
+ You can simply create a new model directory under ``~/.insightface/models/`` and replace the pretrained models we provide with your own models. And then call ``app = FaceAnalysis(name='your_model_zoo')`` to load these models.
138
+
139
+ ## Call Models
140
+
141
+ The latest insightface library only supports ONNX models. Once you have trained detection or recognition models with PyTorch, MXNet or any other framework, you can convert them to the ONNX format and then call them with the insightface library.
142
+
143
+ ### Call Detection Models
144
+
145
+ ```
146
+ import cv2
147
+ import numpy as np
148
+ import insightface
149
+ from insightface.app import FaceAnalysis
150
+ from insightface.data import get_image as ins_get_image
151
+
152
+ # Method-1, use FaceAnalysis
153
+ app = FaceAnalysis(allowed_modules=['detection']) # enable detection model only
154
+ app.prepare(ctx_id=0, det_size=(640, 640))
155
+
156
+ # Method-2, load model directly
157
+ detector = insightface.model_zoo.get_model('your_detection_model.onnx')
158
+ detector.prepare(ctx_id=0, input_size=(640, 640))
159
+
160
+ ```
161
+
162
+ ### Call Recognition Models
163
+
164
+ ```
165
+ import cv2
166
+ import numpy as np
167
+ import insightface
168
+ from insightface.app import FaceAnalysis
169
+ from insightface.data import get_image as ins_get_image
170
+
171
+ handler = insightface.model_zoo.get_model('your_recognition_model.onnx')
172
+ handler.prepare(ctx_id=0)
173
+
174
+ ```
175
+
176
+
insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl ADDED
Binary file (873 kB). View file
 
insightface/insightface-0.7.3-cp312-cp312-win_amd64.whl.metadata ADDED
@@ -0,0 +1,176 @@
1
+ Metadata-Version: 2.1
2
+ Name: insightface
3
+ Version: 0.7.3
4
+ Summary: InsightFace Python Library
5
+ Home-page: https://github.com/deepinsight/insightface
6
+ Author: InsightFace Contributors
7
+ Author-email: [email protected]
8
+ License: MIT
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy
11
+ Requires-Dist: onnx
12
+ Requires-Dist: tqdm
13
+ Requires-Dist: requests
14
+ Requires-Dist: matplotlib
15
+ Requires-Dist: Pillow
16
+ Requires-Dist: scipy
17
+ Requires-Dist: scikit-learn
18
+ Requires-Dist: scikit-image
19
+ Requires-Dist: easydict
20
+ Requires-Dist: cython
21
+ Requires-Dist: albumentations
22
+ Requires-Dist: prettytable
23
+
24
+ # InsightFace Python Library
25
+
26
+ ## License
27
+
28
+ The code of InsightFace Python Library is released under the MIT License. There is no limitation for both academic and commercial usage.
29
+
30
+ **The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
31
+
32
+ ## Install
33
+
34
+ ### Install Inference Backend
35
+
36
+ For ``insightface<=0.1.5``, we use MXNet as inference backend.
37
+
38
+ Starting from insightface>=0.2, we use onnxruntime as inference backend.
39
+
40
+ You have to install ``onnxruntime-gpu`` manually to enable GPU inference, or install ``onnxruntime`` to use CPU-only inference.
41
+
42
+ ## Change Log
43
+
44
+ ### [0.7.1] - 2022-12-14
45
+
46
+ #### Changed
47
+
48
+ - Change model downloading provider to cloudfront.
49
+
50
+ ### [0.7] - 2022-11-28
51
+
52
+ #### Added
53
+
54
+ - Add face swapping model and example.
55
+
56
+ #### Changed
57
+
58
+ - Set default ORT provider to CUDA and CPU.
59
+
60
+ ### [0.6] - 2022-01-29
61
+
62
+ #### Added
63
+
64
+ - Add pose estimation in face-analysis app.
65
+
66
+ #### Changed
67
+
68
+ - Change the automated model download URL to ucloud.
69
+
70
+
71
+ ## Quick Example
72
+
73
+ ```
74
+ import cv2
75
+ import numpy as np
76
+ import insightface
77
+ from insightface.app import FaceAnalysis
78
+ from insightface.data import get_image as ins_get_image
79
+
80
+ app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
81
+ app.prepare(ctx_id=0, det_size=(640, 640))
82
+ img = ins_get_image('t1')
83
+ faces = app.get(img)
84
+ rimg = app.draw_on(img, faces)
85
+ cv2.imwrite("./t1_output.jpg", rimg)
86
+ ```
87
+
88
+ This quick example will detect faces from the ``t1.jpg`` image and draw detection results on it.
89
+
90
+
91
+
92
+ ## Model Zoo
93
+
94
+ In the latest version of the insightface library, we provide the following model packs:
95
+
96
+ Name in **bold** is the default model pack. **Auto** means we can download the model pack through the python library directly.
97
+
98
+ Once you have manually downloaded a zip model pack, unzip it under `~/.insightface/models/` before you run the program.
99
+
100
+ | Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size | Link | Auto |
101
+ | ------------- | --------------- | -------------------- | ------------ | ---------- | ---------- | ------------------------------------------------------------ | ------------- |
102
+ | antelopev2 | SCRFD-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB | [link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view?usp=sharing) | N |
103
+ | **buffalo_l** | SCRFD-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB | [link](https://drive.google.com/file/d/1qXsQJ8ZT42_xSmWIYy85IcidpiZudOCB/view?usp=sharing) | Y |
104
+ | buffalo_m | SCRFD-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB | [link](https://drive.google.com/file/d/1net68yNxF33NNV6WP7k56FS6V53tq-64/view?usp=sharing) | N |
105
+ | buffalo_s | SCRFD-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB | [link](https://drive.google.com/file/d/1pKIusApEfoHKDjeBTXYB3yOQ0EtTonNE/view?usp=sharing) | N |
106
+ | buffalo_sc | SCRFD-500MF | MBF@WebFace600K | - | - | 16MB | [link](https://drive.google.com/file/d/19I-MZdctYKmVf3nu5Da3HS6KH5LBfdzG/view?usp=sharing) | N |
107
+
108
+
109
+
110
+ Recognition Accuracy:
111
+
112
+ | Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
113
+ | :-------- | ------ | ------- | --------- | ----------- | ---------- | ----- | ------ | -------- | --------- |
114
+ | buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
115
+ | buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
116
+
117
+ *buffalo_m has the same accuracy as buffalo_l.*
118
+
119
+ *buffalo_sc has the same accuracy as buffalo_s.*
120
+
121
+
122
+
123
+ **Note that these models are available for non-commercial research purposes only.**
124
+
125
+
126
+
127
+ For insightface>=0.3.3, models will be downloaded automatically once we initialize an ``app = FaceAnalysis()`` instance.
128
+
129
+ For insightface==0.3.2, you must first download the model package by command:
130
+
131
+ ```
132
+ insightface-cli model.download buffalo_l
133
+ ```
134
+
135
+ ## Use Your Own Licensed Model
136
+
137
+ You can simply create a new model directory under ``~/.insightface/models/`` and replace the pretrained models we provide with your own models. And then call ``app = FaceAnalysis(name='your_model_zoo')`` to load these models.
138
+
139
+ ## Call Models
140
+
141
+ The latest insightface library only supports ONNX models. Once you have trained detection or recognition models with PyTorch, MXNet or any other framework, you can convert them to the ONNX format and then call them with the insightface library.
142
+
143
+ ### Call Detection Models
144
+
145
+ ```
146
+ import cv2
147
+ import numpy as np
148
+ import insightface
149
+ from insightface.app import FaceAnalysis
150
+ from insightface.data import get_image as ins_get_image
151
+
152
+ # Method-1, use FaceAnalysis
153
+ app = FaceAnalysis(allowed_modules=['detection']) # enable detection model only
154
+ app.prepare(ctx_id=0, det_size=(640, 640))
155
+
156
+ # Method-2, load model directly
157
+ detector = insightface.model_zoo.get_model('your_detection_model.onnx')
158
+ detector.prepare(ctx_id=0, input_size=(640, 640))
159
+
160
+ ```
161
+
162
+ ### Call Recognition Models
163
+
164
+ ```
165
+ import cv2
166
+ import numpy as np
167
+ import insightface
168
+ from insightface.app import FaceAnalysis
169
+ from insightface.data import get_image as ins_get_image
170
+
171
+ handler = insightface.model_zoo.get_model('your_recognition_model.onnx')
172
+ handler.prepare(ctx_id=0)
173
+
174
+ ```
175
+
176
+
insightface/insightface-0.7.3-cp39-cp39-win_amd64.whl ADDED
Binary file (842 kB). View file
 
insightface/insightface-0.7.3-cp39-cp39-win_amd64.whl.metadata ADDED
@@ -0,0 +1,176 @@
1
+ Metadata-Version: 2.1
2
+ Name: insightface
3
+ Version: 0.7.3
4
+ Summary: InsightFace Python Library
5
+ Home-page: https://github.com/deepinsight/insightface
6
+ Author: InsightFace Contributors
7
+ Author-email: [email protected]
8
+ License: MIT
9
+ Description-Content-Type: text/markdown
10
+ Requires-Dist: numpy
11
+ Requires-Dist: onnx
12
+ Requires-Dist: tqdm
13
+ Requires-Dist: requests
14
+ Requires-Dist: matplotlib
15
+ Requires-Dist: Pillow
16
+ Requires-Dist: scipy
17
+ Requires-Dist: scikit-learn
18
+ Requires-Dist: scikit-image
19
+ Requires-Dist: easydict
20
+ Requires-Dist: cython
21
+ Requires-Dist: albumentations
22
+ Requires-Dist: prettytable
23
+
24
+ # InsightFace Python Library
25
+
26
+ ## License
27
+
28
+ The code of InsightFace Python Library is released under the MIT License. There is no limitation for both academic and commercial usage.
29
+
30
+ **The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
31
+
32
+ ## Install
33
+
34
+ ### Install Inference Backend
35
+
36
+ For ``insightface<=0.1.5``, we use MXNet as inference backend.
37
+
38
+ Starting from insightface>=0.2, we use onnxruntime as inference backend.
39
+
40
+ You have to install ``onnxruntime-gpu`` manually to enable GPU inference, or install ``onnxruntime`` to use CPU-only inference.
41
+
42
+ ## Change Log
43
+
44
+ ### [0.7.1] - 2022-12-14
45
+
46
+ #### Changed
47
+
48
+ - Change model downloading provider to cloudfront.
49
+
50
+ ### [0.7] - 2022-11-28
51
+
52
+ #### Added
53
+
54
+ - Add face swapping model and example.
55
+
56
+ #### Changed
57
+
58
+ - Set default ORT provider to CUDA and CPU.
59
+
60
+ ### [0.6] - 2022-01-29
61
+
62
+ #### Added
63
+
64
+ - Add pose estimation in face-analysis app.
65
+
66
+ #### Changed
67
+
68
+ - Change the automated model download URL to ucloud.
69
+
70
+
71
+ ## Quick Example
72
+
73
+ ```
74
+ import cv2
75
+ import numpy as np
76
+ import insightface
77
+ from insightface.app import FaceAnalysis
78
+ from insightface.data import get_image as ins_get_image
79
+
80
+ app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
81
+ app.prepare(ctx_id=0, det_size=(640, 640))
82
+ img = ins_get_image('t1')
83
+ faces = app.get(img)
84
+ rimg = app.draw_on(img, faces)
85
+ cv2.imwrite("./t1_output.jpg", rimg)
86
+ ```
87
+
88
+ This quick example will detect faces from the ``t1.jpg`` image and draw detection results on it.
89
+
90
+
91
+
92
+ ## Model Zoo
93
+
94
+ In the latest version of the insightface library, we provide the following model packs:
95
+
96
+ Name in **bold** is the default model pack. **Auto** means we can download the model pack through the python library directly.
97
+
98
+ Once you have manually downloaded a zip model pack, unzip it under `~/.insightface/models/` before you run the program.
99
+
100
+ | Name | Detection Model | Recognition Model | Alignment | Attributes | Model-Size | Link | Auto |
101
+ | ------------- | --------------- | -------------------- | ------------ | ---------- | ---------- | ------------------------------------------------------------ | ------------- |
102
+ | antelopev2 | SCRFD-10GF | ResNet100@Glint360K | 2d106 & 3d68 | Gender&Age | 407MB | [link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view?usp=sharing) | N |
103
+ | **buffalo_l** | SCRFD-10GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 326MB | [link](https://drive.google.com/file/d/1qXsQJ8ZT42_xSmWIYy85IcidpiZudOCB/view?usp=sharing) | Y |
104
+ | buffalo_m | SCRFD-2.5GF | ResNet50@WebFace600K | 2d106 & 3d68 | Gender&Age | 313MB | [link](https://drive.google.com/file/d/1net68yNxF33NNV6WP7k56FS6V53tq-64/view?usp=sharing) | N |
105
+ | buffalo_s | SCRFD-500MF | MBF@WebFace600K | 2d106 & 3d68 | Gender&Age | 159MB | [link](https://drive.google.com/file/d/1pKIusApEfoHKDjeBTXYB3yOQ0EtTonNE/view?usp=sharing) | N |
106
+ | buffalo_sc | SCRFD-500MF | MBF@WebFace600K | - | - | 16MB | [link](https://drive.google.com/file/d/19I-MZdctYKmVf3nu5Da3HS6KH5LBfdzG/view?usp=sharing) | N |
107
+
108
+
109
+
110
+ Recognition Accuracy:
111
+
112
+ | Name | MR-ALL | African | Caucasian | South Asian | East Asian | LFW | CFP-FP | AgeDB-30 | IJB-C(E4) |
113
+ | :-------- | ------ | ------- | --------- | ----------- | ---------- | ----- | ------ | -------- | --------- |
114
+ | buffalo_l | 91.25 | 90.29 | 94.70 | 93.16 | 74.96 | 99.83 | 99.33 | 98.23 | 97.25 |
115
+ | buffalo_s | 71.87 | 69.45 | 80.45 | 73.39 | 51.03 | 99.70 | 98.00 | 96.58 | 95.02 |
116
+
117
+ *buffalo_m has the same accuracy as buffalo_l.*
118
+
119
+ *buffalo_sc has the same accuracy as buffalo_s.*
120
+
121
+
122
+
123
+ **Note that these models are available for non-commercial research purposes only.**
124
+
125
+
126
+
127
+ For insightface>=0.3.3, models will be downloaded automatically once we initialize an ``app = FaceAnalysis()`` instance.
128
+
129
+ For insightface==0.3.2, you must first download the model package by command:
130
+
131
+ ```
132
+ insightface-cli model.download buffalo_l
133
+ ```
134
+
135
+ ## Use Your Own Licensed Model
136
+
137
+ You can simply create a new model directory under ``~/.insightface/models/`` and replace the pretrained models we provide with your own models. And then call ``app = FaceAnalysis(name='your_model_zoo')`` to load these models.
138
+
139
+ ## Call Models
140
+
141
+ The latest insightface library only supports ONNX models. Once you have trained detection or recognition models with PyTorch, MXNet or any other framework, you can convert them to the ONNX format and then call them with the insightface library.
142
+
143
+ ### Call Detection Models
144
+
145
+ ```
146
+ import cv2
147
+ import numpy as np
148
+ import insightface
149
+ from insightface.app import FaceAnalysis
150
+ from insightface.data import get_image as ins_get_image
151
+
152
+ # Method-1, use FaceAnalysis
153
+ app = FaceAnalysis(allowed_modules=['detection']) # enable detection model only
154
+ app.prepare(ctx_id=0, det_size=(640, 640))
155
+
156
+ # Method-2, load model directly
157
+ detector = insightface.model_zoo.get_model('your_detection_model.onnx')
158
+ detector.prepare(ctx_id=0, input_size=(640, 640))
159
+
160
+ ```
161
+
162
+ ### Call Recognition Models
163
+
164
+ ```
165
+ import cv2
166
+ import numpy as np
167
+ import insightface
168
+ from insightface.app import FaceAnalysis
169
+ from insightface.data import get_image as ins_get_image
170
+
171
+ handler = insightface.model_zoo.get_model('your_recognition_model.onnx')
172
+ handler.prepare(ctx_id=0)
173
+
174
+ ```
175
+
176
+
intel-extension-for-pytorch/index.html ADDED
@@ -0,0 +1,36 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for intel-extension-for-pytorch
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for intel-extension-for-pytorch
14
+ </h1>
15
+ <a href="/intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl#sha256=0207f841efc2a742f402613abe391470a61d301e3a7a843b59c70f5c2a1f9255" data-dist-info-metadata="sha256=94ce2770d749bbd17ec8842acb1eff3eb2e62725ad7abb74e6b034ce8a33596e">
16
+ intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl
17
+ </a>
18
+ <br />
19
+ <a href="/intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl#sha256=c8f52e6b118cff5f310d498f82337ee869d6e03a25b9b1218cc47a2dc7077a9e" data-dist-info-metadata="sha256=84ea36486fc23bbce7e8fb8822a5678a7d871da36f5908302919759a5f428c90">
20
+ intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl
21
+ </a>
22
+ <br />
23
+ <a href="/intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl#sha256=83b4963eb9d8bb857292a61c6b5bc50863c8e8774fb71d553c24b9592b59332c" data-dist-info-metadata="sha256=84ea36486fc23bbce7e8fb8822a5678a7d871da36f5908302919759a5f428c90">
24
+ intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl
25
+ </a>
26
+ <br />
27
+ <a href="/intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl#sha256=b0c3b42a2092d1a9000e62adf05ca718a2927cd36a0cb586830b6ff97fc9b54a" data-dist-info-metadata="sha256=a11cfa0213e94fa02fa570c6a304e4368234010ef0840561b9f74af4885eb192">
28
+ intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl
29
+ </a>
30
+ <br />
31
+ <a href="/intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl#sha256=d0f59ee70a5d9fe59890cc799201af9ffca031e039947a3511fae64c66100464" data-dist-info-metadata="sha256=14f1085ded0149ce0caf81479ea0f6d14e25f97abff8746b23e39cc2d712565b">
32
+ intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl
33
+ </a>
34
+ <br />
35
+ </body>
36
+ </html>
intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0f59ee70a5d9fe59890cc799201af9ffca031e039947a3511fae64c66100464
3
+ size 551451963
intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+git632f70a-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,108 @@
1
+ Metadata-Version: 2.1
2
+ Name: intel-extension-for-pytorch
3
+ Version: 2.0.110+git632f70a
4
+ Summary: Intel® Extension for PyTorch*
5
+ Home-page: https://github.com/intel/intel-extension-for-pytorch
6
+ Author: Intel Corp.
7
+ License: https://www.apache.org/licenses/LICENSE-2.0
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE
11
+ Requires-Dist: psutil
12
+ Requires-Dist: numpy
13
+
14
+ # Intel® Extension for PyTorch\*
15
+
16
+ Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
17
+
18
+ Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode; however, compared to eager mode, graph mode in PyTorch\* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend `torch.jit.trace()` as your first choice.
19
+
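+ A minimal sketch of that recommendation, assuming an already-optimized FP32 model and a representative example input (``torch.jit.freeze`` is optional but commonly paired with tracing):
+
+ ```python
+ import torch
+ import torchvision.models as models
+ import intel_extension_for_pytorch as ipex
+
+ model = models.resnet50(pretrained=True).eval()
+ data = torch.rand(1, 3, 224, 224)
+
+ model = ipex.optimize(model)
+
+ # trace once with a representative input, then reuse the frozen graph
+ with torch.no_grad():
+     traced = torch.jit.trace(model, data)
+     traced = torch.jit.freeze(traced)
+     traced(data)
+ ```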
20
+ The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
21
+
22
+ * Check [CPU tutorial](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® CPUs. Source code is available at the [master branch](https://github.com/intel/intel-extension-for-pytorch/tree/master).
23
+ * Check [GPU tutorial](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® GPUs. Source code is available at the [xpu-master branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master).
24
+
25
+ ## Installation
26
+
27
+ ### CPU version
28
+
29
+ You can use either of the following 2 commands to install Intel® Extension for PyTorch\* CPU version.
30
+
31
+ ```bash
32
+ python -m pip install intel_extension_for_pytorch
33
+ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
34
+ ```
35
+
36
+ **Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check the links below for more detailed information.
37
+
38
+ More installation methods can be found at [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
39
+
40
+ Compilation instruction of the latest CPU code base `master` branch can be found at [Installation Guide](https://github.com/intel/intel-extension-for-pytorch/blob/master/docs/tutorials/installation.md#install-via-compiling-from-source).
41
+
42
+ ### GPU version
43
+
44
+ You can install Intel® Extension for PyTorch\* for GPU via command below.
45
+
46
+ ```bash
47
+ python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
48
+ ```
49
+
50
+ **Note:** The patched PyTorch 2.0.1 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.
51
+
52
+ More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
53
+
54
+ Compilation instruction of the latest GPU code base `xpu-master` branch can be found at [Installation Guide For Linux/WSL2](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-master/docs/tutorials/installations/linux.rst#install-via-compiling-from-source) and [Installation Guide For Windows](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-master/docs/tutorials/installations/windows.rst#install-via-compiling-from-source).
55
+
56
+ ## Getting Started
57
+
58
+ Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.
59
+
60
+ The following code snippet shows an inference code with FP32 data type. More examples on CPU, including training and C++ examples, are available at [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on GPU are available at [GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).
61
+
62
+ ### Inference on CPU
63
+
64
+ ```python
65
+ import torch
66
+ import torchvision.models as models
67
+
68
+ model = models.resnet50(pretrained=True)
69
+ model.eval()
70
+ data = torch.rand(1, 3, 224, 224)
71
+
72
+ import intel_extension_for_pytorch as ipex
73
+ model = ipex.optimize(model)
74
+
75
+ with torch.no_grad():
76
+ model(data)
77
+ ```
78
+
79
+ ### Inference on GPU
80
+
81
+ ```python
82
+ import torch
83
+ import torchvision.models as models
84
+
85
+ model = models.resnet50(pretrained=True)
86
+ model.eval()
87
+ data = torch.rand(1, 3, 224, 224)
88
+
89
+ import intel_extension_for_pytorch as ipex
90
+ model = model.to('xpu')
91
+ data = data.to('xpu')
92
+ model = ipex.optimize(model)
93
+
94
+ with torch.no_grad():
95
+ model(data)
96
+ ```
97
+
98
+ ## License
99
+
100
+ _Apache License_, Version _2.0_. As found in [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/master/LICENSE) file.
101
+
102
+ ## Security
103
+
104
+ See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
105
+ for information on how to report a potential security issue or vulnerability.
106
+
107
+ See also: [Security Policy](SECURITY.md)
108
+
intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0c3b42a2092d1a9000e62adf05ca718a2927cd36a0cb586830b6ff97fc9b54a
3
+ size 466814181
intel-extension-for-pytorch/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,108 @@
1
+ Metadata-Version: 2.1
2
+ Name: intel-extension-for-pytorch
3
+ Version: 2.0.110+gitc6ea20b
4
+ Summary: Intel® Extension for PyTorch*
5
+ Home-page: https://github.com/intel/intel-extension-for-pytorch
6
+ Author: Intel Corp.
7
+ License: https://www.apache.org/licenses/LICENSE-2.0
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE
11
+ Requires-Dist: psutil
12
+ Requires-Dist: numpy
13
+
14
+ # Intel® Extension for PyTorch\*
15
+
16
+ Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
17
+
18
+ Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode; however, compared to eager mode, graph mode in PyTorch\* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend `torch.jit.trace()` as your first choice.
19
+
20
+ The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
21
+
22
+ * Check [CPU tutorial](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® CPUs. Source code is available at the [master branch](https://github.com/intel/intel-extension-for-pytorch/tree/master).
23
+ * Check [GPU tutorial](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® GPUs. Source code is available at the [xpu-master branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master).
24
+
25
+ ## Installation
26
+
27
+ ### CPU version
28
+
29
+ You can use either of the following 2 commands to install Intel® Extension for PyTorch\* CPU version.
30
+
31
+ ```bash
32
+ python -m pip install intel_extension_for_pytorch
33
+ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
34
+ ```
35
+
36
+ **Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check the links below for more detailed information.
37
+
38
+ More installation methods can be found at [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
39
+
40
+ Compilation instruction of the latest CPU code base `master` branch can be found at [Installation Guide](https://github.com/intel/intel-extension-for-pytorch/blob/master/docs/tutorials/installation.md#install-via-compiling-from-source).
41
+
42
+ ### GPU version
43
+
44
+ You can install Intel® Extension for PyTorch\* for GPU via command below.
45
+
46
+ ```bash
47
+ python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
48
+ ```
49
+
50
+ **Note:** The patched PyTorch 2.0.1 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.
51
+
52
+ More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
53
+
54
+ Compilation instruction of the latest GPU code base `xpu-master` branch can be found at [Installation Guide For Linux/WSL2](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-master/docs/tutorials/installations/linux.rst#install-via-compiling-from-source) and [Installation Guide For Windows](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-master/docs/tutorials/installations/windows.rst#install-via-compiling-from-source).
55
+
56
+ ## Getting Started
57
+
58
+ Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.
59
+
60
+ The following code snippet shows an inference code with FP32 data type. More examples on CPU, including training and C++ examples, are available at [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on GPU are available at [GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).
61
+
62
+ ### Inference on CPU
63
+
64
+ ```python
65
+ import torch
66
+ import torchvision.models as models
67
+
68
+ model = models.resnet50(pretrained=True)
69
+ model.eval()
70
+ data = torch.rand(1, 3, 224, 224)
71
+
72
+ import intel_extension_for_pytorch as ipex
73
+ model = ipex.optimize(model)
74
+
75
+ with torch.no_grad():
76
+ model(data)
77
+ ```
78
+
79
+ ### Inference on GPU
80
+
81
+ ```python
82
+ import torch
83
+ import torchvision.models as models
84
+
85
+ model = models.resnet50(pretrained=True)
86
+ model.eval()
87
+ data = torch.rand(1, 3, 224, 224)
88
+
89
+ import intel_extension_for_pytorch as ipex
90
+ model = model.to('xpu')
91
+ data = data.to('xpu')
92
+ model = ipex.optimize(model)
93
+
94
+ with torch.no_grad():
95
+ model(data)
96
+ ```
97
+
98
+ ## License
99
+
100
+ _Apache License_, Version _2.0_. As found in [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/master/LICENSE) file.
101
+
102
+ ## Security
103
+
104
+ See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
105
+ for information on how to report a potential security issue or vulnerability.
106
+
107
+ See also: [Security Policy](SECURITY.md)
108
+
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:83b4963eb9d8bb857292a61c6b5bc50863c8e8774fb71d553c24b9592b59332c
3
+ size 367153302
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,118 @@
1
+ Metadata-Version: 2.1
2
+ Name: intel-extension-for-pytorch
3
+ Version: 2.1.10+xpu
4
+ Summary: Intel® Extension for PyTorch*
5
+ Home-page: https://github.com/intel/intel-extension-for-pytorch
6
+ Author: Intel Corp.
7
+ License: https://www.apache.org/licenses/LICENSE-2.0
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE
11
+ Requires-Dist: psutil
12
+ Requires-Dist: numpy
13
+ Requires-Dist: packaging
14
+ Requires-Dist: pydantic
15
+
16
+ <div align="center">
17
+
18
+ Intel® Extension for Pytorch*
19
+ ===========================
20
+
21
+ [💻Examples](./docs/tutorials/examples.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖CPU Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖GPU Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)
22
+ </div>
23
+
24
+
25
+
26
+ Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
27
+
28
+ Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode; however, compared to eager mode, graph mode in PyTorch\* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend `torch.jit.trace()` as your first choice.
29
+
30
+ The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
31
+
32
+ * Check [CPU tutorial](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® CPUs. Source code is available at the [main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main).
33
+ * Check [GPU tutorial](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® GPUs. Source code is available at the [xpu-main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main).
34
+
35
+ ## Installation
36
+
37
+ ### CPU version
38
+
39
+ You can use either of the following 2 commands to install Intel® Extension for PyTorch\* CPU version.
40
+
41
+ ```bash
42
+ python -m pip install intel_extension_for_pytorch
43
+ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
44
+ ```
45
+
46
+ **Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check the links below for more detailed information.
47
+
48
+ More installation methods can be found at [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
49
+
50
+ Compilation instruction of the latest CPU code base `main` branch can be found at [Installation Guide](https://github.com/intel/intel-extension-for-pytorch/blob/main/docs/tutorials/installation.md#install-via-compiling-from-source).
51
+
52
+ ### GPU version
53
+
54
+ You can install Intel® Extension for PyTorch\* for GPU via command below.
55
+
56
+ ```bash
57
+ python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 intel_extension_for_pytorch==2.1.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
58
+ ```
59
+
60
+ **Note:** The patched PyTorch 2.1.0 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.
61
+
62
+ More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
63
+
64
+ Compilation instruction of the latest GPU code base `xpu-main` branch can be found at [Installation Guide For Linux/WSL2](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-main/docs/tutorials/installations/linux.rst#install-via-compiling-from-source) and [Installation Guide For Windows](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-main/docs/tutorials/installations/windows.rst#install-via-compiling-from-source).
65
+
66
+ ## Getting Started
67
+
68
+ Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.
69
+
70
+ The following code snippet shows an inference code with FP32 data type. More examples on CPU, including training and C++ examples, are available at [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on GPU are available at [GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).
71
+
72
+ ### Inference on CPU
73
+
74
+ ```python
75
+ import torch
76
+ import torchvision.models as models
77
+
78
+ model = models.resnet50(pretrained=True)
79
+ model.eval()
80
+ data = torch.rand(1, 3, 224, 224)
81
+
82
+ import intel_extension_for_pytorch as ipex
83
+ model = ipex.optimize(model)
84
+
85
+ with torch.no_grad():
86
+ model(data)
87
+ ```
88
+
89
+ ### Inference on GPU
90
+
91
+ ```python
92
+ import torch
93
+ import torchvision.models as models
94
+
95
+ model = models.resnet50(pretrained=True)
96
+ model.eval()
97
+ data = torch.rand(1, 3, 224, 224)
98
+
99
+ import intel_extension_for_pytorch as ipex
100
+ model = model.to('xpu')
101
+ data = data.to('xpu')
102
+ model = ipex.optimize(model)
103
+
104
+ with torch.no_grad():
105
+ model(data)
106
+ ```
107
+
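+ To confirm the ``xpu`` device is actually visible before moving tensors to it, a small sketch (``torch.xpu`` becomes available once ``intel_extension_for_pytorch`` has been imported; treat the exact helper names as assumptions for your installed build):
+
+ ```python
+ import torch
+ import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
+
+ if torch.xpu.is_available():
+     print(torch.xpu.get_device_name(0))
+ else:
+     print("No XPU device found; falling back to CPU")
+ ```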
108
+ ## License
109
+
110
+ _Apache License_, Version _2.0_. As found in [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/main/LICENSE) file.
111
+
112
+ ## Security
113
+
114
+ See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
115
+ for information on how to report a potential security issue or vulnerability.
116
+
117
+ See also: [Security Policy](SECURITY.md)
118
+
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c8f52e6b118cff5f310d498f82337ee869d6e03a25b9b1218cc47a2dc7077a9e
3
+ size 367159341
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.10+xpu-cp311-cp311-win_amd64_2.whl.metadata ADDED
@@ -0,0 +1,118 @@
1
+ Metadata-Version: 2.1
2
+ Name: intel-extension-for-pytorch
3
+ Version: 2.1.10+xpu
4
+ Summary: Intel® Extension for PyTorch*
5
+ Home-page: https://github.com/intel/intel-extension-for-pytorch
6
+ Author: Intel Corp.
7
+ License: https://www.apache.org/licenses/LICENSE-2.0
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE
11
+ Requires-Dist: psutil
12
+ Requires-Dist: numpy
13
+ Requires-Dist: packaging
14
+ Requires-Dist: pydantic
15
+
16
+ <div align="center">
17
+
18
+ Intel® Extension for Pytorch*
19
+ ===========================
20
+
21
+ [💻Examples](./docs/tutorials/examples.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖CPU Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖GPU Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)
22
+ </div>
23
+
24
+
25
+
26
+ Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
27
+
28
+ Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode; however, compared to eager mode, graph mode in PyTorch\* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend `torch.jit.trace()` as your first choice.
29
+
30
+ The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
31
+
32
+ * Check [CPU tutorial](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® CPUs. Source code is available at the [main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main).
33
+ * Check [GPU tutorial](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® GPUs. Source code is available at the [xpu-main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main).
34
+
35
+ ## Installation
36
+
37
+ ### CPU version
38
+
39
+ You can use either of the following 2 commands to install Intel® Extension for PyTorch\* CPU version.
40
+
41
+ ```bash
42
+ python -m pip install intel_extension_for_pytorch
43
+ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
44
+ ```
45
+
46
+ **Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check the links below for more detailed information.
47
+
48
+ More installation methods can be found at [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
49
+
50
+ Compilation instruction of the latest CPU code base `main` branch can be found at [Installation Guide](https://github.com/intel/intel-extension-for-pytorch/blob/main/docs/tutorials/installation.md#install-via-compiling-from-source).
51
+
52
+ ### GPU version
53
+
54
+ You can install Intel® Extension for PyTorch\* for GPU via command below.
55
+
56
+ ```bash
57
+ python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 intel_extension_for_pytorch==2.1.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
58
+ ```
59
+
60
+ **Note:** The patched PyTorch 2.1.0 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.
61
+
62
+ More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
63
+
64
+ Compilation instruction of the latest GPU code base `xpu-main` branch can be found at [Installation Guide For Linux/WSL2](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-main/docs/tutorials/installations/linux.rst#install-via-compiling-from-source) and [Installation Guide For Windows](https://github.com/intel/intel-extension-for-pytorch/blob/xpu-main/docs/tutorials/installations/windows.rst#install-via-compiling-from-source).
65
+
66
+ ## Getting Started
67
+
68
+ Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.
69
+
70
+ The following code snippet shows an inference code with FP32 data type. More examples on CPU, including training and C++ examples, are available at [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on GPU are available at [GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).
71
+
72
+ ### Inference on CPU
73
+
74
+ ```python
75
+ import torch
76
+ import torchvision.models as models
77
+
78
+ model = models.resnet50(pretrained=True)
79
+ model.eval()
80
+ data = torch.rand(1, 3, 224, 224)
81
+
82
+ import intel_extension_for_pytorch as ipex
83
+ model = ipex.optimize(model)
84
+
85
+ with torch.no_grad():
86
+ model(data)
87
+ ```
88
+
89
+ ### Inference on GPU
90
+
91
+ ```python
92
+ import torch
93
+ import torchvision.models as models
94
+
95
+ model = models.resnet50(pretrained=True)
96
+ model.eval()
97
+ data = torch.rand(1, 3, 224, 224)
98
+
99
+ import intel_extension_for_pytorch as ipex
100
+ model = model.to('xpu')
101
+ data = data.to('xpu')
102
+ model = ipex.optimize(model)
103
+
104
+ with torch.no_grad():
105
+ model(data)
106
+ ```
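+
+ Lower precision is also common on the GPU side; a hedged FP16 variant (assuming the xpu build provides the `torch.xpu.amp` autocast helper):
+
+ ```python
+ import torch
+ import torchvision.models as models
+ import intel_extension_for_pytorch as ipex
+
+ model = models.resnet50(pretrained=True)
+ model.eval()
+ data = torch.rand(1, 3, 224, 224)
+
+ model = model.to('xpu')
+ data = data.to('xpu')
+ # dtype=torch.float16 prepares the model for FP16 inference on the xpu device.
+ model = ipex.optimize(model, dtype=torch.float16)
+
+ with torch.no_grad(), torch.xpu.amp.autocast(enabled=True, dtype=torch.float16):
+     model(data)
+ ```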
107
+
108
+ ## License
109
+
110
+ _Apache License_, Version _2.0_. As found in [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/main/LICENSE) file.
111
+
112
+ ## Security
113
+
114
+ See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
115
+ for information on how to report a potential security issue or vulnerability.
116
+
117
+ See also: [Security Policy](SECURITY.md)
118
+
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0207f841efc2a742f402613abe391470a61d301e3a7a843b59c70f5c2a1f9255
3
+ size 483056266
intel-extension-for-pytorch/intel_extension_for_pytorch-2.1.20+git4849f3b-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,132 @@
1
+ Metadata-Version: 2.1
2
+ Name: intel-extension-for-pytorch
3
+ Version: 2.1.20+git4849f3b
4
+ Summary: Intel® Extension for PyTorch*
5
+ Home-page: https://github.com/intel/intel-extension-for-pytorch
6
+ Author: Intel Corp.
7
+ License: https://www.apache.org/licenses/LICENSE-2.0
8
+ Classifier: License :: OSI Approved :: Apache Software License
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE
11
+ Requires-Dist: psutil
12
+ Requires-Dist: numpy
13
+ Requires-Dist: packaging
14
+ Requires-Dist: pydantic
15
+
16
+ <div align="center">
17
+
18
+ Intel® Extension for PyTorch*
19
+ ===========================
20
+
21
+ [💻Examples](./docs/tutorials/examples.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖CPU Documentation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖GPU Documentation](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)
22
+ </div>
23
+
24
+
25
+
26
+ Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
27
+
28
+ Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode; however, compared to eager mode, graph mode in PyTorch\* normally yields better performance through optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. Therefore, we recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend it as your first choice.
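+
+ As a rough illustration of that recommendation (a minimal sketch, not taken from the official examples), an optimized model can be traced and frozen like this:
+
+ ```python
+ import torch
+ import torchvision.models as models
+ import intel_extension_for_pytorch as ipex
+
+ model = models.resnet50(pretrained=True)
+ model.eval()
+ data = torch.rand(1, 3, 224, 224)
+
+ model = ipex.optimize(model)
+
+ # Trace once with a representative input, then freeze so graph optimizations apply.
+ with torch.no_grad():
+     traced = torch.jit.trace(model, data)
+     traced = torch.jit.freeze(traced)
+     traced(data)
+ ```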
29
+
30
+ The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
31
+
32
+ In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLMs are introduced in the Intel® Extension for PyTorch\*.
33
+
34
+ * Check [CPU tutorial](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® CPUs. Source code is available at the [main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main).
35
+ * Check [GPU tutorial](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) for detailed information of Intel® Extension for PyTorch\* for Intel® GPUs. Source code is available at the [xpu-main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main).
36
+
37
+
38
+
39
+ ## Large Language Models (LLMs) Optimization
40
+
41
+ In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLMs are introduced in the Intel® Extension for PyTorch\*. Check [LLM optimizations CPU](./examples/cpu/inference/python/llm) and [LLM optimizations GPU](./examples/gpu/inference/python/llm) for details.
42
+
43
+
44
+ ## Installation
45
+
46
+ ### CPU version
47
+
48
+ You can use the following commands to install the Intel® Extension for PyTorch\* CPU version.
49
+
50
+ ```bash
51
+ python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
52
+ python -m pip install intel-extension-for-pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
53
+ # For users in the PRC, the following index can be used instead
54
+ python -m pip install intel-extension-for-pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/
55
+ ```
56
+
57
+ **Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check the detailed information via the links below.
58
+
59
+ More installation methods can be found at [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
60
+
61
+ Compilation instructions for the latest CPU code base (`main` branch) can be found in the Package `source` section of the [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).
62
+
63
+ ### GPU version
64
+
65
+ You can install Intel® Extension for PyTorch\* for GPU via the command below.
66
+
67
+ ```bash
68
+ python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
69
+ # For users in the PRC, the following index can be used instead
70
+ python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
71
+
72
+ ```
73
+
74
+ **Note:** The patched PyTorch 2.1.0 is required to work with Intel® Extension for PyTorch\* on Intel® graphics cards for now.
75
+
76
+ More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
77
+
78
+ Compilation instructions for the latest GPU code base (`xpu-main` branch) can be found in the Package `source` section of the [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
79
+
80
+ ## Getting Started
81
+
82
+ Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. For a training workload, the optimize function also needs to be applied to the optimizer object.
83
+
84
+ The following code snippets show inference with the FP32 data type. More examples on CPU, including training and C++ examples, are available at the [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on GPU are available at the [GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).
85
+
86
+ ### Inference on CPU
87
+
88
+ ```python
89
+ import torch
90
+ import torchvision.models as models
91
+
92
+ model = models.resnet50(pretrained=True)
93
+ model.eval()
94
+ data = torch.rand(1, 3, 224, 224)
95
+
96
+ import intel_extension_for_pytorch as ipex
97
+ model = ipex.optimize(model)
98
+
99
+ with torch.no_grad():
100
+ model(data)
101
+ ```
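+
+ A related tweak often shown for convolution-heavy models is the channels-last memory format; a hedged variant of the CPU snippet above:
+
+ ```python
+ import torch
+ import torchvision.models as models
+ import intel_extension_for_pytorch as ipex
+
+ model = models.resnet50(pretrained=True)
+ model.eval()
+ data = torch.rand(1, 3, 224, 224)
+
+ # channels_last is generally preferred for conv-heavy models on CPU.
+ model = model.to(memory_format=torch.channels_last)
+ data = data.to(memory_format=torch.channels_last)
+ model = ipex.optimize(model)
+
+ with torch.no_grad():
+     model(data)
+ ```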
102
+
103
+ ### Inference on GPU
104
+
105
+ ```python
106
+ import torch
107
+ import torchvision.models as models
108
+
109
+ model = models.resnet50(pretrained=True)
110
+ model.eval()
111
+ data = torch.rand(1, 3, 224, 224)
112
+
113
+ import intel_extension_for_pytorch as ipex
114
+ model = model.to('xpu')
115
+ data = data.to('xpu')
116
+ model = ipex.optimize(model)
117
+
118
+ with torch.no_grad():
119
+ model(data)
120
+ ```
121
+
122
+ ## License
123
+
124
+ _Apache License_, Version _2.0_. As found in [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/main/LICENSE) file.
125
+
126
+ ## Security
127
+
128
+ See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
129
+ for information on how to report a potential security issue or vulnerability.
130
+
131
+ See also: [Security Policy](SECURITY.md)
132
+
torch/index.html ADDED
@@ -0,0 +1,36 @@
1
+ <!DOCTYPE html>
2
+ <html lang="en">
3
+ <head>
4
+ <meta name="generator" content="simple503 version 0.4.0" />
5
+ <meta name="pypi:repository-version" content="1.0" />
6
+ <meta charset="UTF-8" />
7
+ <title>
8
+ Links for torch
9
+ </title>
10
+ </head>
11
+ <body>
12
+ <h1>
13
+ Links for torch
14
+ </h1>
15
+ <a href="/torch/torch-2.1.0a0+git7bcf7da-cp310-cp310-win_amd64.whl#sha256=9cfc345c2d275241b39f82c805725cecb39fe1c673dd0619f3b055ff4409ee98" data-requires-python="&gt;=3.8.0" data-dist-info-metadata="sha256=6eb6746e9e55182bde784e4e5814da5be88d4f4887fc51e4bc67112893707cb6">
16
+ torch-2.1.0a0+git7bcf7da-cp310-cp310-win_amd64.whl
17
+ </a>
18
+ <br />
19
+ <a href="/torch/torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl#sha256=baafac9ec83bc362604e767475f8933026aad85b4931c40deda01b5fd34f8fc9" data-requires-python="&gt;=3.8.0" data-dist-info-metadata="sha256=6003fefcda596bcc2ba2fd7b240ac5c14de68daa4ac348344e991d24e443a85d">
20
+ torch-2.1.0a0+cxx11.abi-cp311-cp311-win_amd64.whl
21
+ </a>
22
+ <br />
23
+ <a href="/torch/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl#sha256=891d5c300207a443d89bbb46599f8bdce604f212d759b7cb536653741cb47f8a" data-requires-python="&gt;=3.8.0" data-dist-info-metadata="sha256=9f0da27526bb4cc39aab095a5c989547f6229fa1200a3527167998558d3ad8ed">
24
+ torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl
25
+ </a>
26
+ <br />
27
+ <a href="/torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl#sha256=a9effde09a740a46ee5be2f7d9468c4e8cf56d52480839f13ed90f0a22dacb64" data-requires-python="&gt;=3.8.0" data-dist-info-metadata="sha256=e63d1af071eb68cb833c16d44b07f73075a199cc1294929ca663511675988ac2">
28
+ torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl
29
+ </a>
30
+ <br />
31
+ <a href="/torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl#sha256=74c46c5931e12f125f8652667a182f5f990d48b2e1165da112bed108f40e16f7" data-requires-python="&gt;=3.8.0" data-dist-info-metadata="sha256=e63d1af071eb68cb833c16d44b07f73075a199cc1294929ca663511675988ac2">
32
+ torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl
33
+ </a>
34
+ <br />
35
+ </body>
36
+ </html>
torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74c46c5931e12f125f8652667a182f5f990d48b2e1165da112bed108f40e16f7
3
+ size 196277535
torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl.metadata ADDED
@@ -0,0 +1,483 @@
1
+ Metadata-Version: 2.1
2
+ Name: torch
3
+ Version: 2.0.0a0+gite9ebda2
4
+ Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
5
+ Home-page: https://pytorch.org/
6
+ Download-URL: https://github.com/pytorch/pytorch/tags
7
+ Author: PyTorch Team
8
+ Author-email: [email protected]
9
+ License: BSD-3
10
+ Keywords: pytorch,machine learning
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Intended Audience :: Education
14
+ Classifier: Intended Audience :: Science/Research
15
+ Classifier: License :: OSI Approved :: BSD License
16
+ Classifier: Topic :: Scientific/Engineering
17
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
18
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
+ Classifier: Topic :: Software Development
20
+ Classifier: Topic :: Software Development :: Libraries
21
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
22
+ Classifier: Programming Language :: C++
23
+ Classifier: Programming Language :: Python :: 3
24
+ Classifier: Programming Language :: Python :: 3.8
25
+ Classifier: Programming Language :: Python :: 3.9
26
+ Classifier: Programming Language :: Python :: 3.10
27
+ Requires-Python: >=3.8.0
28
+ Description-Content-Type: text/markdown
29
+ License-File: LICENSE
30
+ License-File: NOTICE
31
+ Requires-Dist: filelock
32
+ Requires-Dist: typing-extensions
33
+ Requires-Dist: sympy
34
+ Requires-Dist: networkx
35
+ Requires-Dist: jinja2
36
+ Provides-Extra: opt-einsum
37
+ Requires-Dist: opt-einsum >=3.3 ; extra == 'opt-einsum'
38
+
39
+ ![PyTorch Logo](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/pytorch-logo-dark.png)
40
+
41
+ --------------------------------------------------------------------------------
42
+
43
+ PyTorch is a Python package that provides two high-level features:
44
+ - Tensor computation (like NumPy) with strong GPU acceleration
45
+ - Deep neural networks built on a tape-based autograd system
46
+
47
+ You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
48
+
49
+ Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/master).
50
+
51
+ <!-- toc -->
52
+
53
+ - [More About PyTorch](#more-about-pytorch)
54
+ - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
55
+ - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
56
+ - [Python First](#python-first)
57
+ - [Imperative Experiences](#imperative-experiences)
58
+ - [Fast and Lean](#fast-and-lean)
59
+ - [Extensions Without Pain](#extensions-without-pain)
60
+ - [Installation](#installation)
61
+ - [Binaries](#binaries)
62
+ - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
63
+ - [From Source](#from-source)
64
+ - [Prerequisites](#prerequisites)
65
+ - [Install Dependencies](#install-dependencies)
66
+ - [Get the PyTorch Source](#get-the-pytorch-source)
67
+ - [Install PyTorch](#install-pytorch)
68
+ - [Adjust Build Options (Optional)](#adjust-build-options-optional)
69
+ - [Docker Image](#docker-image)
70
+ - [Using pre-built images](#using-pre-built-images)
71
+ - [Building the image yourself](#building-the-image-yourself)
72
+ - [Building the Documentation](#building-the-documentation)
73
+ - [Previous Versions](#previous-versions)
74
+ - [Getting Started](#getting-started)
75
+ - [Resources](#resources)
76
+ - [Communication](#communication)
77
+ - [Releases and Contributing](#releases-and-contributing)
78
+ - [The Team](#the-team)
79
+ - [License](#license)
80
+
81
+ <!-- tocstop -->
82
+
83
+ ## More About PyTorch
84
+
85
+ At a granular level, PyTorch is a library that consists of the following components:
86
+
87
+ | Component | Description |
88
+ | ---- | --- |
89
+ | [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
90
+ | [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
91
+ | [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
92
+ | [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
93
+ | [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
94
+ | [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |
95
+
96
+ Usually, PyTorch is used either as:
97
+
98
+ - A replacement for NumPy to use the power of GPUs.
99
+ - A deep learning research platform that provides maximum flexibility and speed.
100
+
101
+ Elaborating Further:
102
+
103
+ ### A GPU-Ready Tensor Library
104
+
105
+ If you use NumPy, then you have used Tensors (a.k.a. ndarray).
106
+
107
+ ![Tensor illustration](./docs/source/_static/img/tensor_illustration.png)
108
+
109
+ PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
110
+ computation by a huge amount.
111
+
112
+ We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs
113
+ such as slicing, indexing, mathematical operations, linear algebra, reductions.
114
+ And they are fast!
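+
+ For instance (a small illustrative snippet):
+
+ ```python
+ import torch
+
+ x = torch.rand(3, 4)           # a CPU tensor
+ y = x[:, :2]                   # slicing / indexing
+ z = (x @ x.T).sum(dim=0)       # matrix multiply plus a reduction
+ w = torch.linalg.norm(x)       # a linear algebra routine
+
+ # The same code runs on an accelerator by moving the tensors,
+ # e.g. x = x.cuda() when a CUDA device is available.
+ ```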
115
+
116
+ ### Dynamic Neural Networks: Tape-Based Autograd
117
+
118
+ PyTorch has a unique way of building neural networks: using and replaying a tape recorder.
119
+
120
+ Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
121
+ One has to build a neural network and reuse the same structure again and again.
122
+ Changing the way the network behaves means that one has to start from scratch.
123
+
124
+ With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
125
+ change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
126
+ from several research papers on this topic, as well as current and past work such as
127
+ [torch-autograd](https://github.com/twitter/torch-autograd),
128
+ [autograd](https://github.com/HIPS/autograd),
129
+ [Chainer](https://chainer.org), etc.
130
+
131
+ While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
132
+ You get the best of speed and flexibility for your crazy research.
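+
+ A tiny sketch of that define-by-run behavior:
+
+ ```python
+ import torch
+
+ x = torch.randn(3, requires_grad=True)
+ y = x * 2
+ # Ordinary Python control flow takes part in the graph, which is rebuilt on every run.
+ for _ in range(3):
+     if y.norm() < 100:
+         y = y * 2
+
+ y.sum().backward()
+ print(x.grad)  # d(sum(y)) / dx
+ ```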
133
+
134
+ ![Dynamic graph](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/dynamic_graph.gif)
135
+
136
+ ### Python First
137
+
138
+ PyTorch is not a Python binding into a monolithic C++ framework.
139
+ It is built to be deeply integrated into Python.
140
+ You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
141
+ You can write your new neural network layers in Python itself, using your favorite libraries
142
+ and use packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
143
+ Our goal is to not reinvent the wheel where appropriate.
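+
+ For example, a new layer is just a Python class (an illustrative sketch):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ScaledReLU(nn.Module):
+     """A toy custom layer written directly in Python."""
+     def __init__(self, scale: float = 2.0):
+         super().__init__()
+         self.scale = scale
+
+     def forward(self, x):
+         return torch.relu(x) * self.scale
+
+ layer = ScaledReLU()
+ out = layer(torch.randn(2, 5))
+ ```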
144
+
145
+ ### Imperative Experiences
146
+
147
+ PyTorch is designed to be intuitive, linear in thought, and easy to use.
148
+ When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
149
+ When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
150
+ The stack trace points to exactly where your code was defined.
151
+ We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
152
+
153
+ ### Fast and Lean
154
+
155
+ PyTorch has minimal framework overhead. We integrate acceleration libraries
156
+ such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
157
+ At the core, its CPU and GPU Tensor and neural network backends
158
+ are mature and have been tested for years.
159
+
160
+ Hence, PyTorch is quite fast – whether you run small or large neural networks.
161
+
162
+ The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
163
+ We've written custom memory allocators for the GPU to make sure that
164
+ your deep learning models are maximally memory efficient.
165
+ This enables you to train bigger deep learning models than before.
166
+
167
+ ### Extensions Without Pain
168
+
169
+ Writing new neural network modules, or interfacing with PyTorch's Tensor API was designed to be straightforward
170
+ and with minimal abstractions.
171
+
172
+ You can write new neural network layers in Python using the torch API
173
+ [or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).
174
+
175
+ If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
176
+ No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
177
+
178
+
179
+ ## Installation
180
+
181
+ ### Binaries
182
+ Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
183
+
184
+
185
+ #### NVIDIA Jetson Platforms
186
+
187
+ Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch)
188
+
189
+ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.
190
+
191
+
192
+ ### From Source
193
+
194
+ #### Prerequisites
195
+ If you are installing from source, you will need:
196
+ - Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
197
+ - A C++17 compatible compiler, such as clang
198
+
199
+ We highly recommend installing an [Anaconda](https://www.anaconda.com/distribution/#download-section) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
200
+
201
+ If you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS)
202
+ - [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.0 or above
203
+ - [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v7 or above
204
+ - [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
205
+
206
+ Note: You could refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/pdf/cuDNN-Support-Matrix.pdf) for cuDNN versions with the various supported CUDA, CUDA driver and NVIDIA hardware
207
+
208
+ If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
209
+ Other potentially useful environment variables may be found in `setup.py`.
210
+
211
+ If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).
212
+
213
+ If you want to compile with ROCm support, install
214
+ - [AMD ROCm](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html) 4.0 and above installation
215
+ - ROCm is currently supported only for Linux systems.
216
+
217
+ If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
218
+ Other potentially useful environment variables may be found in `setup.py`.
219
+
220
+ #### Install Dependencies
221
+
222
+ **Common**
223
+
224
+ ```bash
225
+ conda install cmake ninja
226
+ # Run this command from the PyTorch directory after cloning the source code using the “Get the PyTorch Source“ section below
227
+ pip install -r requirements.txt
228
+ ```
229
+
230
+ **On Linux**
231
+
232
+ ```bash
233
+ conda install mkl mkl-include
234
+ # CUDA only: Add LAPACK support for the GPU if needed
235
+ conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
236
+ ```
237
+
238
+ **On MacOS**
239
+
240
+ ```bash
241
+ # Add this package on intel x86 processor machines only
242
+ conda install mkl mkl-include
243
+ # Add these packages if torch.distributed is needed
244
+ conda install pkg-config libuv
245
+ ```
246
+
247
+ **On Windows**
248
+
249
+ ```bash
250
+ conda install mkl mkl-include
251
+ # Add these packages if torch.distributed is needed.
252
+ # Distributed package support on Windows is a prototype feature and is subject to changes.
253
+ conda install -c conda-forge libuv=1.39
254
+ ```
255
+
256
+ #### Get the PyTorch Source
257
+ ```bash
258
+ git clone --recursive https://github.com/pytorch/pytorch
259
+ cd pytorch
260
+ # if you are updating an existing checkout
261
+ git submodule sync
262
+ git submodule update --init --recursive
263
+ ```
264
+
265
+ #### Install PyTorch
266
+ **On Linux**
267
+
268
+ If you're compiling for AMD ROCm then first run this command:
269
+ ```bash
270
+ # Only run this if you're compiling for ROCm
271
+ python tools/amd_build/build_amd.py
272
+ ```
273
+
274
+ Install PyTorch
275
+ ```bash
276
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
277
+ python setup.py develop
278
+ ```
279
+
280
+ > _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
281
+ >
282
+ > ```plaintext
283
+ > build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
284
+ > collect2: error: ld returned 1 exit status
285
+ > error: command 'g++' failed with exit status 1
286
+ > ```
287
+ >
288
+ > This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.
289
+
290
+ **On macOS**
291
+
292
+ ```bash
293
+ python3 setup.py develop
294
+ ```
295
+
296
+ **On Windows**
297
+
298
+ Choose Correct Visual Studio Version.
299
+
300
+ PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
301
+ Professional, or Community Editions. You can also install the build tools from
302
+ https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
303
+ come with Visual Studio Code by default.
304
+
305
+ If you want to build legacy python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#building-on-legacy-code-and-cuda)
306
+
307
+ **CPU-only builds**
308
+
309
+ In this mode PyTorch computations will run on your CPU, not your GPU
310
+
311
+ ```cmd
312
+ conda activate
313
+ python setup.py develop
314
+ ```
315
+
316
+ Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/master/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
317
+
318
+ **CUDA based build**
319
+
320
+ In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching
321
+
322
+ [NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
323
+ NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation again and check the corresponding checkbox.
324
+ Make sure that CUDA with Nsight Compute is installed after Visual Studio.
325
+
326
+ Currently, VS 2017 / 2019, and Ninja are supported as the generator of CMake. If `ninja.exe` is detected in `PATH`, then Ninja will be used as the default generator, otherwise, it will use VS 2017 / 2019.
327
+ <br/> If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.
328
+
329
+ Additional libraries such as
330
+ [Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/master/.ci/pytorch/win-test-helpers/installation-helpers) to install them.
331
+
332
+ You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/master/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variables configurations
333
+
334
+
335
+ ```cmd
336
+ cmd
337
+
338
+ :: Set the environment variables after you have downloaded and unzipped the mkl package,
339
+ :: else CMake would throw an error as `Could NOT find OpenMP`.
340
+ set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
341
+ set LIB={Your directory}\mkl\lib;%LIB%
342
+
343
+ :: Read the content in the previous section carefully before you proceed.
344
+ :: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
345
+ :: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
346
+ :: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
347
+ set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
348
+ set DISTUTILS_USE_SDK=1
349
+ for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%
350
+
351
+ :: [Optional] If you want to override the CUDA host compiler
352
+ set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
353
+
354
+ python setup.py develop
355
+
356
+ ```
357
+
358
+ ##### Adjust Build Options (Optional)
359
+
360
+ You can adjust the configuration of cmake variables optionally (without building first), by doing
361
+ the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done
362
+ with such a step.
363
+
364
+ On Linux
365
+ ```bash
366
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
367
+ python setup.py build --cmake-only
368
+ ccmake build # or cmake-gui build
369
+ ```
370
+
371
+ On macOS
372
+ ```bash
373
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
374
+ MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
375
+ ccmake build # or cmake-gui build
376
+ ```
377
+
378
+ ### Docker Image
379
+
380
+ #### Using pre-built images
381
+
382
+ You can also pull a pre-built docker image from Docker Hub and run with docker v19.03+
383
+
384
+ ```bash
385
+ docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
386
+ ```
387
+
388
+ Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
389
+ for multithreaded data loaders) the default shared memory segment size that the container runs with is not enough, and you
390
+ should increase shared memory size either with `--ipc=host` or `--shm-size` command line options to `nvidia-docker run`.
391
+
392
+ #### Building the image yourself
393
+
394
+ **NOTE:** Must be built with a docker version > 18.06
395
+
396
+ The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
397
+ You can pass `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
398
+ unset to use the default.
399
+ ```bash
400
+ make -f docker.Makefile
401
+ # images are tagged as docker.io/${your_docker_username}/pytorch
402
+ ```
403
+
404
+ ### Building the Documentation
405
+
406
+ To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the
407
+ readthedocs theme.
408
+
409
+ ```bash
410
+ cd docs/
411
+ pip install -r requirements.txt
412
+ ```
413
+ You can then build the documentation by running `make <format>` from the
414
+ `docs/` folder. Run `make` to get a list of all available output formats.
415
+
416
+ If you get a katex error run `npm install katex`. If it persists, try
417
+ `npm install -g katex`
418
+
419
+ > Note: if you installed `nodejs` with a different package manager (e.g.,
420
+ `conda`) then `npm` will probably install a version of `katex` that is not
421
+ compatible with your version of `nodejs` and doc builds will fail.
422
+ A combination of versions that is known to work is `[email protected]` and
423
+ `[email protected]`. To install the latter with `npm` you can run
424
+ ```npm install -g [email protected]```
425
+
426
+ ### Previous Versions
427
+
428
+ Installation instructions and binaries for previous PyTorch versions may be found
429
+ on [our website](https://pytorch.org/previous-versions).
430
+
431
+
432
+ ## Getting Started
433
+
434
+ A few pointers to get you started:
435
+ - [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
436
+ - [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
437
+ - [The API Reference](https://pytorch.org/docs/)
438
+ - [Glossary](https://github.com/pytorch/pytorch/blob/master/GLOSSARY.md)
439
+
440
+ ## Resources
441
+
442
+ * [PyTorch.org](https://pytorch.org/)
443
+ * [PyTorch Tutorials](https://pytorch.org/tutorials/)
444
+ * [PyTorch Examples](https://github.com/pytorch/examples)
445
+ * [PyTorch Models](https://pytorch.org/hub/)
446
+ * [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
447
+ * [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
448
+ * [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
449
+ * [PyTorch Twitter](https://twitter.com/PyTorch)
450
+ * [PyTorch Blog](https://pytorch.org/blog/)
451
+ * [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)
452
+
453
+ ## Communication
454
+ * Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
455
+ * GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
456
+ * Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a slack invite, please fill this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
457
+ * Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign-up here: https://eepurl.com/cbG0rv
458
+ * Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
459
+ * For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)
460
+
461
+ ## Releases and Contributing
462
+
463
+ PyTorch has a 90-day release cycle (major releases). Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
464
+
465
+ We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.
466
+
467
+ If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
468
+ Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.
469
+
470
+ To learn more about making a contribution to Pytorch, please see our [Contribution page](CONTRIBUTING.md).
471
+
472
+ ## The Team
473
+
474
+ PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.
475
+
476
+ PyTorch is currently maintained by [Adam Paszke](https://apaszke.github.io/), [Sam Gross](https://github.com/colesbury), [Soumith Chintala](http://soumith.ch) and [Gregory Chanan](https://github.com/gchanan) with major contributions coming from hundreds of talented individuals in various forms and means.
477
+ A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.
478
+
479
+ Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.
480
+
481
+ ## License
482
+
483
+ PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.
torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a9effde09a740a46ee5be2f7d9468c4e8cf56d52480839f13ed90f0a22dacb64
3
+ size 196386536
torch/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64_2.whl.metadata ADDED
@@ -0,0 +1,483 @@
1
+ Metadata-Version: 2.1
2
+ Name: torch
3
+ Version: 2.0.0a0+gite9ebda2
4
+ Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
5
+ Home-page: https://pytorch.org/
6
+ Download-URL: https://github.com/pytorch/pytorch/tags
7
+ Author: PyTorch Team
8
+ Author-email: [email protected]
9
+ License: BSD-3
10
+ Keywords: pytorch,machine learning
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Intended Audience :: Education
14
+ Classifier: Intended Audience :: Science/Research
15
+ Classifier: License :: OSI Approved :: BSD License
16
+ Classifier: Topic :: Scientific/Engineering
17
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
18
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
+ Classifier: Topic :: Software Development
20
+ Classifier: Topic :: Software Development :: Libraries
21
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
22
+ Classifier: Programming Language :: C++
23
+ Classifier: Programming Language :: Python :: 3
24
+ Classifier: Programming Language :: Python :: 3.8
25
+ Classifier: Programming Language :: Python :: 3.9
26
+ Classifier: Programming Language :: Python :: 3.10
27
+ Requires-Python: >=3.8.0
28
+ Description-Content-Type: text/markdown
29
+ License-File: LICENSE
30
+ License-File: NOTICE
31
+ Requires-Dist: filelock
32
+ Requires-Dist: typing-extensions
33
+ Requires-Dist: sympy
34
+ Requires-Dist: networkx
35
+ Requires-Dist: jinja2
36
+ Provides-Extra: opt-einsum
37
+ Requires-Dist: opt-einsum >=3.3 ; extra == 'opt-einsum'
38
+
39
+ ![PyTorch Logo](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/pytorch-logo-dark.png)
40
+
41
+ --------------------------------------------------------------------------------
42
+
43
+ PyTorch is a Python package that provides two high-level features:
44
+ - Tensor computation (like NumPy) with strong GPU acceleration
45
+ - Deep neural networks built on a tape-based autograd system
46
+
47
+ You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
48
+
49
+ Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/master).
50
+
51
+ <!-- toc -->
52
+
53
+ - [More About PyTorch](#more-about-pytorch)
54
+ - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
55
+ - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
56
+ - [Python First](#python-first)
57
+ - [Imperative Experiences](#imperative-experiences)
58
+ - [Fast and Lean](#fast-and-lean)
59
+ - [Extensions Without Pain](#extensions-without-pain)
60
+ - [Installation](#installation)
61
+ - [Binaries](#binaries)
62
+ - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
63
+ - [From Source](#from-source)
64
+ - [Prerequisites](#prerequisites)
65
+ - [Install Dependencies](#install-dependencies)
66
+ - [Get the PyTorch Source](#get-the-pytorch-source)
67
+ - [Install PyTorch](#install-pytorch)
68
+ - [Adjust Build Options (Optional)](#adjust-build-options-optional)
69
+ - [Docker Image](#docker-image)
70
+ - [Using pre-built images](#using-pre-built-images)
71
+ - [Building the image yourself](#building-the-image-yourself)
72
+ - [Building the Documentation](#building-the-documentation)
73
+ - [Previous Versions](#previous-versions)
74
+ - [Getting Started](#getting-started)
75
+ - [Resources](#resources)
76
+ - [Communication](#communication)
77
+ - [Releases and Contributing](#releases-and-contributing)
78
+ - [The Team](#the-team)
79
+ - [License](#license)
80
+
81
+ <!-- tocstop -->
82
+
83
+ ## More About PyTorch
84
+
85
+ At a granular level, PyTorch is a library that consists of the following components:
86
+
87
+ | Component | Description |
88
+ | ---- | --- |
89
+ | [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
90
+ | [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
91
+ | [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
92
+ | [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
93
+ | [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
94
+ | [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |
95
+
96
+ Usually, PyTorch is used either as:
97
+
98
+ - A replacement for NumPy to use the power of GPUs.
99
+ - A deep learning research platform that provides maximum flexibility and speed.
100
+
101
+ Elaborating Further:
102
+
103
+ ### A GPU-Ready Tensor Library
104
+
105
+ If you use NumPy, then you have used Tensors (a.k.a. ndarray).
106
+
107
+ ![Tensor illustration](./docs/source/_static/img/tensor_illustration.png)
108
+
109
+ PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
110
+ computation by a huge amount.
111
+
112
+ We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs
113
+ such as slicing, indexing, mathematical operations, linear algebra, reductions.
114
+ And they are fast!
115
+
116
+ ### Dynamic Neural Networks: Tape-Based Autograd
117
+
118
+ PyTorch has a unique way of building neural networks: using and replaying a tape recorder.
119
+
120
+ Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
121
+ One has to build a neural network and reuse the same structure again and again.
122
+ Changing the way the network behaves means that one has to start from scratch.
123
+
124
+ With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
125
+ change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
126
+ from several research papers on this topic, as well as current and past work such as
127
+ [torch-autograd](https://github.com/twitter/torch-autograd),
128
+ [autograd](https://github.com/HIPS/autograd),
129
+ [Chainer](https://chainer.org), etc.
130
+
131
+ While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
132
+ You get the best of speed and flexibility for your crazy research.
133
+
134
+ ![Dynamic graph](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/dynamic_graph.gif)
135
+
136
+ ### Python First
137
+
138
+ PyTorch is not a Python binding into a monolithic C++ framework.
139
+ It is built to be deeply integrated into Python.
140
+ You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
141
+ You can write your new neural network layers in Python itself, using your favorite libraries
142
+ and use packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
143
+ Our goal is to not reinvent the wheel where appropriate.
144
+
145
+ ### Imperative Experiences
146
+
147
+ PyTorch is designed to be intuitive, linear in thought, and easy to use.
148
+ When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
149
+ When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
150
+ The stack trace points to exactly where your code was defined.
151
+ We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
152
+
153
+ ### Fast and Lean
154
+
155
+ PyTorch has minimal framework overhead. We integrate acceleration libraries
156
+ such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
157
+ At the core, its CPU and GPU Tensor and neural network backends
158
+ are mature and have been tested for years.
159
+
160
+ Hence, PyTorch is quite fast – whether you run small or large neural networks.
161
+
162
+ The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
163
+ We've written custom memory allocators for the GPU to make sure that
164
+ your deep learning models are maximally memory efficient.
165
+ This enables you to train bigger deep learning models than before.
166
+
167
+ ### Extensions Without Pain
168
+
169
+ Writing new neural network modules, or interfacing with PyTorch's Tensor API was designed to be straightforward
170
+ and with minimal abstractions.
171
+
172
+ You can write new neural network layers in Python using the torch API
173
+ [or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).
174
+
175
+ If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
176
+ No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
177
+
178
+
179
+ ## Installation
180
+
181
+ ### Binaries
182
+ Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
183
+
184
+
185
+ #### NVIDIA Jetson Platforms
186
+
187
+ Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch)
188
+
189
+ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.
190
+
191
+
192
+ ### From Source
193
+
194
+ #### Prerequisites
195
+ If you are installing from source, you will need:
196
+ - Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
197
+ - A C++17 compatible compiler, such as clang
198
+
199
+ We highly recommend installing an [Anaconda](https://www.anaconda.com/distribution/#download-section) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
200
+
201
+ If you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS)
202
+ - [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.0 or above
203
+ - [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v7 or above
204
+ - [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
205
+
206
+ Note: You could refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/pdf/cuDNN-Support-Matrix.pdf) for cuDNN versions with the various supported CUDA, CUDA driver and NVIDIA hardware
207
+
208
+ If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
209
+ Other potentially useful environment variables may be found in `setup.py`.
210
+
211
+ If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).
212
+
213
+ If you want to compile with ROCm support, install
214
+ - [AMD ROCm](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html) 4.0 and above installation
215
+ - ROCm is currently supported only for Linux systems.
216
+
217
+ If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
218
+ Other potentially useful environment variables may be found in `setup.py`.
219
+
220
+ #### Install Dependencies
221
+
222
+ **Common**
223
+
224
+ ```bash
225
+ conda install cmake ninja
226
+ # Run this command from the PyTorch directory after cloning the source code using the “Get the PyTorch Source“ section below
227
+ pip install -r requirements.txt
228
+ ```
229
+
230
+ **On Linux**
231
+
232
+ ```bash
233
+ conda install mkl mkl-include
234
+ # CUDA only: Add LAPACK support for the GPU if needed
235
+ conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
236
+ ```
237
+
238
+ **On MacOS**
239
+
240
+ ```bash
241
+ # Add this package on intel x86 processor machines only
242
+ conda install mkl mkl-include
243
+ # Add these packages if torch.distributed is needed
244
+ conda install pkg-config libuv
245
+ ```
246
+
247
+ **On Windows**
248
+
249
+ ```bash
250
+ conda install mkl mkl-include
251
+ # Add these packages if torch.distributed is needed.
252
+ # Distributed package support on Windows is a prototype feature and is subject to changes.
253
+ conda install -c conda-forge libuv=1.39
254
+ ```
255
+
256
+ #### Get the PyTorch Source
257
+ ```bash
258
+ git clone --recursive https://github.com/pytorch/pytorch
259
+ cd pytorch
260
+ # if you are updating an existing checkout
261
+ git submodule sync
262
+ git submodule update --init --recursive
263
+ ```
264
+
265
+ #### Install PyTorch
266
+ **On Linux**
267
+
268
+ If you're compiling for AMD ROCm then first run this command:
269
+ ```bash
270
+ # Only run this if you're compiling for ROCm
271
+ python tools/amd_build/build_amd.py
272
+ ```
273
+
274
+ Install PyTorch
275
+ ```bash
276
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
277
+ python setup.py develop
278
+ ```
279
+
280
+ > _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
281
+ >
282
+ > ```plaintext
283
+ > build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
284
+ > collect2: error: ld returned 1 exit status
285
+ > error: command 'g++' failed with exit status 1
286
+ > ```
287
+ >
288
+ > This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.
289
+
290
+ **On macOS**
291
+
292
+ ```bash
293
+ python3 setup.py develop
294
+ ```
295
+
296
+ **On Windows**
297
+
298
+ Choose Correct Visual Studio Version.
299
+
300
+ PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
301
+ Professional, or Community Editions. You can also install the build tools from
302
+ https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
303
+ come with Visual Studio Code by default.
304
+
305
+ If you want to build legacy python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#building-on-legacy-code-and-cuda)
306
+
307
+ **CPU-only builds**
308
+
309
+ In this mode PyTorch computations will run on your CPU, not your GPU
310
+
311
+ ```cmd
312
+ conda activate
313
+ python setup.py develop
314
+ ```
315
+
316
+ Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/master/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
317
+
+ **CUDA based build**
+
+ In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.
+
+ [NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
+ NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox.
+ Make sure that CUDA with Nsight Compute is installed after Visual Studio.
+
+ Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If `ninja.exe` is detected in `PATH`, then Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
+ <br/> If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.
+
+ Additional libraries such as
+ [Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/master/.ci/pytorch/win-test-helpers/installation-helpers) to install them.
+
+ You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/master/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variable configurations.
+
+
+ ```cmd
+ cmd
+
+ :: Set the environment variables after you have downloaded and unzipped the mkl package,
+ :: else CMake would throw an error as `Could NOT find OpenMP`.
+ set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
+ set LIB={Your directory}\mkl\lib;%LIB%
+
+ :: Read the content in the previous section carefully before you proceed.
+ :: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
+ :: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
+ :: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
+ set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
+ set DISTUTILS_USE_SDK=1
+ for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%
+
+ :: [Optional] If you want to override the CUDA host compiler
+ set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
+
+ python setup.py develop
+
+ ```
+
+ ##### Adjust Build Options (Optional)
+
+ You can optionally adjust the configuration of CMake variables (without building first) by doing
+ the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done
+ this way.
+
+ On Linux
+ ```bash
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
+ python setup.py build --cmake-only
+ ccmake build  # or cmake-gui build
+ ```
+
+ On macOS
+ ```bash
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
+ MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
+ ccmake build  # or cmake-gui build
+ ```
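+
+ Only Linux and macOS are shown above; as a rough sketch, the same two-step flow should also work from a Windows command prompt set up as in the Windows build section (this is an assumption, not an officially documented variant):
+
+ ```cmd
+ :: Configure only, then inspect/edit the cached CMake variables in the GUI
+ python setup.py build --cmake-only
+ cmake-gui build
+ ```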
+
+ ### Docker Image
+
+ #### Using pre-built images
+
+ You can also pull a pre-built docker image from Docker Hub and run it with docker v19.03+:
+
+ ```bash
+ docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
+ ```
+
+ Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
+ for multithreaded data loaders) the default shared memory segment size the container runs with is not enough, and you
+ should increase the shared memory size with either the `--ipc=host` or `--shm-size` command-line option to `nvidia-docker run`.
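+
+ For example, a minimal sketch using `--shm-size` (the 8 GB value is only an illustration; size it to your data-loader needs):
+
+ ```bash
+ # Same pre-built image as above, but with an enlarged shared-memory segment instead of --ipc=host
+ docker run --gpus all --rm -ti --shm-size=8g pytorch/pytorch:latest
+ ```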
+
+ #### Building the image yourself
+
+ **NOTE:** Must be built with a docker version > 18.06
+
+ The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
+ You can pass the `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
+ unset to use the default.
+ ```bash
+ make -f docker.Makefile
+ # images are tagged as docker.io/${your_docker_username}/pytorch
+ ```
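+
+ For example, to pin the Python version (3.10 here is just an illustration; any version supported by Miniconda should work the same way):
+
+ ```bash
+ # Build the image with a specific Python version inside Miniconda
+ make -f docker.Makefile PYTHON_VERSION=3.10
+ ```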
+
+ ### Building the Documentation
+
+ To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the
+ readthedocs theme.
+
+ ```bash
+ cd docs/
+ pip install -r requirements.txt
+ ```
+ You can then build the documentation by running `make <format>` from the
+ `docs/` folder. Run `make` to get a list of all available output formats.
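+
+ For example, to build the HTML docs (assuming the requirements above are installed; `html` is one of the formats `make` lists):
+
+ ```bash
+ cd docs/
+ make html   # output is typically written under build/html
+ ```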
+
+ If you get a katex error, run `npm install katex`. If it persists, try
+ `npm install -g katex`.
+
+ > Note: if you installed `nodejs` with a different package manager (e.g.,
+ > `conda`) then `npm` will probably install a version of `katex` that is not
+ > compatible with your version of `nodejs`, and doc builds will fail.
+ > A combination of versions that is known to work is `[email protected]` and
+ > `[email protected]`. To install the latter with `npm` you can run
+ > ```npm install -g [email protected]```
+
+ ### Previous Versions
+
+ Installation instructions and binaries for previous PyTorch versions may be found
+ on [our website](https://pytorch.org/previous-versions).
+
+
+ ## Getting Started
+
+ Some pointers to get you started:
+ - [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
+ - [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
+ - [The API Reference](https://pytorch.org/docs/)
+ - [Glossary](https://github.com/pytorch/pytorch/blob/master/GLOSSARY.md)
+
+ ## Resources
+
+ * [PyTorch.org](https://pytorch.org/)
+ * [PyTorch Tutorials](https://pytorch.org/tutorials/)
+ * [PyTorch Examples](https://github.com/pytorch/examples)
+ * [PyTorch Models](https://pytorch.org/hub/)
+ * [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
+ * [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
+ * [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
+ * [PyTorch Twitter](https://twitter.com/PyTorch)
+ * [PyTorch Blog](https://pytorch.org/blog/)
+ * [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)
+
+ ## Communication
+ * Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
+ * GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
+ * Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is the [PyTorch Forums](https://discuss.pytorch.org). If you need a Slack invite, please fill out this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
+ * Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv
+ * Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
+ * For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)
+
+ ## Releases and Contributing
+
+ PyTorch has a 90-day release cycle (major releases). Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
+
+ We appreciate all contributions. If you are planning to contribute bug-fixes, please do so without any further discussion.
+
+ If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
+ Sending a PR without discussion might result in a rejected PR, because we might be taking the core in a different direction than you are aware of.
+
+ To learn more about making a contribution to PyTorch, please see our [Contribution page](CONTRIBUTING.md).
+
+ ## The Team
+
+ PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.
+
+ PyTorch is currently maintained by [Adam Paszke](https://apaszke.github.io/), [Sam Gross](https://github.com/colesbury), [Soumith Chintala](http://soumith.ch) and [Gregory Chanan](https://github.com/gchanan), with major contributions coming from hundreds of talented individuals in various forms and means.
+ A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.
+
+ Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch), which has the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.
+
+ ## License
+
+ PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.