shreyajn committed
Commit 354edef
1 Parent(s): f07a4b5

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +65 -33
README.md CHANGED
@@ -37,10 +37,10 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.786 ms | 0 - 2 MB | FP16 | NPU | [MediaPipeFaceDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceDetector.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.31 ms | 0 - 2 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceLandmarkDetector.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.832 ms | 1 - 19 MB | FP16 | NPU | [MediaPipeFaceDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceDetector.so)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.392 ms | 0 - 91 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceLandmarkDetector.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.577 ms | 0 - 1 MB | FP16 | NPU | [MediaPipeFaceDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceDetector.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.207 ms | 0 - 13 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceLandmarkDetector.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.638 ms | 0 - 5 MB | FP16 | NPU | [MediaPipeFaceDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceDetector.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.285 ms | 2 - 10 MB | FP16 | NPU | [MediaPipeFaceLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Face-Detection/blob/main/MediaPipeFaceLandmarkDetector.so)
 
 
 
@@ -101,17 +101,17 @@ python -m qai_hub_models.models.mediapipe_face.export
 ```
 Profile Job summary of MediaPipeFaceDetector
 --------------------------------------------------
-Device: SA8255 (Proxy) (13)
-Estimated Inference Time: 0.84 ms
-Estimated Peak Memory Range: 0.77-7.22 MB
-Compute Units: NPU (148) | Total (148)
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 0.76 ms
+Estimated Peak Memory Range: 0.75-0.75 MB
+Compute Units: NPU (146) | Total (146)
 
 Profile Job summary of MediaPipeFaceLandmarkDetector
 --------------------------------------------------
-Device: SA8255 (Proxy) (13)
-Estimated Inference Time: 0.39 ms
-Estimated Peak Memory Range: 0.44-86.25 MB
-Compute Units: NPU (107) | Total (107)
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 0.37 ms
+Estimated Peak Memory Range: 0.42-0.42 MB
+Compute Units: NPU (105) | Total (105)
 
 
 ```
@@ -132,29 +132,49 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.
 import torch
 
 import qai_hub as hub
-from qai_hub_models.models.mediapipe_face import Model
+from qai_hub_models.models.mediapipe_face import MediaPipeFaceDetector, MediaPipeFaceLandmarkDetector
 
 # Load the model
-torch_model = Model.from_pretrained()
+face_detector_model = MediaPipeFaceDetector.from_pretrained()
+
+face_landmark_detector_model = MediaPipeFaceLandmarkDetector.from_pretrained()
+
 
 # Device
 device = hub.Device("Samsung Galaxy S23")
 
+
 # Trace model
-input_shape = torch_model.get_input_spec()
-sample_inputs = torch_model.sample_inputs()
+face_detector_input_shape = face_detector_model.get_input_spec()
+face_detector_sample_inputs = face_detector_model.sample_inputs()
 
-pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
+traced_face_detector_model = torch.jit.trace(face_detector_model, [torch.tensor(data[0]) for _, data in face_detector_sample_inputs.items()])
 
 # Compile model on a specific device
-compile_job = hub.submit_compile_job(
-    model=pt_model,
+face_detector_compile_job = hub.submit_compile_job(
+    model=traced_face_detector_model,
     device=device,
-    input_specs=torch_model.get_input_spec(),
+    input_specs=face_detector_model.get_input_spec(),
 )
 
 # Get target model to run on-device
-target_model = compile_job.get_target_model()
+face_detector_target_model = face_detector_compile_job.get_target_model()
+
+# Trace model
+face_landmark_detector_input_shape = face_landmark_detector_model.get_input_spec()
+face_landmark_detector_sample_inputs = face_landmark_detector_model.sample_inputs()
+
+traced_face_landmark_detector_model = torch.jit.trace(face_landmark_detector_model, [torch.tensor(data[0]) for _, data in face_landmark_detector_sample_inputs.items()])
+
+# Compile model on a specific device
+face_landmark_detector_compile_job = hub.submit_compile_job(
+    model=traced_face_landmark_detector_model,
+    device=device,
+    input_specs=face_landmark_detector_model.get_input_spec(),
+)
+
+# Get target model to run on-device
+face_landmark_detector_target_model = face_landmark_detector_compile_job.get_target_model()
 
 ```
 
@@ -166,10 +186,16 @@ After compiling models from step 1. Models can be profiled model on-device using
 provisioned in the cloud. Once the job is submitted, you can navigate to a
 provided job URL to view a variety of on-device performance metrics.
 ```python
-profile_job = hub.submit_profile_job(
-    model=target_model,
-    device=device,
-)
+
+face_detector_profile_job = hub.submit_profile_job(
+    model=face_detector_target_model,
+    device=device,
+)
+
+face_landmark_detector_profile_job = hub.submit_profile_job(
+    model=face_landmark_detector_target_model,
+    device=device,
+)
 
 ```
 
@@ -178,14 +204,20 @@ Step 3: **Verify on-device accuracy**
 To verify the accuracy of the model on-device, you can run on-device inference
 on sample input data on the same cloud hosted device.
 ```python
-input_data = torch_model.sample_inputs()
-inference_job = hub.submit_inference_job(
-    model=target_model,
-    device=device,
-    inputs=input_data,
-)
-
-on_device_output = inference_job.download_output_data()
+face_detector_input_data = face_detector_model.sample_inputs()
+face_detector_inference_job = hub.submit_inference_job(
+    model=face_detector_target_model,
+    device=device,
+    inputs=face_detector_input_data,
+)
+face_detector_inference_job.download_output_data()
+face_landmark_detector_input_data = face_landmark_detector_model.sample_inputs()
+face_landmark_detector_inference_job = hub.submit_inference_job(
+    model=face_landmark_detector_target_model,
+    device=device,
+    inputs=face_landmark_detector_input_data,
+)
+face_landmark_detector_inference_job.download_output_data()
 
 ```
 With the output of the model, you can compute metrics like PSNR and relative errors, or spot check the output against the expected output.
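For example, PSNR and relative error can be computed directly from the downloaded outputs with NumPy. The sketch below is illustrative only: the `compare_outputs` helper and the synthetic array shape are assumptions, not part of qai_hub_models; in practice, pass the arrays returned by `download_output_data()` and the matching outputs from running the PyTorch model locally.

```python
import numpy as np

def compare_outputs(device_out, reference_out):
    """Return (PSNR in dB, relative L2 error) between two output arrays."""
    a = np.asarray(device_out, dtype=np.float32).ravel()
    b = np.asarray(reference_out, dtype=np.float32).ravel()
    mse = float(np.mean((a - b) ** 2))
    rel_err = float(np.linalg.norm(a - b) / (np.linalg.norm(b) + 1e-12))
    peak = float(np.abs(b).max())
    psnr = float("inf") if mse == 0.0 else 20.0 * np.log10(peak / np.sqrt(mse))
    return psnr, rel_err

# Synthetic stand-ins for the real outputs (the shape here is hypothetical):
reference = np.random.rand(1, 896, 16).astype(np.float32)
on_device = reference + np.random.normal(0.0, 1e-3, reference.shape).astype(np.float32)

psnr, rel_err = compare_outputs(on_device, reference)
print(f"PSNR: {psnr:.2f} dB | relative error: {rel_err:.2e}")
```

A high PSNR (roughly 30 dB or more) together with a small relative error generally indicates that the on-device model closely matches the local reference.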