shreyajn committed
Commit 584ccbd
1 Parent(s): 848412a

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +16 -33
README.md CHANGED
@@ -34,8 +34,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 73.185 ms | 5 - 8 MB | FP16 | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 67.019 ms | 0 - 54 MB | FP16 | NPU | [Real-ESRGAN-x4plus.so](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 65.101 ms | 4 - 6 MB | FP16 | NPU | [Real-ESRGAN-x4plus.tflite](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 70.426 ms | 0 - 97 MB | FP16 | NPU | [Real-ESRGAN-x4plus.so](https://huggingface.co/qualcomm/Real-ESRGAN-x4plus/blob/main/Real-ESRGAN-x4plus.so)
 
 
 
@@ -97,10 +97,10 @@ python -m qai_hub_models.models.real_esrgan_x4plus.export
 ```
 Profile Job summary of Real-ESRGAN-x4plus
 --------------------------------------------------
-Device: SA8255 (Proxy) (13)
-Estimated Inference Time: 70.75 ms
-Estimated Peak Memory Range: 0.14-54.23 MB
-Compute Units: NPU (1031) | Total (1031)
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 65.45 ms
+Estimated Peak Memory Range: 0.20-0.20 MB
+Compute Units: NPU (1029) | Total (1029)
 
 
 ```
@@ -121,29 +121,13 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.
 import torch
 
 import qai_hub as hub
-from qai_hub_models.models.real_esrgan_x4plus import Model
+from qai_hub_models.models.real_esrgan_x4plus import
 
 # Load the model
-torch_model = Model.from_pretrained()
 
 # Device
 device = hub.Device("Samsung Galaxy S23")
 
-# Trace model
-input_shape = torch_model.get_input_spec()
-sample_inputs = torch_model.sample_inputs()
-
-pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
-# Compile model on a specific device
-compile_job = hub.submit_compile_job(
-    model=pt_model,
-    device=device,
-    input_specs=torch_model.get_input_spec(),
-)
-
-# Get target model to run on-device
-target_model = compile_job.get_target_model()
 
 ```
 
@@ -156,10 +140,10 @@ provisioned in the cloud. Once the job is submitted, you can navigate to a
 provided job URL to view a variety of on-device performance metrics.
 ```python
 profile_job = hub.submit_profile_job(
-    model=target_model,
-    device=device,
-)
-
+    model=target_model,
+    device=device,
+)
+
 ```
 
 Step 3: **Verify on-device accuracy**
@@ -169,12 +153,11 @@ on sample input data on the same cloud hosted device.
 ```python
 input_data = torch_model.sample_inputs()
 inference_job = hub.submit_inference_job(
-    model=target_model,
-    device=device,
-    inputs=input_data,
-)
-
-on_device_output = inference_job.download_output_data()
+    model=target_model,
+    device=device,
+    inputs=input_data,
+)
+on_device_output = inference_job.download_output_data()
 
 ```
 With the output of the model, you can compute like PSNR, relative errors or
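
The README's closing step compares on-device output against the torch reference using metrics such as PSNR. As an illustration only (not part of the diff above), a minimal PSNR computation could look like the following sketch; the `psnr` helper and the toy pixel lists are hypothetical, standing in for the flattened reference and on-device output arrays:

```python
import math

def psnr(reference, output, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    pixel sequences: 10 * log10(peak^2 / MSE)."""
    mse = sum((r - o) ** 2 for r, o in zip(reference, output)) / len(reference)
    if mse == 0:
        return float("inf")  # identical outputs
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy example: a 4-pixel "image" and a slightly perturbed copy (MSE = 1)
ref = [10, 20, 30, 40]
out = [11, 19, 31, 39]
print(round(psnr(ref, out), 2))  # → 48.13
```

Higher values indicate closer agreement; in practice the inputs would be the numpy arrays returned by the inference job rather than these toy lists.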