Yolo-v7: Optimized for Mobile Deployment

Real-time object detection optimized for mobile and edge

YoloV7 is a machine learning model that predicts bounding boxes and classes of objects in an image.

This model is an implementation of Yolo-v7 found here.

More details on model performance across various devices can be found here.

Model Details

  • Model Type: Object detection
  • Model Stats:
    • Model checkpoint: YoloV7 Tiny
    • Input resolution: 640x640
    • Number of parameters: 6.39M
    • Model size: 24.4 MB
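
The checkpoint expects a 640x640 RGB input. Below is a minimal sketch of running the ONNX variant with ONNX Runtime on a host machine; the file name `yolov7.onnx`, the sample image `street.jpg`, the NCHW float32 input layout, and the CPU execution provider are assumptions for illustration, not part of this card, and the raw outputs still need YOLO decoding and non-maximum suppression.

```python
# Minimal sketch: run an exported ONNX YOLOv7 model with ONNX Runtime.
# Assumptions (not from this card): the export is named "yolov7.onnx" and
# takes a single NCHW float32 tensor of shape (1, 3, 640, 640).
import numpy as np
import onnxruntime as ort
from PIL import Image

INPUT_SIZE = 640  # matches the 640x640 input resolution listed above

def preprocess(path: str) -> np.ndarray:
    """Resize to 640x640, scale pixels to [0, 1], and convert HWC RGB -> NCHW."""
    img = Image.open(path).convert("RGB").resize((INPUT_SIZE, INPUT_SIZE))
    x = np.asarray(img, dtype=np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None, ...]  # (1, 3, 640, 640)

session = ort.InferenceSession("yolov7.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

outputs = session.run(None, {input_name: preprocess("street.jpg")})
for i, out in enumerate(outputs):
    print(f"output[{i}] shape: {out.shape}")
# The raw predictions still need box decoding, confidence filtering,
# and non-maximum suppression before they become final detections.
```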
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Yolo-v7 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 17.62 | 1 - 44 | FP16 | NPU | -- |
| Yolo-v7 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 13.89 | 2 - 46 | FP16 | NPU | -- |
| Yolo-v7 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 12.802 | 1 - 37 | FP16 | NPU | -- |
| Yolo-v7 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 6.063 | 5 - 23 | FP16 | NPU | -- |
| Yolo-v7 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 10.083 | 6 - 65 | FP16 | NPU | -- |
| Yolo-v7 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 10.755 | 0 - 33 | FP16 | NPU | -- |
| Yolo-v7 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 5.897 | 5 - 50 | FP16 | NPU | -- |
| Yolo-v7 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 9.454 | 5 - 51 | FP16 | NPU | -- |
| Yolo-v7 | SA7255P ADP | SA7255P | TFLITE | 109.65 | 1 - 23 | FP16 | NPU | -- |
| Yolo-v7 | SA7255P ADP | SA7255P | QNN | 98.564 | 0 - 8 | FP16 | NPU | -- |
| Yolo-v7 | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 17.508 | 1 - 15 | FP16 | NPU | -- |
| Yolo-v7 | SA8255 (Proxy) | SA8255P Proxy | QNN | 8.719 | 5 - 8 | FP16 | NPU | -- |
| Yolo-v7 | SA8295P ADP | SA8295P | TFLITE | 22.193 | 1 - 30 | FP16 | NPU | -- |
| Yolo-v7 | SA8295P ADP | SA8295P | QNN | 13.676 | 0 - 15 | FP16 | NPU | -- |
| Yolo-v7 | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 17.609 | 1 - 14 | FP16 | NPU | -- |
| Yolo-v7 | SA8650 (Proxy) | SA8650P Proxy | QNN | 8.692 | 5 - 7 | FP16 | NPU | -- |
| Yolo-v7 | SA8775P ADP | SA8775P | TFLITE | 22.185 | 0 - 24 | FP16 | NPU | -- |
| Yolo-v7 | SA8775P ADP | SA8775P | QNN | 13.053 | 1 - 10 | FP16 | NPU | -- |
| Yolo-v7 | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 109.65 | 1 - 23 | FP16 | NPU | -- |
| Yolo-v7 | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 98.564 | 0 - 8 | FP16 | NPU | -- |
| Yolo-v7 | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 17.515 | 0 - 13 | FP16 | NPU | -- |
| Yolo-v7 | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 8.697 | 5 - 7 | FP16 | NPU | -- |
| Yolo-v7 | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 22.185 | 0 - 24 | FP16 | NPU | -- |
| Yolo-v7 | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 13.053 | 1 - 10 | FP16 | NPU | -- |
| Yolo-v7 | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 20.436 | 1 - 38 | FP16 | NPU | -- |
| Yolo-v7 | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 10.483 | 5 - 36 | FP16 | NPU | -- |
| Yolo-v7 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 9.44 | 5 - 5 | FP16 | NPU | -- |
| Yolo-v7 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 13.944 | 9 - 9 | FP16 | NPU | -- |
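
The latencies above were measured on Qualcomm hardware through the listed runtimes and will not be reproducible on a development host. As a rough sanity check of an ONNX export on a local CPU, a simple timing loop such as the hypothetical sketch below can be used; the file name `yolov7.onnx`, the input shape, and the warm-up/iteration counts are assumptions, and the resulting numbers will be far higher than the on-device NPU figures in the table.

```python
# Rough host-side latency check for an ONNX export (assumed file name).
# This measures CPU inference only and will NOT match the on-device NPU
# figures reported in the table above.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov7.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Warm-up runs so one-time initialization cost is excluded from the average.
for _ in range(5):
    session.run(None, {input_name: dummy})

runs = 50
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: dummy})
elapsed_ms = (time.perf_counter() - start) * 1000 / runs
print(f"average host CPU latency: {elapsed_ms:.2f} ms")
```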

License

  • The license for the original implementation of Yolo-v7 can be found here.
  • The license for the compiled assets for on-device deployment can be found here.

Usage and Limitations

This model may not be used for or in connection with any of the following applications:

  • Accessing essential private and public services and benefits;
  • Administration of justice and democratic processes;
  • Assessing or recognizing the emotional state of a person;
  • Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
  • Education and vocational training;
  • Employment and workers management;
  • Exploitation of the vulnerabilities of persons resulting in harmful behavior;
  • General purpose social scoring;
  • Law enforcement;
  • Management and operation of critical infrastructure;
  • Migration, asylum and border control management;
  • Predictive policing;
  • Real-time remote biometric identification in public spaces;
  • Recommender systems of social media platforms;
  • Scraping of facial images (from the internet or otherwise); and/or
  • Subliminal manipulation.