Upload README.md with huggingface_hub
README.md CHANGED
@@ -36,8 +36,8 @@ More details on model performance across various devices, can be found

 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.225 ms | 0 - 1 MB | FP16 | NPU | [SqueezeNet-1_1.tflite](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.278 ms | 0 - 51 MB | FP16 | NPU | [SqueezeNet-1_1.so](https://huggingface.co/qualcomm/SqueezeNet-1_1/blob/main/SqueezeNet-1_1.so)


 ## Installation
@@ -97,16 +97,16 @@ python -m qai_hub_models.models.squeezenet1_1.export
 ```
 Profile Job summary of SqueezeNet-1_1
 --------------------------------------------------
-Device: Samsung Galaxy
-Estimated Inference Time: 0.
-Estimated Peak Memory Range: 0.
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 0.18 ms
+Estimated Peak Memory Range: 0.01-20.67 MB
 Compute Units: NPU (39) | Total (39)

 Profile Job summary of SqueezeNet-1_1
 --------------------------------------------------
-Device: Samsung Galaxy
-Estimated Inference Time: 0.
-Estimated Peak Memory Range: 0.
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 0.20 ms
+Estimated Peak Memory Range: 0.59-27.09 MB
 Compute Units: NPU (69) | Total (69)


@@ -226,7 +226,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of SqueezeNet-1_1 can be found
   [here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})

 ## References
 * [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360)
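
The "Profile Job summary" blocks updated in the second hunk are console output from the export script named in that hunk's header. As a minimal sketch of how one might regenerate them, assuming only the `qai_hub_models` package name and the module path quoted in the diff, with a Qualcomm AI Hub account already configured per the README's Installation section (device-selection flags are omitted):

```
# Install the Qualcomm AI Hub Models package (the step covered by the
# README's "## Installation" section).
pip install qai_hub_models

# Command quoted in the second hunk's header: submits compile/profile jobs
# for SqueezeNet-1_1 and prints "Profile Job summary" blocks like the ones
# shown above. Exact timings and the default target device will vary with
# the package version and account setup.
python -m qai_hub_models.models.squeezenet1_1.export
```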