Add fp16 precision support #222
We have a plan to add a few fp16 models specific to image classification.
These would also be used for NPU testing: #220
@philloooo, @mingmingtasd has added 3 fp16 models for image classification: #226
Thanks! I've tested it on Mac. Some models are blocked by webmachinelearning/webnn#678, but image classification with ResNet 50 V2 works, and I can confirm it's using the Apple Neural Engine. I noticed that if I select the GPU device, the model is different even when the precision is the same. Can we use the same models for GPU and NPU? That would make performance comparisons easier.
We should allow including softmax for both GPU and NPU: https://github.com/webmachinelearning/webnn-samples/pull/226/files#diff-13f8eda69b8dc85839ae4e046882bfbbddd80a5928e8bd3ea6ff7b71e4213f68R124
@philloooo, #237 fixes the inconsistent softmax support. Now all the fp16 models (MobileNetV2, ResNet 50 V1, EfficientNet) share the same models between GPU and NPU.
Can we close this issue?
Hi,
Can we add fp16 versions of the models (with all inputs and weights in fp16)?
On Mac, CoreML only uses the NPU when the data is fp16. It would be valuable to have WebNN samples with fp16 precision to compare performance.
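The request above, making all inputs and weights fp16, can be sketched at the tensor level. This is a hypothetical NumPy illustration of the precision change only (`to_fp16` is an invented helper, not part of the samples, which convert whole model files):

```python
import numpy as np

def to_fp16(weights: np.ndarray) -> np.ndarray:
    """Cast a float32 weight tensor to float16 (half precision)."""
    return weights.astype(np.float16)

# Example fp32 weights; 65504.0 is the largest finite fp16 value,
# so it survives the cast without overflowing to infinity.
fp32_weights = np.array([0.1234567, 1e-5, 65504.0], dtype=np.float32)
fp16_weights = to_fp16(fp32_weights)

print(fp16_weights.dtype)   # float16
print(fp16_weights.nbytes)  # half the storage of the fp32 tensor
```

Halving the element size roughly halves model download size and memory traffic, which is why fp16 variants are attractive for NPU backends, at the cost of reduced precision (fp16 has about 3 significant decimal digits).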