NNAPIFlags for Specifying CPU or GPU Inference Do Not Take Effect #2
Comments
@BeIanChang I have not explored the
In the Android Studio Logcat I got the following information while loading the model:
So I guess that most of the model is simply not supported by NNAPI.
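One way to confirm this guess is to watch Logcat while the session is created: ONNX Runtime's NNAPI execution provider logs which parts of the graph it takes and which fall back to the CPU provider. A minimal check from a connected device or emulator (the tag filter is an assumption — widen it if your build logs under a different tag):

```shell
# Watch for NNAPI / ONNX Runtime messages while the app loads the model.
# Requires adb and a connected device; filter pattern is a guess.
adb logcat | grep -iE "nnapi|onnxruntime"
```

If most operators are reported as unsupported, NNAPI flags such as CPU_DISABLED will have little visible effect, since the bulk of the model never runs on NNAPI in the first place.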
Steps to Reproduce:
I'm trying to configure the ONNX Runtime session within the DepthAnything class to use NNAPI with specific flags (e.g., USE_FP16 and CPU_DISABLED). Here's the code snippet for setting up the session options:
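The snippet itself was not captured with the page. A minimal sketch of the configuration the issue describes, using the ONNX Runtime Java/Android API (the class name and model-loading path are illustrative, not from the original report):

```java
import java.util.EnumSet;

import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;
import ai.onnxruntime.providers.NNAPIFlags;

public class DepthAnythingSession {
    // Build a session that requests the NNAPI execution provider with
    // FP16 relaxation enabled and the NNAPI CPU implementation disabled,
    // matching the flags named in the issue (USE_FP16, CPU_DISABLED).
    public static OrtSession create(byte[] modelBytes) throws OrtException {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        OrtSession.SessionOptions options = new OrtSession.SessionOptions();
        options.addNnapi(EnumSet.of(NNAPIFlags.USE_FP16, NNAPIFlags.CPU_DISABLED));
        return env.createSession(modelBytes, options);
    }
}
```

Note that these flags only govern behavior *within* the NNAPI execution provider: operators NNAPI cannot handle are still assigned to ONNX Runtime's own CPU provider, which these flags do not affect.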
Expected Behavior:
I expected the model inference to run using NNAPI with FP16 precision and without using the CPU.
Actual Behavior:
The inference seems to run as if these options were not applied at all. The performance and behavior do not change regardless of the flags set.