
Upgraded and pre-compiled TFLite binaries for both iOS and Android #85

Merged: 5 commits merged into mrousavy:main on Sep 9, 2024

Conversation

PStrakendal
Contributor

The current version (1.3.0) has some known issues:

  • The iOS version does not build on x86 simulators
  • The iOS binaries are built locally and committed to the repo
  • The Android binaries are extracted from the .aar file in the Maven repository and are likely not using the same version of TFLite as iOS.

In this PR I suggest using the iOS binaries from the TensorFlowLiteC pod instead of building them and committing them to the repo. The binaries in the TensorFlowLiteC pod also support building on x86 simulators.

In this PR I have also upgraded the TFLite version:

  • Android to 2.16.1
  • iOS to 2.17.0

Version 2.17.0 has not been published to the Maven repository yet (tensorflow/tensorflow#72157), and there is no 2.16.1 version published for TensorFlowLiteC, so at the moment the Android and iOS versions are still not the same.

Some required headers are not included in .aar files newer than 2.12.0, which is why the tensorflow submodule is still needed.

This is an alternative to another PR, but instead of manually extracting the binaries from TensorFlowLiteC, I suggest using them directly from the podspec. I also think it is better to use released versions of TensorFlowLiteC rather than nightly builds.

I have tested building and running the example app on Android (Samsung Galaxy S21) and on an iPhone SE (2022), and I have tested building for iOS simulators on both an M1 MacBook Pro and an Intel MacBook Pro.

@mrousavy
Owner

wow, this is really cool!!! I'll check that out internally with my team, thank you for your PR!! 🥳

@mrousavy
Owner

I kinda hate that we have to use two (or actually even three) different sources of truth for a purely cross-platform library, i.e. the headers are one source of truth, then we either have them implemented as a 2.17.0 pod, or a 2.16.1 maven dep...

I was hoping to build everything from source, off of a stable release to have a single source of truth. But this is definitely a better option than what we have now.

@mrousavy
Owner

hm, iOS build failed...

@PStrakendal
Contributor Author

hm, iOS build failed...

This commit should fix that: 75ae18a

@DZamataev

@mrousavy please review 🙏

@mrousavy mrousavy merged commit 35d7cbd into mrousavy:main Sep 9, 2024
7 checks passed
@DZamataev

Could you please make a new release version 1.4.0 with this PR merged in?

@d3vhound

@mrousavy would greatly appreciate a new release with this fix in!

@mrousavy
Owner

Done, 1.4.0 is out!

@drewandre

Thank you all for this fix! It's very helpful for running the model on a simulator. The only issue I've found so far is that core-ml is not available on the simulator... I had to check whether we're running on an emulator (something I defined in global scope elsewhere) and opt out of core-ml:

  import { Platform } from 'react-native'
  import { useTensorflowModel } from 'react-native-fast-tflite'

  // Use the core-ml delegate only on physical iOS devices; simulators and
  // Android fall back to the default delegate.
  const objectDetection = useTensorflowModel(
    require('assets/efficientdet-lite4-detection-metadata.tflite'),
    Platform.select({
      ios: global.isEmulator ? undefined : 'core-ml',
      android: undefined,
    })
  )
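
For reference, here is a minimal sketch of how the global.isEmulator flag could be set. This assumes react-native-device-info is installed; the flag name and the global declaration are my own convention, not part of react-native-fast-tflite:

  // app-setup.ts (hypothetical file): set the flag once at startup.
  import DeviceInfo from 'react-native-device-info'

  declare global {
    var isEmulator: boolean
  }

  // isEmulatorSync() returns true on iOS simulators and Android emulators.
  global.isEmulator = DeviceInfo.isEmulatorSync()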

@bang9
Contributor

bang9 commented Nov 23, 2024

@mrousavy @PStrakendal The tensorflow directory is linked as a submodule and is not included in the files published via npm.

While it might work fine when cloning the repository, installing the library from npm and building the app can result in missing TensorFlow headers, causing the CMake build to fail.

@lucksp

lucksp commented Dec 3, 2024

The only issue I found so far is core-ml is not available on the simulator

I think this PR may have broken the core-ml delegate when loading newly trained TFLite models:

v 1.3.0 model output with delegate:
"delegate": "core-ml",

Since v1.4.0 (and since Sept 1, 2024), I cannot load a newly trained model with the core-ml delegate passed. Something changed, perhaps? I can now only use the new model without the core-ml delegate.
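
As a possible stopgap (not a fix), something like the sketch below could fall back to the default delegate when loading with core-ml fails. This assumes the library's imperative loadTensorflowModel accepts the same optional delegate argument as the useTensorflowModel hook; the helper name is hypothetical:

  import { loadTensorflowModel } from 'react-native-fast-tflite'

  // Hypothetical helper: try the core-ml delegate first and fall back to the
  // default (CPU) delegate if the model fails to load with it.
  async function loadWithCoreMlFallback(
    source: Parameters<typeof loadTensorflowModel>[0]
  ) {
    try {
      return await loadTensorflowModel(source, 'core-ml')
    } catch (error) {
      console.warn('core-ml delegate failed, falling back to default:', error)
      return await loadTensorflowModel(source)
    }
  }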
