This repository aims to run generative AI models in a simple React Native application.
There are various players in the space:

- [ ] MLC.ai - https://llm.mlc.ai/docs/deploy/android.html / https://llm.mlc.ai/docs/deploy/ios.html
- [ ] ExecuTorch - Android example with C API
- [ ] llm.c
- [ ] llama.cpp
Set up the scaffolding to call a hello-world function in a compiled C program.

- [ ] Q1: Given a simple main.c, a library, and header files, how do you get them into the app?
- [ ] Q2: How do you call the compiled binary from a local native module?
First objective: load a model onto the device.

- [ ] Q1: Are there any starter guides?
- [ ] Q2: What are the steps for invoking the models on Android and iPhone?
- [ ] Q3: Finding a Llama tokenizer.
- [ ] Q4: Calling the model to generate.