
Cannot build monsterapi/gemma-2b-lora-maths-orca-200k using LLM Conversion script #5750

Open · hassanabidpk opened this issue Nov 22, 2024 · 2 comments

Labels: platform:python (MediaPipe Python issues) · task:LLM inference (Issues related to MediaPipe LLM Inference Gen AI setup) · type:support (General questions)

@hassanabidpk

I am trying to convert monsterapi/gemma-2b-lora-maths-orca-200k to a TFLite Flatbuffers file as described here, but I am getting errors.

I am reusing this script and added the functions below:

import os
from huggingface_hub import hf_hub_download

def gemma2b_lora_download(token):
  REPO_ID = "monsterapi/gemma-2b-lora-maths-orca-200k"
  FILENAMES = ["tokenizer.json", "tokenizer_config.json", "adapter_model.safetensors"]
  # Use the token passed in; never hard-code an access token in the notebook.
  os.environ['HF_TOKEN'] = token
  # Download the tokenizer files and the LoRA adapter weights.
  # (The notebook's `with out:` display wrapper is omitted so the snippet is self-contained.)
  for filename in FILENAMES:
    hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir="./gemma-2b-lora-it")

def gemma2b_lora_convert_config(backend):
  input_ckpt = '/content/gemma-2b-lora-it/'
  vocab_model_file = '/content/gemma-2b-lora-it/'
  output_dir = '/content/intermediate/gemma-2b-lora-it/'
  output_tflite_file = f'/content/converted_models/gemma_base_lora_{backend}.bin'
  lora_output_tflite_file = f'/content/converted_models/gemma_lora_{backend}.bin'
  return converter.ConversionConfig(
      input_ckpt=input_ckpt,
      ckpt_format='safetensors',
      model_type='GEMMA_2B',
      backend=backend,
      output_dir=output_dir,
      combine_file_only=False,
      vocab_model_file=vocab_model_file,
      output_tflite_file=output_tflite_file,
      lora_output_tflite_file=lora_output_tflite_file)
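
For context, here is a minimal sketch of how this config would be run, assuming converter is mediapipe.tasks.python.genai.converter as in the original conversion notebook:

from mediapipe.tasks.python.genai import converter

# Build the config for the chosen backend ('cpu' or 'gpu') and run the
# conversion; convert_checkpoint writes the .bin files to the paths
# configured above.
config = gemma2b_lora_convert_config(backend='gpu')
converter.convert_checkpoint(config)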

Colab Enterprise crashes every time I run the conversion.

Is there an example of converting this trained model to the .bin format for running on Android?

@hassanabidpk added the type:others (issues not falling in bug, performance, support, build and install or feature) label on Nov 22, 2024
@kalyan2789g added the type:support, platform:python, and task:LLM inference labels and removed the type:others label on Nov 22, 2024
@kalyan2789g (Collaborator)

Hi @hassanabidpk,

We are looking into the issue. Could you please confirm the Gemma model name/configuration you are working with, and share the complete error logs so we can analyse the issue in detail?

We will keep you posted on the progress.

Thanks.

@kalyan2789g added the stat:awaiting response (Waiting for user response) label on Nov 22, 2024
@hassanabidpk (Author)

I am using this trained Gemma 2B model: https://huggingface.co/monsterapi/gemma-2b-lora-maths-orca-200k, which is based on google/gemma-2b.

I am attaching the notebook and log file:
app.log
llm_conversion_notebook.txt
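
As an aside, FILENAMES above only downloads the tokenizer and the LoRA adapter weights, so one plausible cause of the crash is that input_ckpt lacks the base google/gemma-2b weights. A minimal sketch of pulling them as well; the file patterns are assumptions about the repo layout:

from huggingface_hub import snapshot_download

# Hypothetical: fetch the base model weights into the same directory so
# that input_ckpt contains a complete checkpoint. allow_patterns is an
# assumption about which files the converter actually needs.
snapshot_download(
    repo_id="google/gemma-2b",
    local_dir="./gemma-2b-lora-it",
    allow_patterns=["*.safetensors", "tokenizer*"])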

@google-ml-butler (bot) removed the stat:awaiting response (Waiting for user response) label on Nov 22, 2024