
Error while inference #49

Open
ZhengRachel opened this issue Oct 25, 2024 · 1 comment

Comments

@ZhengRachel
Hi. I followed the setup in the README and ran bash omni_speech/infer/run.sh omni_speech/infer/examples, but encountered this error:

Traceback (most recent call last):
  File "/home/rczheng/LLaMA-Omni/omni_speech/infer/infer.py", line 181, in <module>
    eval_model(args)
  File "/home/rczheng/LLaMA-Omni/omni_speech/infer/infer.py", line 94, in eval_model
    tokenizer, model, context_len = load_pretrained_model(model_path, args.model_base, is_lora=args.is_lora, s2s=args.s2s)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/builder.py", line 79, in load_pretrained_model
    model = model_cls.from_pretrained(
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3798, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech2s_llama.py", line 25, in __init__
    super().__init__(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech_llama.py", line 46, in __init__
    self.model = OmniSpeechLlamaModel(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech_llama.py", line 38, in __init__
    super(OmniSpeechLlamaModel, self).__init__(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/omni_speech_arch.py", line 32, in __init__
    self.speech_encoder = build_speech_encoder(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/speech_encoder/builder.py", line 7, in build_speech_encoder
    return WhisperWrappedEncoder.load(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/speech_encoder/speech_encoder.py", line 26, in load
    encoder = whisper.load_model(name=model_config.speech_encoder, device='cuda').encoder
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/__init__.py", line 154, in load_model
    model = Whisper(dims)
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 256, in __init__
    self.encoder = AudioEncoder(
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 181, in __init__
    self.register_buffer("positional_embedding", sinusoids(n_ctx, n_state))
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 66, in sinusoids
    inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2))
RuntimeError: "exp_vml_cpu" not implemented for 'Half'

Could you please give some insight into solving this problem?
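A possible cause, judging from the traceback: transformers' from_pretrained switches torch's global default dtype to float16 while instantiating the model, so when OmniSpeechLlamaModel's constructor calls whisper.load_model, Whisper's sinusoids() helper computes its positional embedding in Half on the CPU, where torch.exp is not implemented in older PyTorch builds. A minimal sketch of one workaround, assuming this diagnosis is correct, is to force float32 just around the Whisper load (e.g. inside WhisperWrappedEncoder.load in omni_speech/model/speech_encoder/speech_encoder.py):

```python
import contextlib

import torch


@contextlib.contextmanager
def default_dtype(dtype):
    """Temporarily switch torch's global default dtype, restoring it on exit."""
    previous = torch.get_default_dtype()
    torch.set_default_dtype(dtype)
    try:
        yield
    finally:
        torch.set_default_dtype(previous)


# Hypothetical usage around the failing call in speech_encoder.py
# (whisper/model_config names are taken from the traceback above):
#
# with default_dtype(torch.float32):
#     encoder = whisper.load_model(name=model_config.speech_encoder,
#                                  device='cuda').encoder
```

The context manager keeps the float32 override scoped to the Whisper load, so the rest of the LLaMA-Omni model is still built with whatever dtype transformers selected. Converting the returned encoder back to half precision afterwards (encoder.half()) may be needed to match the surrounding model.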

@jin1258804025

Is there a solution to this problem yet?
