Hi. I followed the setup in the README and ran `bash omni_speech/infer/run.sh omni_speech/infer/examples`, but encountered this error:
```
Traceback (most recent call last):
  File "/home/rczheng/LLaMA-Omni/omni_speech/infer/infer.py", line 181, in <module>
    eval_model(args)
  File "/home/rczheng/LLaMA-Omni/omni_speech/infer/infer.py", line 94, in eval_model
    tokenizer, model, context_len = load_pretrained_model(model_path, args.model_base, is_lora=args.is_lora, s2s=args.s2s)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/builder.py", line 79, in load_pretrained_model
    model = model_cls.from_pretrained(
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3798, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech2s_llama.py", line 25, in __init__
    super().__init__(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech_llama.py", line 46, in __init__
    self.model = OmniSpeechLlamaModel(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/language_model/omni_speech_llama.py", line 38, in __init__
    super(OmniSpeechLlamaModel, self).__init__(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/omni_speech_arch.py", line 32, in __init__
    self.speech_encoder = build_speech_encoder(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/speech_encoder/builder.py", line 7, in build_speech_encoder
    return WhisperWrappedEncoder.load(config)
  File "/home/rczheng/LLaMA-Omni/omni_speech/model/speech_encoder/speech_encoder.py", line 26, in load
    encoder = whisper.load_model(name=model_config.speech_encoder, device='cuda').encoder
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/__init__.py", line 154, in load_model
    model = Whisper(dims)
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 256, in __init__
    self.encoder = AudioEncoder(
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 181, in __init__
    self.register_buffer("positional_embedding", sinusoids(n_ctx, n_state))
  File "/home/rczheng/anaconda3/envs/llama-omni/lib/python3.10/site-packages/whisper/model.py", line 66, in sinusoids
    inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2))
RuntimeError: "exp_vml_cpu" not implemented for 'Half'
```
Could you please give some insight into solving this problem?
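Not a maintainer, but the traceback suggests a likely cause: `torch.exp` is being called on a CPU half-precision tensor inside whisper's `sinusoids()`, which happens when the Whisper encoder is constructed while the global default dtype is float16 (e.g. after `from_pretrained(..., torch_dtype=torch.float16)`). On older PyTorch builds, `exp` is not implemented for `Half` on CPU. A minimal, hedged sketch of one workaround is to temporarily restore a float32 default around the Whisper load; `load_encoder_fp32` below is a hypothetical helper, not part of the repo:

```python
import torch

def load_encoder_fp32(load_fn):
    """Run load_fn under a float32 default dtype, then restore the old default.

    Avoids whisper's sinusoids() calling torch.exp on a CPU half tensor,
    which older PyTorch versions reject with:
        RuntimeError: "exp_vml_cpu" not implemented for 'Half'
    """
    prev = torch.get_default_dtype()
    torch.set_default_dtype(torch.float32)
    try:
        return load_fn()
    finally:
        # Restore whatever default the caller (e.g. from_pretrained) had set.
        torch.set_default_dtype(prev)

# Hypothetical usage inside speech_encoder.py (whisper import assumed):
#   encoder = load_encoder_fp32(
#       lambda: whisper.load_model(name=model_config.speech_encoder,
#                                  device='cuda').encoder
#   )
#   encoder = encoder.half()  # optionally cast back to fp16 afterwards
```

Alternatives with the same effect: patch whisper's `sinusoids()` to compute in float32 before casting, or upgrade PyTorch, since newer releases implement `exp` for CPU half tensors.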