
Error with Mistral loaded manually #94

Open
CraterCrater opened this issue Nov 13, 2024 · 2 comments
Labels: bug (Something isn't working), ipex-llm

Comments

@CraterCrater

Describe the bug

I downloaded the files for Mistral 7B Instruct v0.3. I had it running successfully under the previous version, but when I updated, all my models were removed.

To Reproduce

Steps to reproduce the behavior:
1. Load the program
2. Select Mistral from the LLMs in the Answer tab
3. Type a prompt
4. Get the error below

Expected behavior

I expected to get a response from the LLM.

Screenshots

[ai-backend]: 2024-11-12 18:25:31,783 - INFO - 127.0.0.1 - - [12/Nov/2024 18:25:31] "POST /api/llm/chat HTTP/1.1" 200 -

Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.59s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:03<00:01, 1.80s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:05<00:00, 1.76s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:05<00:00, 1.75s/it]

[ai-backend]: 2024-11-12 18:25:37,594 - INFO - Converting the current model to sym_int4 format......

[ai-backend]: You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers

[ai-backend]: 2024-11-12 18:25:47,692 - INFO - got prompt: [{'question': 'hello', 'answer': ''}]

[ai-backend]: Setting pad_token_id to eos_token_id:2 for open-end generation.

[ai-backend]: Traceback (most recent call last):

[ai-backend]: File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\service\llm_biz.py", line 72, in stream_chat_generate
model.generate(**args)
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\lookup.py", line 123, in generate
return original_generate(self,
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\speculative.py", line 109, in generate
return original_generate(self,
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\pipeline_parallel.py", line 281, in generate
return original_generate(self,
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\transformers\generation\utils.py", line 1575, in generate
result = self._sample(
^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\transformers\generation\utils.py", line 2697, in _sample
outputs = self(
^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 1157, in forward
outputs = self.model(
^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

[ai-backend]: File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\models\mistral.py", line 217, in mistral_model_forward_4_36
return MistralModel.forward(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 1042, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\transformers\models\mistral\modeling_mistral.py", line 757, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\models\mistral.py", line 1144, in mistral_attention_forward_4_39
return forward_function(
^^^^^^^^^^^^^^^^^
File "C:\Users\rober\AppData\Local\Programs\AI Playground\resources\env\Lib\site-packages\ipex_llm\transformers\models\mistral.py", line 1194, in mistral_attention_forward_4_39_original
query_states, key_states, value_states = xe_linear.forward_qkv(hidden_states,
^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'xe_linear' has no attribute 'forward_qkv'
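
For anyone triaging: the failure is a missing symbol on ipex-llm's compiled extension, not a problem with the model files themselves. A minimal check, run from the Python environment that AI Playground ships (the xe_linear module name is taken straight from the traceback above; whether it imports cleanly outside the backend process is an assumption):

    # Probe the installed xe_linear binary for the symbol the traceback says is missing.
    import xe_linear

    # On an install matching this traceback, 'forward_qkv' should not appear here.
    print(sorted(name for name in dir(xe_linear) if not name.startswith("_")))
    print("forward_qkv present:", hasattr(xe_linear, "forward_qkv"))

If forward_qkv is absent, that would suggest the installed ipex-llm Python code and its native extension are out of sync, e.g. an upgrade that replaced one but not the other.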

Environment (please complete the following information):

  • OS: Windows 11 Pro
  • GPU: Intel Arc A770 16G
  • CPU: i9-10850K
  • Version: v1.22.1-beta

Additional context

I wasn't sure if I had successfully redownloaded all the files for Mistral, but everything I checked looks right. As far as I'm aware, this is a model that should be supported; if I'm wrong about that, please let me know. Thanks!
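
In case it helps reproduce this outside AI Playground, here is a minimal sketch of the same load-and-generate path, assuming the standard ipex-llm Transformers-style API (load_in_4bit=True performs the sym_int4 conversion seen in the log) and a hypothetical local model directory:

    # Minimal repro sketch: load Mistral 7B Instruct v0.3 with ipex-llm's
    # 4-bit (sym_int4) conversion and generate on the Arc GPU (XPU backend).
    import torch
    from transformers import AutoTokenizer
    from ipex_llm.transformers import AutoModelForCausalLM

    model_path = r"C:\path\to\Mistral-7B-Instruct-v0.3"  # hypothetical path

    model = AutoModelForCausalLM.from_pretrained(
        model_path, load_in_4bit=True, trust_remote_code=True
    ).to("xpu")
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    inputs = tokenizer("hello", return_tensors="pt").to("xpu")
    with torch.inference_mode():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

On the environment above, I would expect this to hit the same AttributeError in xe_linear during generate().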

@Nuullll
Contributor

Nuullll commented Dec 5, 2024

Filed an issue for IPEX-LLM: intel-analytics/ipex-llm#12506

@Nuullll
Contributor

Nuullll commented Dec 5, 2024

Temporary workaround: 62920f6
