Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

As shown in the attached screenshot.

Expected Behavior

A normal answer.

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response
Hi, could you share how you are running the quantized model and describe your environment? That would help me analyze the problem.
I used these two models: https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf/resolve/main/Model-7.6B-Q4_K_M.gguf and https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf/resolve/main/Model-7.6B-Q4_0.gguf, with this library: https://github.com/mybigday/llama.rn (version 0.3.10), on an iPad Air 5 running iOS 17.3.
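For context, loading a GGUF model with llama.rn in a React Native app looks roughly like this. This is a minimal sketch based on llama.rn's documented API (`initLlama` / `completion`); the model path, prompt, and parameters are placeholders for illustration, not the reporter's exact setup:

```typescript
import { initLlama } from 'llama.rn'

// Hypothetical helper; modelPath would point at a GGUF file bundled
// with or downloaded into the app (e.g. Model-7.6B-Q4_K_M.gguf).
async function runModel(modelPath: string): Promise<string> {
  // Load the quantized GGUF model.
  const context = await initLlama({
    model: modelPath,
    n_ctx: 2048,       // context window size
    n_gpu_layers: 99,  // offload layers to Metal on iOS; 0 forces CPU
  })

  // Run a simple completion; an unsupported model architecture
  // typically fails at load time or produces garbled output here.
  const { text } = await context.completion({
    prompt: 'Hello, who are you?',
    n_predict: 128,
  })
  return text
}
```

Note that llama.rn wraps llama.cpp, so if llama.cpp itself does not yet support a model's architecture, no calling code on the app side can make these GGUF files work.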
Understood. The likely cause is that GGUF support for MiniCPM-omni has not yet been merged into the official llama.cpp branch. Please wait a while; we are working on merging the PR.