Client is not running #93
I have the same cryptic error, using llama.cpp.
The client isn't compatible with llama.cpp at the moment. As for the errors you're facing, I cannot debug without the full error message. Could you share your settings as well?
That is all I am able to get as an error message. These are my extension settings:

{
  "llm.attributionEndpoint": "https://stack.dataportraits.org/overlap",
  "llm.attributionWindowSize": 250,
  "llm.configTemplate": "bigcode/starcoder",
  "llm.contextWindow": 8192,
  "llm.documentFilter": {
    "pattern": "**"
  },
  "llm.enableAutoSuggest": true,
  "llm.fillInTheMiddle.enabled": true,
  "llm.fillInTheMiddle.middle": "<fim_middle>",
  "llm.fillInTheMiddle.prefix": "<fim_prefix>",
  "llm.fillInTheMiddle.suffix": "<fim_suffix>",
  "llm.lsp.binaryPath": null,
  "llm.lsp.logLevel": "warn",
  "llm.maxNewTokens": 60,
  "llm.modelIdOrEndpoint": "bigcode/starcoder",
  "llm.temperature": 0.2,
  "llm.tlsSkipVerifyInsecure": false,
  "llm.tokenizer": null,
  "llm.tokensToClear": [
    "<|endoftext|>"
  ]
}
@NicolasAG it looks like you're running VSCode on a remote env, what platform is it?
That is correct! :) This is my setup: I have an interactive job that mounts my code and runs an infinite loop. I SSH into that job directly from VSCode so I can write code on the server and launch quick debug scripts.
I'm not sure llm-ls supports remote file setups for now, given it probably runs on your machine and not the remote host. By "what platform" I meant: what OS and architecture is the machine your remote code is stored on?
Ah right, my remote env is this: But note that it was working fine the first time I installed it a few months ago...
It does not run the model locally, it queries an API that is by default our inference API but can be any API you choose. There were some major changes recently: we now have https://github.com/huggingface/llm-ls running with the extension. Your error is saying that the connection between VSCode and llm-ls is broken for some reason. I work on an Apple silicon Mac and have had no issues running the server though, and I'm pretty sure it should also work on x86 Macs.
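To illustrate the "any API you choose" part, here is a minimal sketch of settings that point the extension at a custom endpoint, reusing the "llm.modelIdOrEndpoint" and "llm.configTemplate" keys from the settings posted earlier in this thread (the URL is a hypothetical placeholder, not a real server):

    {
      "llm.configTemplate": "bigcode/starcoder",
      "llm.modelIdOrEndpoint": "https://my-own-server.example.com/generate"
    }

Leaving "llm.modelIdOrEndpoint" as a plain model ID such as "bigcode/starcoder" keeps the default behaviour of querying the hosted inference API, as described above.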
I'm facing the same issue. I'm running on an x86 Mac (Ventura 13.4), VSCode 1.83.1, and llm-vscode 0.1.5. If I roll the llm-vscode extension back to 0.0.38, everything works fine. After going back and forth multiple times (mostly to confirm before commenting), llm-vscode 0.1.5 is working again. I didn't change anything else and the remote instance running the model has not changed. When the llm-ls server was failing, there was an error message in the Output tab.
I was able to get more details on the root cause of the error.
PS: @McPatate I created #97 because I thought you wanted a separate issue, but now I'm not sure anymore ^^ feel free to close it, or respond in #97 if you decide to keep it ;)
Same problem here. Error when using the extension on a remote server, where it used to work fine but doesn't now. In case it is of any use, here is the info of the remote and local machines. Remote machine:
Local machine:
For me, the issue seems to be llm-ls running on a Mac after rebooting. I tracked it down to the Rust Instant library and was able to create an llm-ls binary that now works for me. I opened an issue on the llm-ls repo describing the problem and my interim solution: huggingface/llm-ls#37
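As a side note, here is a minimal Rust sketch of what a defensive workaround around Instant arithmetic could look like. This assumes the panic comes from non-monotonic clock readings after a reboot; it is only an illustration, not the actual patch from huggingface/llm-ls#37:

    use std::time::{Duration, Instant};

    fn main() {
        let start = Instant::now();
        let now = Instant::now();

        // Plain `now - start` / `duration_since` used to panic on older Rust
        // versions when `start` ends up later than `now`, which can happen if
        // the monotonic clock misbehaves (the assumed failure mode here).
        // The saturating/checked variants never panic.
        let elapsed: Duration = now.saturating_duration_since(start);
        println!("elapsed: {:?}", elapsed);

        // checked_duration_since returns None instead of panicking.
        match now.checked_duration_since(start) {
            Some(d) => println!("checked elapsed: {:?}", d),
            None => println!("clock went backwards; treating elapsed as zero"),
        }
    }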
I have very similar logs to @NicolasAG
This issue is stale because it has been open for 30 days with no activity.
Additional details:
This issue is stale because it has been open for 30 days with no activity.
Is there any solution for the "client is not running" issue in a local environment?
I get the "client is not running" issue in a local Ubuntu environment.
This issue is stale because it has been open for 30 days with no activity.
Hi, I have the same issue locally on Windows:

2024-05-31 17:36:57.272 [info] note: Some details are omitted, run with
2024-05-31 17:36:57.300 [info] [Error - 17:36:57] Server initialization failed.
2024-05-31 17:36:57.372 [info] note: Some details are omitted, run with
2024-05-31 17:36:57.375 [info] [Error - 17:36:57] Server initialization failed.
2024-05-31 17:36:57.417 [info] note: Some details are omitted, run with
2024-05-31 17:36:57.420 [info] [Error - 17:36:57] Server initialization failed.
2024-05-31 17:36:57.461 [info] note: Some details are omitted, run with
2024-05-31 17:36:57.464 [info] [Error - 17:36:57] Server initialization failed.
2024-05-31 17:36:58.570 [info] [Error - 17:36:58] Server process exited with code 101.
2024-05-31 17:36:58.571 [info] [Error - 17:36:58] Server initialization failed.

VSCode version:
Hi, I have also activated the extension in WSL: "WSL: Ubuntu-22.04"
I'm also getting this error on v0.2.2 (and older versions) on Windows.
This issue is stale because it has been open for 30 days with no activity.
That's what I get instead of autocompletion, whenever I type:
Quite cryptic TBH.
Runtime status:
Using Windows 11 with my local LM Studio server.