When vscode shows a popup completion item (i.e. what used to be called IntelliSense: a language symbol or function that vscode knows about), any inline completion is supposed to start with that completion item. That is to say, the completion item should be appended to the end of the prefix. Take the following Python example:
```python
file_path = '/tmp/my-file'
with open(file_path, "r") as handle:
    # imagine the developer is in the middle of typing the period below
    obj = json.
    if obj.myField:
        print('my field is present')
```
So imagine the developer is typing the `.` in the line `obj = json.`. vscode will pop up possible completions for `json`, and the method `loads` will likely be the top completion. The prefix that is sent to the LLM should use a value of `obj = json.loads` for that line. The suffix that comes after should also be included as normal. vscode will ignore any suggestion that does not start with `json.loads`, so it should always be included.
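As a rough sketch (not the exact prompt template or FIM tokens that llm-ls builds), the prefix/suffix pair for the example above would look something like this, assuming the cursor sits right after the `.` and `loads` is the selected popup item:

```typescript
// Illustration only: the prefix ends with the popup item, the suffix is unchanged.
const prefix = [
  "file_path = '/tmp/my-file'",
  'with open(file_path, "r") as handle:',
  '    # imagine the developer is in the middle of typing the period below',
  '    obj = json.loads', // "loads" comes from the popup, not from the document
].join('\n');

const suffix = [
  '', // rest of the current line after the cursor (empty here)
  '    if obj.myField:',
  "        print('my field is present')",
].join('\n');
```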
The range returned for the `vscode.InlineCompletionItem` should be properly adjusted for this as well.
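To make the intent concrete, here is a minimal sketch of how a provider could honor the popup item using the stock VS Code API. This is not the actual llm-vscode code; `queryBackend` is a hypothetical stand-in for the request that goes to llm-ls / the inference backend:

```typescript
import * as vscode from 'vscode';

const provider: vscode.InlineCompletionItemProvider = {
  async provideInlineCompletionItems(document, position, context, _token) {
    const docStart = new vscode.Position(0, 0);
    const docEnd = document.lineAt(document.lineCount - 1).range.end;

    let prefix = document.getText(new vscode.Range(docStart, position));
    let range = new vscode.Range(position, position);
    let popupText = '';

    // When the completion popup is open, VS Code reports the selected item here.
    // Any inline completion that does not start with this text will be discarded.
    const selected = context.selectedCompletionInfo;
    if (selected) {
      // Rebuild the prefix so it ends with the popup item (e.g. "obj = json.loads")
      // and widen the range so the suggestion replaces what was already typed.
      prefix =
        document.getText(new vscode.Range(docStart, selected.range.start)) + selected.text;
      range = new vscode.Range(selected.range.start, position);
      popupText = selected.text;
    }

    const suffix = document.getText(new vscode.Range(position, docEnd));
    const continuation = await queryBackend(prefix, suffix); // hypothetical helper
    return [new vscode.InlineCompletionItem(popupText + continuation, range)];
  },
};

// Hypothetical stand-in for the request llm-vscode makes to llm-ls / TGI.
async function queryBackend(prefix: string, suffix: string): Promise<string> {
  return '';
}
```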
@McPatate I opened this issue here -- however, I think llm-ls will have to change something to properly support this, as my understanding is that llm-ls selects the code from the document that will be sent to TGI (the inference backend).
I opened https://github.com/huggingface/llm-ls/issues/66 as I expect that project will need to make some changes to fix this, but I was instructed to open an issue here.