
Refinement of folder import #426

Merged

Conversation

Collaborator

@wonderwhy-er wonderwhy-er commented Nov 26, 2024

Motivation

PR for refinement of folder import based on feedback from here:
#413 (comment)

Improvements

  • Split the import button into separate files
  • Added code to detect package.json and decide whether to run npx serve or npm install && npm run dev/start/preview after the file import, with a fallback that proposes having the LLM investigate package.json for other commands if none of those scripts are found
  • Added a "loading" state and a warning at 1000 files, since this project does not yet support large projects
    But even 1000 seems too big; it will not fit into the LLM context anyway...
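The decision logic in the second bullet can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code; the helper name chooseStartCommand and the Map-of-files shape are hypothetical.

```typescript
// Hypothetical sketch of the start-command decision described above.
// `chooseStartCommand` and the file map shape are illustrative, not PR code.

interface PackageJson {
  scripts?: Record<string, string>;
}

// Returns a shell command to run after import, or null to fall back to
// asking the LLM to investigate package.json itself.
function chooseStartCommand(files: Map<string, string>): string | null {
  const raw = files.get('package.json');
  if (raw === undefined) {
    // No package.json: just serve the folder statically.
    return 'npx serve';
  }
  let pkg: PackageJson;
  try {
    pkg = JSON.parse(raw);
  } catch {
    return null; // malformed package.json: let the LLM take a look
  }
  // Prefer dev, then start, then preview, as the PR description lists.
  const scripts = pkg.scripts ?? {};
  for (const name of ['dev', 'start', 'preview']) {
    if (name in scripts) {
      return `npm install && npm run ${name}`;
    }
  }
  // package.json exists but has none of the known scripts.
  return null;
}
```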

@mersadwm

Hi, great job, and thank you for adding this feature—it’s fantastic!
I’m not sure if this issue is relevant, but I wanted to bring it up just in case it’s worth investigating.
I’m encountering a problem with imported projects from bolt.new: they don’t seem to work with any APIs. Whenever I try, I consistently get the following error:
“There was an error processing your request: No details were returned.”
However, if I start a new chat or project, the same API works perfectly fine.

@wonderwhy-er
Collaborator Author

Hi, great job, and thank you for adding this feature—it’s fantastic! I’m not sure if this issue is relevant, but I wanted to bring it up just in case it’s worth investigating. I’m encountering a problem with imported projects from bolt.new: they don’t seem to work with any APIs. Whenever I try, I consistently get the following error: “There was an error processing your request: No details were returned.” However, if I start a new chat or project, the same API works perfectly fine.

I think it's related to the size of the context; that is one of the next big things to fix.

@dustinwloring1988 dustinwloring1988 added the enhancement New feature or request label Dec 2, 2024
@wonderwhy-er wonderwhy-er marked this pull request as ready for review December 2, 2024 18:34
@wonderwhy-er
Collaborator Author

There is one issue with this PR's ignoring of files: we still get the whole list through the file input, which could be large.

I could instead use window.showDirectoryPicker but that is not available in Firefox and Safari.

So I could add code that uses it and falls back to the input in Safari and Firefox, but I don't really want to yet.

There are bigger issues with large projects anyway for now.
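The fallback idea above amounts to simple feature detection. A minimal sketch, with the hypothetical helper pickFolderStrategy taking the window-like object as a parameter so the choice is testable:

```typescript
// Hypothetical sketch of the picker fallback discussed above; the helper
// name `pickFolderStrategy` is illustrative, not code from this PR.

type FolderPickerStrategy = 'directory-picker' | 'file-input';

// `host` stands in for `window`. window.showDirectoryPicker exists in
// Chromium-based browsers but not in Firefox or Safari, so those fall back
// to an <input type="file" webkitdirectory> element.
function pickFolderStrategy(host: object): FolderPickerStrategy {
  return 'showDirectoryPicker' in host ? 'directory-picker' : 'file-input';
}
```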

@dustinwloring1988
Collaborator

I like this a lot except for the skipping of certain files, like images. Maybe we could add a tab in the settings modal PR I have, and the user could whitelist file types?

@wonderwhy-er
Collaborator Author

I like this a lot except for the skipping of certain files, like images. Maybe we could add a tab in the settings modal PR I have, and the user could whitelist file types?

The reason images are ignored is that it was not implemented; they need to be handled differently. Currently all files are added into the chat as text. If you allow images, they will end up in the chat as base64 and overload the context.
You can try.

To support non-textual files you will need to invent a future-proof way to add binary files to the chat that are not sent to the AI.
There are different ways to do it.
It's tricky; I would open a discussion in the forum first.
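One common way to separate binary files from text during import is to scan an initial chunk for NUL bytes. A minimal sketch under that assumption; the helper name looksBinary is hypothetical, not the detection used in this PR:

```typescript
// Hypothetical sketch: classify a file as binary before adding it to the
// chat. A NUL byte early in the file is a strong signal it is not text.

function looksBinary(bytes: Uint8Array, sampleSize = 1024): boolean {
  const limit = Math.min(bytes.length, sampleSize);
  for (let i = 0; i < limit; i++) {
    if (bytes[i] === 0) return true;
  }
  return false;
}
```

This misclassifies some formats (e.g. UTF-16 text contains NUL bytes), so a real implementation would likely combine it with an extension allowlist.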

@thecodacus
Collaborator

To support non-textual files you will need to invent a future-proof way to add binary files to the chat that are not sent to the AI.
There are different ways to do it.

Ooh, I just realized we can put base64 content into the chat and it will not overload the context, thanks to the change added in this PR:
#578

toast.info(`Skipping ${binaryFilePaths.length} binary files`);
}

const { userMessage, assistantMessage } = await createChatFromFolder(textFiles, binaryFilePaths, folderName);
@thecodacus thecodacus Dec 7, 2024


Can this return an array of messages instead of an object, so that we can pass them directly?
This will help if we want to return multiple messages instead of one from the user and one from the assistant.

I would like the shell command to be in a separate message with its own artifact, so that I can bundle the files artifact into one single action. It's already present in the main repo.

@thecodacus
Collaborator

@wonderwhy-er, please check the adjustments.

@wonderwhy-er
Collaborator Author

To support non-textual files you will need to invent a future-proof way to add binary files to the chat that are not sent to the AI.
There are different ways to do it.

Ooh, I just realized we can put base64 content into the chat and it will not overload the context, thanks to the change added in this PR: #578

Yeah, I was thinking to allow images and then just filter them out of the context, replacing them with just the path when sending to the AI.
But I have not tested whether base64 files work when sent to the workbench and WebContainer.
That may need additional fixing.
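The filtering idea above could look something like this. A minimal sketch under the assumption that images are stored in the chat as markdown images with a data URL and the file path as alt text; the name stripInlineImages is hypothetical:

```typescript
// Hypothetical sketch: keep base64 images in the stored chat, but replace
// them with their path placeholder before the text is sent to the model.

function stripInlineImages(content: string): string {
  // Matches markdown images whose src is a data URL, e.g.
  // ![assets/logo.png](data:image/png;base64,....), and keeps only the
  // path from the alt text.
  return content.replace(
    /!\[([^\]]*)\]\(data:[^)]+\)/g,
    (_match, altPath: string) => `[binary file: ${altPath}]`,
  );
}
```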

@wonderwhy-er wonderwhy-er merged commit 5b6b26b into stackblitz-labs:main Dec 8, 2024
1 check passed
@thecodacus
Collaborator

But I have not tested whether base64 files work when sent to the workbench and WebContainer.
That may need additional fixing.

Yes, we need additional tags on the action tags.
