Pipe LLM response to files #14
It is easy enough to send the output to a file with > and >>, but I guess you're talking about something more intelligent, for instance updating a function definition in a file, or splitting one response into multiple files?
@simonw's llm tool already stores every response inside a SQLite database by default, in case that is not known; you can use his `llm logs` command to get them back out. If you want to go really nuts with the responses from the LLM, have a look at e.g. LangChain or Instructor. LangChain offers a broad spectrum of ways to persist responses, from in-memory and simple files to several databases; see langchain_community and langchain-postgres. Instructor is more lightweight and focuses on forcing LLMs to return data in a structured form (it generates JSON schemas on the fly and adds them to the prompt, leveraging tool calling if available), so the responses can easily be processed further. For TypeScript aficionados like me, there is also an official port, instructor-js. This is so brilliant: JSON (TypeScript) + prompt -> LLM -> JSON back.
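For the multi-file case discussed here, the Instructor pattern is quite direct. A minimal sketch, assuming Instructor's `from_openai` wrapper and an `OPENAI_API_KEY` in the environment; the `GeneratedFile`/`MultiFileResponse` models are hypothetical names for illustration, not part of any of these libraries:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class GeneratedFile(BaseModel):
    path: str      # relative path the file should be written to
    content: str   # full file contents

class MultiFileResponse(BaseModel):
    files: list[GeneratedFile]

# Instructor wraps the OpenAI client so the response is validated
# against the Pydantic schema (retrying on malformed output).
client = instructor.from_openai(OpenAI())

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=MultiFileResponse,
    messages=[{"role": "user",
               "content": "Write a hello-world script plus a README."}],
)
for f in resp.files:
    print(f.path, len(f.content))
```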
Thanks for your input @irthomasthomas and @fry69. I was thinking of a unix-like tool for parsing stdout responses, e.g. a response containing several code blocks labelled with target filenames, into corresponding files on the filesystem. What would be a typical approach for this? I was initially experimenting with some cobbled-together regexes. It looks like Instructor etc. is close, but I'm unsure how I would approach it in this case.
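One way to do that without a heavyweight framework is a small regex pass over the markdown. A minimal sketch, assuming a hypothetical convention where each fenced block is preceded by a bold `**path/to/file**` line; real responses vary in format, which is exactly why Instructor-style structured output is attractive:

```python
import re
import sys
from pathlib import Path

# Assumed convention: each code block is preceded by a bold filename,
# e.g. **src/app.py** followed by a fenced block with the file body.
BLOCK_RE = re.compile(
    r"\*\*(?P<path>[\w./-]+)\*\*\s*```\w*\n(?P<body>.*?)```",
    re.DOTALL,
)

def split_response(text: str) -> None:
    for match in BLOCK_RE.finditer(text):
        path = Path(match["path"])
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(match["body"])
        print(f"wrote {path}")

if __name__ == "__main__":
    split_response(sys.stdin.read())
```

Then something along the lines of `llm 'scaffold a flask app' | python split_response.py` would land each block in its own file (the exact prompt and filename convention are assumptions here).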
@eddie: there is a command-line tool, chatgptextractcodeblock, in my chatgpt tool suite that extracts the code block, though that's done in JavaScript. That also uses my "put it into the AI's mouth" pattern. Some of that might be implemented in LLM or its plugins, or partially into files-to-prompt, by somebody.
I've been leveraging your llm, strip-tags, and ttok tools and this is the perfect addition! Thank you @simonw!
I have been toying with the reverse of this, where we can pipe typical LLM responses back to the filesystem with a quick confirmation step. Would this functionality belong in the files-to-prompt tool or in its own utility? I'm curious if anyone knows of a CLI tool that already does this (rough sketch of the idea below).
Many thanks!
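The confirmation step itself is small. A rough sketch (a hypothetical helper, not an existing tool) of what the write-back with confirmation could look like, reusing the extraction idea above:

```python
from pathlib import Path

def confirm_write(path: Path, content: str) -> None:
    """Preview the first few lines, then ask before touching disk."""
    print(f"--- {path} ---")
    print("\n".join(content.splitlines()[:5]))
    if input(f"Write {path}? [y/N] ").strip().lower() == "y":
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
        print(f"wrote {path}")
    else:
        print(f"skipped {path}")
```

One design caveat: if the LLM response arrives on stdin via a pipe, the confirmation prompt would need to read from the terminal (e.g. /dev/tty) rather than stdin, since stdin is already consumed by the piped response.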