Replies: 4 comments
-
I'm trying to replicate this flow with gptel-write and adding a buffer to the context, but it doesn't exactly match. I would like to be able to provide instructions in a gptel buffer, and get results in another buffer (e.g., a .py file).
-
gptel does not work like this out of the box, but it provides redirection options that can help.
For a dedicated tool that works this way out of the box, you can check out the Elysium package. There may also be other LLM clients that work like this.
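As a rough illustration of the kind of redirection gptel supports, the sketch below sends the active region as a prompt and inserts the response into a different buffer. This is a hedged sketch, not gptel's built-in workflow: it assumes gptel's `gptel-request` function with a `:callback` key (check the docstring in your installed version), and `my/gptel-to-buffer` is a hypothetical name.

```elisp
;; Sketch: send the region from one buffer as a prompt and insert the
;; LLM response into another buffer (e.g. a .py file).  Assumes the
;; `gptel-request' API with a :callback key; verify against your
;; gptel version.  `my/gptel-to-buffer' is a hypothetical helper.
(require 'gptel)

(defun my/gptel-to-buffer (target)
  "Send the active region as a prompt; insert the response in TARGET."
  (interactive "BTarget buffer: ")
  (gptel-request
      (buffer-substring-no-properties (region-beginning) (region-end))
    :callback (lambda (response _info)
                ;; The callback may receive non-string values on
                ;; errors or partial events, so guard with stringp.
                (when (stringp response)
                  (with-current-buffer (get-buffer-create target)
                    (goto-char (point-max))
                    (insert response))))))
```

With something like this, instructions can live in a chat buffer while results accumulate in the target file buffer, which approximates the flow described above.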
-
I don't understand the need for this feature. Could you explain what a "large AI response" is, when you would see it, and how it would be identified?
-
Moving to discussions as this is currently not planned.
-
Description
Implement an artifact-like output window for gptel that displays substantial, standalone content generated by the AI in a dedicated buffer separate from the main conversation. This feature would enhance gptel's functionality by providing a more interactive and visually appealing way to work with large AI-generated outputs.
Proposed Functionality
Dedicated Output Buffer: Create a new buffer called "gptel-artifact" to display large AI responses.
Side-by-Side Display: Show the artifact buffer alongside the main gptel conversation buffer using display-buffer-in-side-window or a similar function.
Syntax Highlighting: Apply appropriate major modes to the artifact buffer based on content type (e.g., markdown-mode, org-mode, or language-specific modes for code).
Version Control: Implement a simple version history for artifact content, allowing users to revert changes or view previous versions.
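The display and highlighting steps above could be sketched roughly as follows. None of these functions exist in gptel; this is a hypothetical illustration of the proposal, using the standard `display-buffer-in-side-window` for the side-by-side layout.

```elisp
;; Hypothetical sketch of the proposed artifact window; not part of
;; gptel.  `my/gptel-show-artifact' is an invented name.
(defun my/gptel-show-artifact (content &optional mode)
  "Display CONTENT in the *gptel-artifact* buffer in a side window.
MODE, if non-nil, is a major mode to apply for syntax highlighting
\(e.g. `python-mode' or `markdown-mode')."
  (let ((buf (get-buffer-create "*gptel-artifact*")))
    (with-current-buffer buf
      (erase-buffer)
      (insert content)
      (when mode (funcall mode)))
    ;; Show the artifact beside the conversation buffer.
    (display-buffer-in-side-window
     buf '((side . right) (window-width . 0.4)))))
```

Version history could then be layered on top, for example by keeping each artifact revision in a list and re-rendering the buffer from a selected entry.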