Replies: 2 comments 1 reply
-
I need more, uh, context to help here.
1 reply
-
Yes, gptel in a `*ChatGPT*` buffer, org mode.
I prompt the model to use org-babel code blocks in its answer.
You shouldn't need to do this unless something isn't working: Markdown source blocks in the model's answers should be converted to org-babel blocks automatically.
It's useful to me because when I'm debugging an issue, I can instruct the model to write small code blocks that demonstrate the problem. Then all I need to do is execute the code blocks (C-c C-c) one by one.
The problem is that the model can't see the #+RESULTS blocks, so I have to copy and paste all of them into my prompt manually.
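For context, here is a minimal example of this workflow in an org buffer (the snippet is illustrative, not from the actual session):

```org
#+begin_src emacs-lisp
  ;; A small, self-contained check the model might suggest
  (+ 1 2)
#+end_src

#+RESULTS:
: 3
```

After C-c C-c, org-babel inserts the #+RESULTS block below the source block, and the complaint above is that this block doesn't seem to reach the model.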
Why not? You can check what's actually being sent: Run (setq gptel-expert-commands t), then use the dry-run option in the transient menu to see if the #+RESULTS blocks are being sent as expected.
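Concretely, that check looks like this (a sketch; the exact menu entry names may differ between gptel versions):

```emacs-lisp
;; Enable gptel's extra (expert) commands, which include dry-run:
(setq gptel-expert-commands t)

;; Then open gptel's transient menu from the chat buffer and pick the
;; dry-run option to inspect the full payload that would be sent,
;; including (or not) the #+RESULTS blocks.
```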
I assume it's because everything that changed before the current "boundary" remains static.
I'm not sure what you mean by boundary, but everything in the buffer up to your cursor should be sent.
-
I instruct the model, when debugging, to prefer writing small org-babel code blocks that I can evaluate one by one to understand an issue better.
The idea is to leave a trace of #+RESULTS blocks so the AI can also help debug it further.
However, the model does not seem to be able to see these #+RESULTS blocks.
Would it be feasible to compute a diff of all changes made before my own prompt, so the model can see them? Right now I always have to copy and paste the #+RESULTS blocks into my prompt one by one.
Thanks.