-
We are looking into EBNF support within guidance. As to "why not the default?", the reason is that …
-
I tend to get a bit rambly at times, so it's nice to hear that it's appreciated 😆
That is essentially the goal here! You write Python, and we turn that into something that can be used to 🤞 efficiently guide an LLM.
Great question. Hopefully I can help address it a bit...

When you call a guidance function, nothing really happens. That's imprecise, but what I mean is that all that really goes on under the hood is that the Python code inside the function is used to generate a grammar (sort of like a compilation step). Functions that require dynamic Python control flow are a little more complicated than that, but that's the general idea (this is the distinction between "stateless" and "stateful" grammars that you can find in the readme). Adding strings and/or the grammars returned by guidance functions together simply concatenates the strings/grammars. Still no LLM calls. Generation only happens when one of these grammars is added to the right of a model object.

Your prompts should never really be "modified" by this library (beyond possibly some chat/role blocks, which I believe are more or less "opt-in"). Adding a string to an LLM is quite literally appending that string to the "prompt" used for further generation. Adding a grammar just runs the LLM forward on that grammar, appending the output of that generation to the downstream "prompt". I'm putting quotes on "prompt" because I think a nice insight of this library is that there really isn't a distinction between prompts and deterministic generation constraints.
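To make that concrete, here's a minimal sketch of the flow described above (assuming the guidance>=0.1 API; the model and the prompt are just placeholders):

```python
import guidance
from guidance import models, select

# Calling this function only builds a grammar -- no LLM call happens here.
@guidance(stateless=True)
def yes_no(lm):
    return lm + select(["yes", "no"], name="answer")

lm = models.Transformers("gpt2")  # placeholder model

# Adding a string literally appends it to the "prompt". Still no generation.
lm += "Is the sky blue? Answer: "

# Adding a grammar runs the model forward, constrained by the grammar,
# and appends the generated text to the downstream "prompt".
lm += yes_no()

print(lm["answer"])  # the captured, grammar-constrained generation
```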
-
Just as a follow-up, I wanted to see if there's been any update on this. It would be so cool to be able to use (E)BNF grammars directly in guidance. @hudson-ai
-
Context:
I'm slowly wrapping my head around guidance's way of working and evaluating whether it's worth the added layer of abstraction.
One thing that strikes me as odd in the examples (although this might just say something about my current level of understanding) is that CFG grammars are consistently defined using Guidance operators, even though BNF notation already exists. By contrast, regex patterns are defined using the familiar regex DSL.
I suppose there are advantages to using a custom grammar DSL in terms of getting values from the generated output. I'm concerned, though, that I may at some point encounter a use case that is easier (or only possible) to define in BNF notation, which obviously factors into how I evaluate Guidance.
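For illustration, here's the operator style I'm referring to, next to the BNF I'd naturally reach for instead (a sketch in the style of the readme's number example; I'm assuming the select/one_or_more operators from guidance):

```python
import guidance
from guidance import one_or_more, select

# Operator style: a grammar for an optionally negative integer.
# Roughly equivalent BNF:
#   <number> ::= <digits> | "-" <digits>
#   <digits> ::= <digit> | <digit> <digits>
#   <digit>  ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
@guidance(stateless=True)
def number(lm):
    digits = one_or_more(select([str(d) for d in range(10)]))
    return lm + select([digits, "-" + digits])
```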
Questions:
There's a good chance my questions simply reveal I haven't fully understood the power of Guidance yet. Anyway, thanks in advance for any extra info!