Realizing External Behaviour #89
OWigginsHay
started this conversation in Communication
-
Great work! Could it be beneficial to add a python function to the Functional Assistant to look up the functions it has already created, which now live in the stored python files? That should allow the Functional Assistant to chain tools and provide the Interface Assistant with more succinct function calls for more complicated tasks.
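As a rough illustration of the suggestion, a lookup tool could scan the stored python files and return the name and docstring of each function already generated. The FUNCTION_STORE path and the one-function-per-file layout below are assumptions, not the repo's actual conventions:

```python
import ast
from pathlib import Path

# Hypothetical location of the generated python files.
FUNCTION_STORE = Path("function_store")

def list_stored_functions() -> list[dict]:
    """Catalogue every function already created, so the Functional Assistant
    can chain or reuse existing tools instead of regenerating them."""
    catalogue = []
    for py_file in FUNCTION_STORE.glob("*.py"):
        tree = ast.parse(py_file.read_text())
        for node in tree.body:
            if isinstance(node, ast.FunctionDef):
                catalogue.append({
                    "module": py_file.stem,
                    "name": node.name,
                    "description": ast.get_docstring(node) or "",
                })
    return catalogue
```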
-
Let's Look at the Minimum Viable Agent
Many discussions have been looking at swarm communication. I believe, however, that an assistant on its own is not sufficiently complex to be a complete "Agent" or "Node". Here I will cover the architecture I used to get the results in #62.
The premise behind the auto-tool work is similar to agents that create agents: there is a minimum tool that allows for the construction of all other tools. It is request_function, which takes the schema of the new tool as its arguments.
But on its own, that would only add tools to the assistant; it would not make an externally interactable function.
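Since the exact argument list is not reproduced here, the following is a purely illustrative sketch of what the request_function tool schema could look like in OpenAI tool format; the field names are my assumption, not the repo's definition:

```python
# Hypothetical schema for the minimum tool: a single function whose only job
# is to request the creation of other functions.
REQUEST_FUNCTION_TOOL = {
    "type": "function",
    "function": {
        "name": "request_function",
        "description": "Ask the Functional Assistant to build a new python function.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "Name of the new function."},
                "description": {
                    "type": "string",
                    "description": "What the function should do. Also used to relay "
                                   "errors and feedback on a previous attempt.",
                },
                "parameters": {
                    "type": "object",
                    "description": "JSON schema of the new function's arguments.",
                },
            },
            "required": ["name", "description", "parameters"],
        },
    },
}
```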
Design
Interface
The bit the user interacts with. It talks in natural language, it can converse, and it is the only assistant type which has function tools, so it gets to decide when to invoke them. This combination means the Interface is the conversational front: it interprets language and can turn requests for tools into request_function commands. This is an Interface Agent with an Interface Thread for persistent context.
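A minimal sketch of that setup using the OpenAI Assistants API (the instructions, model choice and variable names are placeholders, not the repo's actual configuration):

```python
from openai import OpenAI

client = OpenAI()

# One assistant holding the function tools (including request_function),
# plus one persistent thread acting as the Interface Thread.
interface_assistant = client.beta.assistants.create(
    name="Interface Assistant",
    model="gpt-4-turbo",
    instructions=(
        "Converse with the user in natural language. When a capability is "
        "missing, call request_function to have it built."
    ),
    tools=[REQUEST_FUNCTION_TOOL],  # the schema sketched earlier
)

interface_thread = client.beta.threads.create()
```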
Functional
Here, when the Interface Run makes a request for a new function, the structure is sent to a Functional Thread where a Functional Assistant is tasked with turning the schema into an actual python function, which is then stored. This is the externally interacting part: now when the Interface Assistant calls a tool, the call is directed to perform a module import from the python store, where the python code is read and executed and the result is returned to the Interface Thread.
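The dispatch step could be as simple as the following sketch, assuming one generated module per tool in a FUNCTION_STORE directory and a function named after the tool (both assumptions on my part):

```python
import importlib.util
import json
from pathlib import Path

FUNCTION_STORE = Path("function_store")  # assumed location of the python store

def dispatch_tool_call(tool_name: str, arguments_json: str) -> str:
    """Import the stored module for a tool call, execute the function with the
    arguments supplied by the Interface Run, and return the result (or the
    error) as a string the Interface Thread can read."""
    module_path = FUNCTION_STORE / f"{tool_name}.py"
    spec = importlib.util.spec_from_file_location(tool_name, module_path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)          # run the stored python code
        result = getattr(module, tool_name)(**json.loads(arguments_json))
        return json.dumps({"result": result}, default=str)
    except Exception as exc:                     # broken code is reported, not fatal
        return json.dumps({"error": f"{type(exc).__name__}: {exc}"})
```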
Why this works
The Interface interprets the user and can generate abstract interfaces for functions. The Interface can use the Description field to provide feedback or instruction to the Functional Assistant.
The Functional Assistant produces code which is saved locally to a predetermined location. The code returns errors and structured results for the Interface to see. There is a fully circular communication loop.
Surely the Functional Assistant can produce broken python code?
Yes. But it does not matter, because the Interface sees it is broken when it runs, and relays the errors in the description field for the Functional Assistant to see.
Because each agent has a dedicated thread with memory, the Interface quickly becomes aware of error trends, and the Functional Assistant quickly becomes aware of what kind of feedback and error handling the Interface needs to make good decisions.
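To make that loop concrete, a repair request from the Interface side might look like the sketch below: the error captured at execution time is folded back into the description field of a fresh request_function call (the helper name and structure are illustrative only):

```python
def build_repair_request(tool_name: str, error_message: str, schema: dict) -> dict:
    """Illustrative only: fold a runtime error back into the description field
    so the Functional Assistant can regenerate the broken function."""
    return {
        "name": tool_name,
        "description": (
            f"The previous implementation of {tool_name} raised an error when "
            f"executed: {error_message}. Please regenerate it with this fixed."
        ),
        "parameters": schema,  # the original schema is passed through unchanged
    }
```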
Usage
Limits
One Unit is capable of a hell of a lot. I've seen it build whole repos from the ground up.
There are stability issues where sometimes multiple tool output requests come back empty. This happens rarely, and I think it is an OpenAI error at the moment.
Demonstration of Speed and Capability of a Single Unit
PR: #88