
Implement custom formatting in response_format #145

Merged

Conversation

@sternakt (Collaborator) commented Dec 4, 2024

Why are these changes needed?

These changes let developers define custom formatting for the structured-output response messages returned by the client.

Example use:

from typing import List

from pydantic import BaseModel

import autogen
# FormatterProtocol is introduced by this PR; import it from ag2/autogen
# (see the ag2 docs for the exact import path)

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel, FormatterProtocol):
    steps: List[Step]
    final_answer: str

    def format(self) -> str:
        steps_output = "\n".join(
            f"Step {i + 1}: {step.explanation}\n  Output: {step.output}"
            for i, step in enumerate(self.steps)
        )
        return f"{steps_output}\n\nFinal Answer: {self.final_answer}"

assistant = autogen.AssistantAgent(
    name="Math_solver",
    llm_config=llm_config,
    response_format=MathReasoning,
)

The assistant will output messages in the following format:

Step 1: [Explanation of the first step]
  Output: [Result of the first step]
Step 2: [Explanation of the second step]
  Output: [Result of the second step]
...

Final Answer: [The final answer provided by the reasoning]

For example, if the assistant is computing the square root of 16, the output could look like this:

Step 1: Break 16 into 4 x 4
  Output: 4
Step 2: Verify that 4 squared is 16
  Output: Correct

Final Answer: 4

This output is generated automatically by the format method of the MathReasoning model, ensuring clear, structured communication.
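To see the formatting logic in isolation, here is a self-contained sketch that reproduces the format method using plain dataclasses in place of the Pydantic models; the dataclass stand-ins are purely illustrative, not the PR's actual types.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    explanation: str
    output: str


@dataclass
class MathReasoning:
    steps: List[Step]
    final_answer: str

    def format(self) -> str:
        # Same logic as the PR example: number each step, then append the answer
        steps_output = "\n".join(
            f"Step {i + 1}: {step.explanation}\n  Output: {step.output}"
            for i, step in enumerate(self.steps)
        )
        return f"{steps_output}\n\nFinal Answer: {self.final_answer}"


reasoning = MathReasoning(
    steps=[
        Step(explanation="Break 16 into 4 x 4", output="4"),
        Step(explanation="Verify that 4 squared is 16", output="Correct"),
    ],
    final_answer="4",
)
print(reasoning.format())
```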

Related issue number

Closes #143

Checks

@marklysze (Collaborator) commented:

Thanks @sternakt, all tests passed. Will tweak the notebook text and then merge, thanks so much for this!

@marklysze marklysze added this pull request to the merge queue Dec 5, 2024
Merged via the queue into main with commit 7736885 Dec 5, 2024
206 of 214 checks passed
@sonichi sonichi deleted the 143-feature-request-add-formatting-option-to-response_format branch December 6, 2024 02:13
@sonichi (Collaborator) commented Dec 6, 2024

@sternakt, one consideration is how we set the response_format to a Pydantic model when the config is loaded from a file (see loading OAI_CONFIG_LIST). E.g., is there a way to specify it in the config file, or do we say that this isn't supported?
I personally don't think it's critical to support this when loading from a file, but you may come up against it while working on it.

Loading the response_format from the config file is not supported at the moment, but that is a good point. Right now, when I load OAI_CONFIG_LIST, I dynamically add the response_format to the configs,

e.g.

config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-4o", "gpt-4o-mini"],
    },
)

for config in config_list:
    config["response_format"] = MathReasoning

Generating a Pydantic model from a JSON schema has been discussed before, but they decided against implementing it, if I remember correctly.

We could use something like datamodel-code-generator to generate a Pydantic model when a response_format is defined in the OAI_CONFIG_LIST.

But I think we should address this in a separate feature request.
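As one illustration of what config-file support might look like, here is a stdlib-only sketch that maps a string "response_format" entry in a JSON config list to a registered Python class. The registry, the resolve_response_formats helper, and the config shape are all hypothetical, not part of this PR.

```python
import json

# Hypothetical registry mapping config-file strings to Python classes
RESPONSE_FORMAT_REGISTRY = {}


def register_response_format(cls):
    """Register a class so config files can refer to it by name."""
    RESPONSE_FORMAT_REGISTRY[cls.__name__] = cls
    return cls


@register_response_format
class MathReasoning:  # stand-in for the real Pydantic model
    pass


def resolve_response_formats(config_json: str):
    """Replace string response_format entries with their registered classes."""
    configs = json.loads(config_json)
    for config in configs:
        name = config.get("response_format")
        if isinstance(name, str):
            config["response_format"] = RESPONSE_FORMAT_REGISTRY[name]
    return configs


configs = resolve_response_formats(
    '[{"model": "gpt-4o", "response_format": "MathReasoning"}]'
)
```

This sidesteps JSON-schema-to-model generation entirely: the config file names a model, and the application supplies the class.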

The code can be simplified by setting llm_config={"response_format": ..., "config_list": config_list}

Labels: enhancement (New feature or request)
3 participants