Python: Persist user and assistant messages in chat history (concept sample) (microsoft#7407)

### Motivation and Context

In the concept sample `chat_gpt_api_function_calling.py`, the user and
assistant messages are not persisted in the chat history, so the model
loses the earlier turns of the conversation as the user continues to chat.
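
For reference, Semantic Kernel's Python `ChatHistory` only contains what the caller explicitly appends to it. Below is a minimal sketch of the persistence calls the sample's loop was missing; the turn text is illustrative, not taken from the sample:

```python
from semantic_kernel.contents import ChatHistory

history = ChatHistory()

# Each completed turn must be appended explicitly; otherwise the next
# invocation receives a history that is missing the previous exchange.
user_input = "What can you tell me about pirates?"  # illustrative user turn
result = "Arr! Pirates be sailors of fortune..."  # illustrative reply
history.add_user_message(user_input)
history.add_assistant_message(str(result))

print(len(history.messages))  # 2 -- both sides of the turn are now kept
```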


### Description

This PR persists the user and assistant messages in the chat history for the
sample after each turn. It supersedes microsoft#6668, which had errors in the pre-commit check.


### Contribution Checklist


- [X] The code builds clean without any errors or warnings
- [X] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [X] All unit tests pass, and I have added new tests where possible
- [X] I didn't break anyone 😄
moonbox3 authored Jul 27, 2024
1 parent 68b21b8 commit 3169c8a
Showing 1 changed file with 10 additions and 2 deletions.

In `chat_gpt_api_function_calling.py`:

```diff
@@ -120,7 +120,7 @@ async def handle_streaming(
     kernel: Kernel,
     chat_function: "KernelFunction",
     arguments: KernelArguments,
-) -> None:
+) -> str | None:
     response = kernel.invoke_stream(
         chat_function,
         return_function_results=False,
@@ -129,12 +129,14 @@
 
     print("Mosscap:> ", end="")
     streamed_chunks: list[StreamingChatMessageContent] = []
+    result_content = []
     async for message in response:
         if not execution_settings.function_choice_behavior.auto_invoke_kernel_functions and isinstance(
             message[0], StreamingChatMessageContent
         ):
             streamed_chunks.append(message[0])
         else:
+            result_content.append(message[0])
             print(str(message[0]), end="")
 
     if streamed_chunks:
@@ -145,6 +147,9 @@
         print_tool_calls(streaming_chat_message)
 
     print("\n")
+    if result_content:
+        return "".join([str(content) for content in result_content])
+    return None
 
 
 async def chat() -> bool:
@@ -164,7 +169,7 @@ async def chat() -> bool:
     arguments["chat_history"] = history
 
     if stream:
-        await handle_streaming(kernel, chat_function, arguments=arguments)
+        result = await handle_streaming(kernel, chat_function, arguments=arguments)
     else:
         result = await kernel.invoke(chat_function, arguments=arguments)
 
@@ -177,6 +182,9 @@
         return True
 
     print(f"Mosscap:> {result}")
 
+    history.add_user_message(user_input)
+    history.add_assistant_message(str(result))
     return True
```
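
Taken together, the fixed flow behaves roughly like the sketch below. `chat_turn` is a hypothetical helper name used only for illustration; `handle_streaming` is the sample's own function from the diff above, and the kernel, chat function, and execution-settings setup from the sample are assumed to exist:

```python
from semantic_kernel import Kernel
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import KernelArguments, KernelFunction


async def chat_turn(
    kernel: Kernel,
    chat_function: KernelFunction,
    history: ChatHistory,
    user_input: str,
    stream: bool,
) -> None:
    """One turn of the sample's chat loop, with the persistence fix applied."""
    arguments = KernelArguments(user_input=user_input, chat_history=history)

    if stream:
        # handle_streaming now returns the joined chunk text (or None), so
        # the streamed reply can be persisted just like the non-streamed one.
        result = await handle_streaming(kernel, chat_function, arguments=arguments)
    else:
        result = await kernel.invoke(chat_function, arguments=arguments)
        print(f"Mosscap:> {result}")

    # The fix: record both sides of the turn before the next iteration,
    # so the model keeps the conversational context.
    history.add_user_message(user_input)
    history.add_assistant_message(str(result))
```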
