forked from microsoft/semantic-kernel
Python: Adjust tool limit per request (microsoft#9894)
### Motivation and Context

We had set incorrect limits on the number of tools that can be provided to the AI services per request.

### Description

1. Remove the client-side limits; if a service enforces a limit, the request now fails on the service side instead.
2. Add unit tests for the prompt execution settings of services that previously lacked such tests.
3. Fix the test coverage workflow.

### Contribution Checklist

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄

Co-authored-by: Evan Mattson <[email protected]>
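To make the motivation concrete, here is a hypothetical before/after sketch of the kind of client-side cap this PR removes; `MAX_TOOLS` and the `build_request*` helpers are illustrative names, not Semantic Kernel APIs:

```python
# Illustrative sketch only: the kind of client-side tool cap this PR removes.
# MAX_TOOLS and the build_request* helpers are hypothetical, not Semantic
# Kernel APIs.

MAX_TOOLS = 64  # hypothetical hard-coded client-side limit


def build_request_with_cap(tools: list[dict]) -> dict:
    """Old behavior: reject requests that exceed a client-side tool limit."""
    if len(tools) > MAX_TOOLS:
        raise ValueError(f"too many tools: {len(tools)} > {MAX_TOOLS}")
    return {"tools": tools}


def build_request(tools: list[dict]) -> dict:
    """New behavior: pass the tools through unchanged; if the service
    enforces a limit, the request fails on the service side instead."""
    return {"tools": tools}
```

With the cap removed, the client no longer second-guesses the service's actual limit, which varies by provider and can change over time.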
1 parent ceae00c · commit 31ec20e
Showing 16 changed files with 644 additions and 84 deletions.
Python Test Coverage workflow:

```diff
@@ -1,55 +1,53 @@
 name: Python Test Coverage

 on:
-  pull_request_target:
+  pull_request:
     branches: ["main", "feature*"]
     paths:
-      - "python/**"
-  workflow_run:
-    workflows: ["Python Unit Tests"]
-    types:
-      - in_progress
+      - "python/semantic_kernel/**"
+      - "python/tests/unit/**"
+
+env:
+  # Configure a constant location for the uv cache
+  UV_CACHE_DIR: /tmp/.uv-cache

 permissions:
   contents: write
   checks: write
   pull-requests: write

 jobs:
   python-tests-coverage:
     runs-on: ubuntu-latest
-    continue-on-error: true
-    permissions:
-      pull-requests: write
-      contents: read
-      actions: read
+    continue-on-error: false
+    defaults:
+      run:
+        working-directory: python
+    env:
+      UV_PYTHON: "3.10"
     steps:
-      - name: Wait for unit tests to succeed
-        uses: lewagon/[email protected]
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
-          check-name: 'Python Test Coverage'
-          repo-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
-          wait-interval: 90
-          allowed-conclusions: success
       - uses: actions/checkout@v4
-      - name: Setup filename variables
-        run: echo "FILE_ID=${{ github.event.number }}" >> $GITHUB_ENV
-      - name: Download Files
-        uses: actions/download-artifact@v4
+      - name: Set up uv
+        uses: astral-sh/setup-uv@v4
         with:
-          github-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
-          run-id: ${{ github.event.workflow_run.id }}
-          path: python/
-          merge-multiple: true
-      - name: Display structure of downloaded files
-        run: ls python/
+          version: "0.5.x"
+          enable-cache: true
+          cache-suffix: ${{ runner.os }}-${{ env.UV_PYTHON }}
+      - name: Install the project
+        run: uv sync --all-extras --dev
+      - name: Test with pytest
+        run: uv run --frozen pytest -q --junitxml=pytest.xml --cov=semantic_kernel --cov-report=term-missing:skip-covered ./tests/unit | tee python-coverage.txt
       - name: Pytest coverage comment
         id: coverageComment
         uses: MishaKav/pytest-coverage-comment@main
-        continue-on-error: true
+        continue-on-error: false
         with:
-          github-token: ${{ secrets.GH_ACTIONS_PR_WRITE }}
-          pytest-coverage-path: python-coverage.txt
+          pytest-coverage-path: python/python-coverage.txt
+          coverage-path-prefix: "python/"
           title: "Python Test Coverage Report"
           badge-title: "Python Test Coverage"
           junitxml-title: "Python Unit Test Overview"
-          junitxml-path: pytest.xml
+          junitxml-path: python/pytest.xml
           default-branch: "main"
           unique-id-for-comment: python-test-coverage
+          report-only-changed-files: true
```
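The "Test with pytest" step pipes pytest's output through `tee`, so the coverage text both appears in the job log and lands in a file for the comment action to pick up. The pattern in isolation, with placeholder text standing in for the real pytest output:

```shell
# Stream output to the log while also capturing it to a file
# (placeholder text stands in for the real pytest/coverage output).
echo "TOTAL    1234    95%" | tee python-coverage.txt
```

Without `tee`, redirecting straight to the file would hide the test output from the live job log.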
143 changes (143 additions & 0 deletions) in ...n/tests/unit/connectors/ai/azure_ai_inference/test_azure_ai_inference_request_settings.py
New file (`@@ -0,0 +1,143 @@`):

```python
# Copyright (c) Microsoft. All rights reserved.

from semantic_kernel.connectors.ai.azure_ai_inference import (
    AzureAIInferenceChatPromptExecutionSettings,
    AzureAIInferenceEmbeddingPromptExecutionSettings,
    AzureAIInferencePromptExecutionSettings,
)
from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings


def test_default_azure_ai_inference_prompt_execution_settings():
    settings = AzureAIInferencePromptExecutionSettings()

    assert settings.frequency_penalty is None
    assert settings.max_tokens is None
    assert settings.presence_penalty is None
    assert settings.seed is None
    assert settings.stop is None
    assert settings.temperature is None
    assert settings.top_p is None
    assert settings.extra_parameters is None


def test_custom_azure_ai_inference_prompt_execution_settings():
    settings = AzureAIInferencePromptExecutionSettings(
        frequency_penalty=0.5,
        max_tokens=128,
        presence_penalty=0.5,
        seed=1,
        stop="world",
        temperature=0.5,
        top_p=0.5,
        extra_parameters={"key": "value"},
    )

    assert settings.frequency_penalty == 0.5
    assert settings.max_tokens == 128
    assert settings.presence_penalty == 0.5
    assert settings.seed == 1
    assert settings.stop == "world"
    assert settings.temperature == 0.5
    assert settings.top_p == 0.5
    assert settings.extra_parameters == {"key": "value"}


def test_azure_ai_inference_prompt_execution_settings_from_default_completion_config():
    settings = PromptExecutionSettings(service_id="test_service")
    chat_settings = AzureAIInferenceChatPromptExecutionSettings.from_prompt_execution_settings(settings)

    assert chat_settings.service_id == "test_service"
    assert chat_settings.frequency_penalty is None
    assert chat_settings.max_tokens is None
    assert chat_settings.presence_penalty is None
    assert chat_settings.seed is None
    assert chat_settings.stop is None
    assert chat_settings.temperature is None
    assert chat_settings.top_p is None
    assert chat_settings.extra_parameters is None


def test_azure_ai_inference_prompt_execution_settings_from_openai_prompt_execution_settings():
    chat_settings = AzureAIInferenceChatPromptExecutionSettings(service_id="test_service", temperature=1.0)
    new_settings = AzureAIInferencePromptExecutionSettings(service_id="test_2", temperature=0.0)
    chat_settings.update_from_prompt_execution_settings(new_settings)

    assert chat_settings.service_id == "test_2"
    assert chat_settings.temperature == 0.0


def test_azure_ai_inference_prompt_execution_settings_from_custom_completion_config():
    settings = PromptExecutionSettings(
        service_id="test_service",
        extension_data={
            "frequency_penalty": 0.5,
            "max_tokens": 128,
            "presence_penalty": 0.5,
            "seed": 1,
            "stop": "world",
            "temperature": 0.5,
            "top_p": 0.5,
            "extra_parameters": {"key": "value"},
        },
    )
    chat_settings = AzureAIInferenceChatPromptExecutionSettings.from_prompt_execution_settings(settings)

    assert chat_settings.service_id == "test_service"
    assert chat_settings.frequency_penalty == 0.5
    assert chat_settings.max_tokens == 128
    assert chat_settings.presence_penalty == 0.5
    assert chat_settings.seed == 1
    assert chat_settings.stop == "world"
    assert chat_settings.temperature == 0.5
    assert chat_settings.top_p == 0.5
    assert chat_settings.extra_parameters == {"key": "value"}


def test_azure_ai_inference_chat_prompt_execution_settings_from_custom_completion_config_with_functions():
    settings = PromptExecutionSettings(
        service_id="test_service",
        extension_data={
            "tools": [{"function": {}}],
        },
    )
    chat_settings = AzureAIInferenceChatPromptExecutionSettings.from_prompt_execution_settings(settings)

    assert chat_settings.tools == [{"function": {}}]


def test_create_options():
    settings = AzureAIInferenceChatPromptExecutionSettings(
        service_id="test_service",
        extension_data={
            "frequency_penalty": 0.5,
            "max_tokens": 128,
            "presence_penalty": 0.5,
            "seed": 1,
            "stop": "world",
            "temperature": 0.5,
            "top_p": 0.5,
            "extra_parameters": {"key": "value"},
        },
    )
    options = settings.prepare_settings_dict()

    assert options["frequency_penalty"] == 0.5
    assert options["max_tokens"] == 128
    assert options["presence_penalty"] == 0.5
    assert options["seed"] == 1
    assert options["stop"] == "world"
    assert options["temperature"] == 0.5
    assert options["top_p"] == 0.5
    assert options["extra_parameters"] == {"key": "value"}
    assert "tools" not in options
    assert "tool_config" not in options


def test_default_azure_ai_inference_embedding_prompt_execution_settings():
    settings = AzureAIInferenceEmbeddingPromptExecutionSettings()

    assert settings.dimensions is None
    assert settings.encoding_format is None
    assert settings.input_type is None
    assert settings.extra_parameters is None
```