vizroAI UI tests
l0uden committed Nov 18, 2024
1 parent d74645f commit 2af4dce
Showing 15 changed files with 982 additions and 4 deletions.
87 changes: 87 additions & 0 deletions .github/workflows/test-vizro-ai-ui.yml
@@ -0,0 +1,87 @@
name: Integration tests for VizroAI UI

defaults:
  run:
    working-directory: vizro-ai

on:
  push:
    branches: [main]
  pull_request:
    branches:
      - main
    paths:
      - "vizro-ai/**"
      - "!vizro-ai/docs/**"

env:
  PYTHONUNBUFFERED: 1
  FORCE_COLOR: 1

jobs:
  test-vizro-ai-ui-fork:
    if: ${{ github.event.pull_request.head.repo.fork }}
    name: test-vizro-ai-ui
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Passed fork step
        run: echo "Success!"

  test-vizro-ai-ui:
    if: ${{ ! github.event.pull_request.head.repo.fork }}
    name: test-vizro-ai-ui
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install Hatch
        run: pip install hatch

      - name: Show dependency tree
        run: hatch run tests:pip tree

      - name: Run vizroAI UI tests
        run: |
          python examples/dashboard_ui/app.py &
          tests/vizro_ai_ui/wait-for-it.sh 127.0.0.1:8050 -t 30
          hatch run tests:test-vizro-ai-ui
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}

      - name: Copy failed screenshots
        if: failure()
        run: |
          mkdir /home/runner/work/vizro/vizro/vizro-ai/failed_screenshots/
          cp tests*.png failed_screenshots

      - name: Archive screenshot artifacts
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: Failed screenshots
          path: |
            /home/runner/work/vizro/vizro/vizro-ai/failed_screenshots/*.png

      - name: Send custom JSON data to Slack
        id: slack
        uses: slackapi/slack-github-action@v1
        if: failure()
        with:
          payload: |
            {
              "text": "VizroAI UI tests build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
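
For local debugging outside CI, the wait-for-it.sh readiness gate in the test step above can be approximated in Python (a minimal sketch, assuming the dashboard serves on 127.0.0.1:8050 as in the workflow; the function name is illustrative, not part of the commit):

import socket
import time


def wait_for_port(host="127.0.0.1", port=8050, timeout=30):
    """Polls a TCP port until it accepts connections, mirroring wait-for-it.sh."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # Succeeds as soon as the Dash app is accepting connections.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    raise TimeoutError(f"{host}:{port} did not open within {timeout}s")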
4 changes: 0 additions & 4 deletions .github/workflows/vizro-qa-tests-trigger.yml
@@ -20,7 +20,6 @@ jobs:
       matrix:
         include:
           - label: integration tests
-          - label: vizro-ai ui tests
     steps:
       - name: Passed fork step
         run: echo "Success!"
@@ -34,7 +33,6 @@
       matrix:
         include:
           - label: integration tests
-          - label: vizro-ai ui tests
     steps:
       - uses: actions/checkout@v4
       - name: Tests trigger
@@ -44,8 +42,6 @@
           if [ "${{ matrix.label }}" == "integration tests" ]; then
             export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_INTEGRATION_TESTS_WORKFLOW }}
-          elif [ "${{ matrix.label }}" == "vizro-ai ui tests" ]; then
-            export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_VIZRO_AI_UI_TESTS_WORKFLOW }}
           fi
           export INPUT_GITHUB_TOKEN=${{ secrets.VIZRO_SVC_PAT }}
           export INPUT_REF=main # because we should send existent branch to dispatch workflow
Empty file added vizro-ai/__init__.py
Empty file.
@@ -0,0 +1,48 @@
<!--
A new scriv changelog fragment.
Uncomment the section that is right (remove the HTML comment wrapper).
-->

<!--
### Highlights ✨
- A bullet item for the Highlights ✨ category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Removed
- A bullet item for the Removed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Added
- A bullet item for the Added category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Changed
- A bullet item for the Changed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Deprecated
- A bullet item for the Deprecated category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Fixed
- A bullet item for the Fixed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Security
- A bullet item for the Security category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
4 changes: 4 additions & 0 deletions vizro-ai/hatch.toml
@@ -50,6 +50,7 @@ pypath = "hatch run python -c 'import sys; print(sys.executable)'"
 test = "pytest tests {args}"
 test-integration = "pytest -vs --reruns 1 tests/integration --headless {args}"
 test-score = "pytest -vs --reruns 1 tests/score --headless {args}"
+test-vizro-ai-ui = "pytest -vs --reruns 1 tests/vizro_ai_ui --headless {args}"
 test-unit = "pytest tests/unit {args}"
 test-unit-coverage = [
     "coverage run -m pytest tests/unit {args}",
@@ -81,5 +82,8 @@ serve = "mkdocs serve --open"
 extra-dependencies = ["pydantic==1.10.16"]
 python = "3.9"
 
+[envs.tests]
+python = "3.12"
+
 [version]
 path = "src/vizro_ai/__init__.py"
Empty file added vizro-ai/tests/__init__.py
Empty file.
Empty file.
30 changes: 30 additions & 0 deletions vizro-ai/tests/helpers/checkers.py
@@ -0,0 +1,30 @@
from hamcrest import any_of, assert_that, contains_string
from tests.helpers.constants import (
    INVALID_PROP_ERROR,
    REACT_NOT_RECOGNIZE_ERROR,
    REACT_RENDERING_ERROR,
    READPIXELS_WARNING,
    SCROLL_ZOOM_ERROR,
    UNMOUNT_COMPONENTS_ERROR,
    WEBGL_WARNING,
    WILLMOUNT_RENAMED_WARNING,
    WILLRECEIVEPROPS_RENAMED_WARNING,
)


def browser_console_warnings_checker(log_level, log_levels):
    assert_that(
        log_level["message"],
        any_of(
            contains_string(INVALID_PROP_ERROR),
            contains_string(REACT_NOT_RECOGNIZE_ERROR),
            contains_string(SCROLL_ZOOM_ERROR),
            contains_string(REACT_RENDERING_ERROR),
            contains_string(UNMOUNT_COMPONENTS_ERROR),
            contains_string(WILLMOUNT_RENAMED_WARNING),
            contains_string(WILLRECEIVEPROPS_RENAMED_WARNING),
            contains_string(READPIXELS_WARNING),
            contains_string(WEBGL_WARNING),
        ),
        reason=f"Error output: {log_levels}",
    )
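
For orientation, a minimal sketch of how this checker is meant to be driven from a teardown hook (the sample log entry is hypothetical, not part of the commit; real entries come from driver.get_log("browser")):

from tests.helpers.checkers import browser_console_warnings_checker

# One captured Chrome log entry, shaped like Selenium's browser-log dicts.
sample_logs = [
    {"level": "WARNING", "message": "GPU stall due to ReadPixels", "source": "rendering"},
]
for entry in sample_logs:
    # Passes because the message matches READPIXELS_WARNING; any unexpected
    # message would fail the assertion and surface the full log list.
    browser_console_warnings_checker(entry, sample_logs)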
39 changes: 39 additions & 0 deletions vizro-ai/tests/helpers/common.py
@@ -0,0 +1,39 @@
import time

from selenium.common.exceptions import (
    StaleElementReferenceException,
)
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait


def wait_for(condition_function, *args):
    """Waits up to 10 seconds for the given condition function to return True."""
    start_time = time.time()
    while time.time() < start_time + 10:
        if condition_function(*args):
            return True
        else:
            time.sleep(0.1)
    raise TimeoutError(f"Timeout waiting for {condition_function.__name__}")


def webdriver_click_waiter(browserdriver, xpath):
    WebDriverWait(
        browserdriver, 10, ignored_exceptions=StaleElementReferenceException
    ).until(expected_conditions.element_to_be_clickable((By.XPATH, xpath))).click()


def webdriver_waiter(browserdriver, xpath):
    elem = WebDriverWait(
        browserdriver, 10, ignored_exceptions=StaleElementReferenceException
    ).until(expected_conditions.presence_of_element_located((By.XPATH, xpath)))
    return elem


def webdriver_waiter_css(browserdriver, selector):
    elem = WebDriverWait(
        browserdriver, 30, ignored_exceptions=StaleElementReferenceException
    ).until(expected_conditions.presence_of_element_located((By.CSS_SELECTOR, selector)))
    return elem
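
To illustrate the intended call pattern, a short hedged example (the URL matches the workflow's app, but the XPaths and element IDs are hypothetical; a live Chrome session is assumed):

from selenium import webdriver

from tests.helpers.common import wait_for, webdriver_click_waiter, webdriver_waiter

driver = webdriver.Chrome()
driver.get("http://127.0.0.1:8050/")

# Wait until the page title is populated, then click a (hypothetical) button
# and wait for a (hypothetical) results container to appear.
wait_for(lambda: bool(driver.title))
webdriver_click_waiter(driver, "//button[@id='trigger-button']")
element = webdriver_waiter(driver, "//div[@id='results']")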
9 changes: 9 additions & 0 deletions vizro-ai/tests/helpers/constants.py
@@ -0,0 +1,9 @@
INVALID_PROP_ERROR = "Invalid prop `persisted_props[0]` of value `on` supplied to `t`"
REACT_NOT_RECOGNIZE_ERROR = "React does not recognize the `%s` prop on a DOM element"
SCROLL_ZOOM_ERROR = "_scrollZoom"
REACT_RENDERING_ERROR = "unstable_flushDiscreteUpdates: Cannot flush updates when React is already rendering"
UNMOUNT_COMPONENTS_ERROR = "React state update on an unmounted component"
WILLMOUNT_RENAMED_WARNING = "componentWillMount has been renamed"
WILLRECEIVEPROPS_RENAMED_WARNING = "componentWillReceiveProps has been renamed"
READPIXELS_WARNING = "GPU stall due to ReadPixels"
WEBGL_WARNING = "WebGL" # https://issues.chromium.org/issues/40277080
Empty file.
60 changes: 60 additions & 0 deletions vizro-ai/tests/vizro_ai_ui/conftest.py
@@ -0,0 +1,60 @@
from datetime import datetime

import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from tests.helpers.checkers import browser_console_warnings_checker


@pytest.fixture()
def chromedriver(request):
    """Fixture for starting chromedriver."""
    options = Options()
    # options.add_argument("--headless")
    options.add_argument("--window-size=1920,1080")
    options.add_argument("--disable-search-engine-choice-screen")
    driver = webdriver.Chrome(options=options)
    driver.get(f"http://127.0.0.1:{request.param.get('port')}/")
    return driver


@pytest.fixture(autouse=True)
def teardown_method(chromedriver):
    """Fixture checks log errors and quits the driver after each test."""
    yield
    log_levels = [
        level
        for level in chromedriver.get_log("browser")
        if level["level"] in ("SEVERE", "WARNING")
    ]
    if log_levels:
        for log_level in log_levels:
            browser_console_warnings_checker(log_level, log_levels)
    chromedriver.quit()


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture(scope="function", autouse=True)
def test_failed_check(request):
    yield
    if request.node.rep_setup.failed:
        return "setting up a test failed!", request.node.nodeid
    elif request.node.rep_setup.passed:
        if request.node.rep_call.failed:
            driver = request.node.funcargs["chromedriver"]
            take_screenshot(driver, request.node.nodeid)
            return "executing test failed", request.node.nodeid


def take_screenshot(driver, nodeid):
    file_name = f'{nodeid}_{datetime.today().strftime("%Y-%m-%d_%H-%M")}.png'.replace(
        "/", "_"
    ).replace("::", "__")
    driver.save_screenshot(file_name)
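
Because the chromedriver fixture reads the port from request.param, tests are expected to parametrize it indirectly. A minimal sketch (the test name and title assertion are hypothetical; port 8050 matches the app started in the workflow):

import pytest


@pytest.mark.parametrize("chromedriver", [{"port": 8050}], indirect=True)
def test_dashboard_loads(chromedriver):
    # The fixture has already navigated to http://127.0.0.1:8050/.
    assert chromedriver.title  # page rendered with a non-empty title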