[QA] Component library tests (#872)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
l0uden and pre-commit-ci[bot] authored Nov 26, 2024
1 parent a8c342a commit 5f75529
Showing 10 changed files with 294 additions and 13 deletions.
@@ -0,0 +1,31 @@
name: "Create artifacts and slack notifications"
description: "Creates failed artifacts with screenshots and sends slack notifications if build failed"

runs:
  using: "composite"
  steps:
    - name: Copy failed screenshots
      shell: bash
      run: |
        mkdir /home/runner/work/vizro/vizro/vizro-core/failed_screenshots/
        cd /home/runner/work/vizro/vizro/vizro-core/
        cp *.png failed_screenshots
    - name: Archive production artifacts
      uses: actions/upload-artifact@v4
      with:
        name: Failed screenshots
        path: |
          /home/runner/work/vizro/vizro/vizro-core/failed_screenshots/*.png
    - name: Send custom JSON data to Slack
      id: slack
      uses: slackapi/[email protected]
      with:
        payload: |
          {
            "text": "${{ env.TESTS_NAME }} build result: ${{ job.status }}\nBranch: ${{ github.head_ref }}\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
          }
      env:
        SLACK_WEBHOOK_URL: ${{ env.SLACK_WEBHOOK_URL }}
        SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
45 changes: 45 additions & 0 deletions .github/workflows/test-e2e-component-library-vizro-core.yml
@@ -0,0 +1,45 @@
name: e2e tests of component library for Vizro

defaults:
  run:
    working-directory: vizro-core

on:
  push:
    branches: [main]
  pull_request:
    branches:
      - main

env:
  PYTHONUNBUFFERED: 1
  FORCE_COLOR: 1
  PYTHON_VERSION: "3.12"

jobs:
  test-e2e-component-library-vizro-core:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install Hatch
        run: pip install hatch

      - name: Show dependency tree
        run: hatch run pip tree

      - name: Run e2e component library tests
        run: hatch run test-e2e-component-library

      - name: Create artifacts and slack notifications
        if: failure()
        uses: ./.github/actions/failed-artifacts-and-slack-notifications
        env:
          TESTS_NAME: Vizro e2e component library tests
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
4 changes: 0 additions & 4 deletions .github/workflows/vizro-qa-tests-trigger.yml
@@ -21,7 +21,6 @@ jobs:
        include:
          - label: integration tests
          - label: vizro-ai ui tests
          - label: component library tests
    steps:
      - name: Passed fork step
        run: echo "Success!"
@@ -36,7 +35,6 @@ jobs:
        include:
          - label: integration tests
          - label: vizro-ai ui tests
          - label: component library tests
    steps:
      - uses: actions/checkout@v4
      - name: Tests trigger
@@ -48,8 +46,6 @@
            export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_INTEGRATION_TESTS_WORKFLOW }}
          elif [ "${{ matrix.label }}" == "vizro-ai ui tests" ]; then
            export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_VIZRO_AI_UI_TESTS_WORKFLOW }}
          elif [ "${{ matrix.label }}" == "component library tests" ]; then
            export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_VIZRO_COMPONENT_LIBRARY_TESTS_WORKFLOW }}
          fi
          export INPUT_GITHUB_TOKEN=${{ secrets.VIZRO_SVC_PAT }}
          export INPUT_REF=main # because we should send existent branch to dispatch workflow
@@ -0,0 +1,48 @@
<!--
A new scriv changelog fragment.
Uncomment the section that is right (remove the HTML comment wrapper).
-->

<!--
### Highlights ✨
- A bullet item for the Highlights ✨ category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Removed
- A bullet item for the Removed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Added
- A bullet item for the Added category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Changed
- A bullet item for the Changed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Deprecated
- A bullet item for the Deprecated category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Fixed
- A bullet item for the Fixed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
<!--
### Security
- A bullet item for the Security category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX. ([#1](https://github.com/mckinsey/vizro/pull/1))
-->
6 changes: 5 additions & 1 deletion vizro-core/hatch.toml
@@ -30,7 +30,10 @@ dependencies = [
    "openpyxl",
    "jupyter",
    "pre-commit",
    "PyGithub"
    "PyGithub",
    "imutils",
    "opencv-python",
    "pyhamcrest"
]
env-vars = {UV_PRERELEASE = "allow"}
installer = "uv"
@@ -56,6 +59,7 @@ schema-check = ["python schemas/generate.py --check"]
# fix this, but we don't actually use `hatch run test` anywhere right now.
# See comments added in https://github.com/mckinsey/vizro/pull/444.
test = "pytest tests --headless {args}"
test-e2e-component-library = "pytest tests/e2e/test_component_library.py --headless {args}"
test-integration = "pytest tests/integration --headless {args}"
test-js = "./tools/run_jest.sh {args}"
test-unit = "pytest tests/unit {args}"
4 changes: 3 additions & 1 deletion vizro-core/pyproject.toml
@@ -79,7 +79,9 @@ filterwarnings = [
    # Ignore warning when using the fig.layout.title inside examples:
    "ignore:Using the `title` argument in your Plotly chart function may cause misalignment:UserWarning",
    # Ignore warning for Pydantic v1 API and Python 3.13:
    "ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning"
    "ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning",
    # Ignore deprecation warning until this is solved: https://github.com/plotly/dash/issues/2590:
    "ignore:HTTPResponse.getheader():DeprecationWarning"
]
norecursedirs = ["tests/tests_utils", "tests/js"]
pythonpath = ["tests/tests_utils"]
[Binary file not displayed]
89 changes: 89 additions & 0 deletions vizro-core/tests/e2e/test_component_library.py
@@ -0,0 +1,89 @@
import dash_bootstrap_components as dbc
import pandas as pd
from dash import Dash, html
from e2e_asserts import assert_image_equal, make_screenshot_and_paths

from vizro.figures.library import kpi_card, kpi_card_reference

df_kpi = pd.DataFrame(
    {
        "Actual": [100, 200, 700],
        "Reference": [100, 300, 500],
        "Category": ["A", "B", "C"],
    }
)

example_cards = [
    kpi_card(data_frame=df_kpi, value_column="Actual", title="KPI with value"),
    kpi_card(
        data_frame=df_kpi,
        value_column="Actual",
        title="KPI with aggregation",
        agg_func="median",
    ),
    kpi_card(
        data_frame=df_kpi,
        value_column="Actual",
        title="KPI formatted",
        value_format="${value:.2f}",
    ),
    kpi_card(
        data_frame=df_kpi,
        value_column="Actual",
        title="KPI with icon",
        icon="shopping_cart",
    ),
]

example_reference_cards = [
    kpi_card_reference(
        data_frame=df_kpi,
        value_column="Actual",
        reference_column="Reference",
        title="KPI ref. (pos)",
    ),
    kpi_card_reference(
        data_frame=df_kpi,
        value_column="Actual",
        reference_column="Reference",
        agg_func="median",
        title="KPI ref. (neg)",
    ),
    kpi_card_reference(
        data_frame=df_kpi,
        value_column="Actual",
        reference_column="Reference",
        title="KPI ref. formatted",
        value_format="{value}€",
        reference_format="{delta}€ vs. last year ({reference}€)",
    ),
    kpi_card_reference(
        data_frame=df_kpi,
        value_column="Actual",
        reference_column="Reference",
        title="KPI ref. with icon",
        icon="shopping_cart",
    ),
]


def test_kpi_card_component_library(dash_duo, request):
    app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
    app.layout = dbc.Container(
        [
            html.H1(children="KPI Cards"),
            dbc.Stack(
                children=[
                    dbc.Row([dbc.Col(kpi_card) for kpi_card in example_cards]),
                    dbc.Row([dbc.Col(kpi_card) for kpi_card in example_reference_cards]),
                ],
                gap=4,
            ),
        ]
    )
    dash_duo.start_server(app)
    dash_duo.wait_for_page(timeout=20)
    dash_duo.wait_for_element("div[class='card-kpi card']")
    result_image_path, expected_image_path = make_screenshot_and_paths(dash_duo.driver, request.node.name)
    assert_image_equal(result_image_path, expected_image_path)
    assert dash_duo.get_logs() == [], "browser console should contain no error"
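As an aside (not part of the diff), a small sketch of the screenshot-naming convention this test relies on: `make_screenshot_and_paths` (defined in e2e_asserts.py further down) writes the current run's screenshot next to the test and looks up the expected image under tests/e2e/screenshots/ with "test" replaced by "main" in the name. The values below are derived directly from the test above.

# Sketch only: how the result and expected screenshot paths are derived for this test.
request_node_name = "test_kpi_card_component_library"
result_image_path = f"{request_node_name}_branch.png"
expected_image_path = f"tests/e2e/screenshots/{request_node_name.replace('test', 'main')}.png"
print(result_image_path)    # test_kpi_card_component_library_branch.png
print(expected_image_path)  # tests/e2e/screenshots/main_kpi_card_component_library.png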
11 changes: 4 additions & 7 deletions vizro-core/tests/integration/test_examples.py
@@ -1,4 +1,3 @@
# ruff: noqa: F403, F405
import os
import runpy
from pathlib import Path
@@ -40,17 +39,15 @@ def dashboard(request, monkeypatch):
examples_path = Path(__file__).parents[2] / "examples"


# Ignore deprecation warning until this is solved: https://github.com/plotly/dash/issues/2590
# The `features` examples do add_type, which ideally we would clean up afterwards to restore vizro.models to
# its previous state. Since we don't currently do this, `hatch run test` fails.
# This is difficult to fix fully by un-importing vizro.models though, since we use `import vizro.models as vm` - see
# https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module.
@pytest.mark.filterwarnings("ignore:HTTPResponse.getheader():DeprecationWarning")
# Ignore as it doesn't affect the test run
@pytest.mark.filterwarnings("ignore::pytest.PytestUnhandledThreadExceptionWarning")
@pytest.mark.filterwarnings("ignore:unclosed file:ResourceWarning")
# Ignore for lower bounds because of plotly==5.12.0
@pytest.mark.filterwarnings("ignore:The behavior of DatetimeProperties.to_pydatetime is deprecated:FutureWarning")
# The `features` examples do add_type, which ideally we would clean up afterwards to restore vizro.models to
# its previous state. Since we don't currently do this, `hatch run test` fails.
# This is difficult to fix fully by un-importing vizro.models though, since we use `import vizro.models as vm` - see
# https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module.
@pytest.mark.parametrize(
"example_path, version",
[
69 changes: 69 additions & 0 deletions vizro-core/tests/tests_utils/e2e_asserts.py
@@ -0,0 +1,69 @@
import shutil
from pathlib import Path

import cv2
import imutils
from hamcrest import assert_that, equal_to


def _compare_images(expected_image, result_image):
    """Asserts that two images are identical in every colour channel."""
    # Subtract two images
    difference = cv2.subtract(expected_image, result_image)
    # Splitting image into separate channels
    blue, green, red = cv2.split(difference)
    # Counting non-zero pixels and comparing them to zero
    assert_that(cv2.countNonZero(blue), equal_to(0), reason="Blue channel is different")
    assert_that(cv2.countNonZero(green), equal_to(0), reason="Green channel is different")
    assert_that(cv2.countNonZero(red), equal_to(0), reason="Red channel is different")


def _create_image_difference(expected_image, result_image):
    """Creates new image with diff of images comparison."""
    # Calculate the difference between the two images
    diff = cv2.absdiff(expected_image, result_image)
    # Convert image to grayscale
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    for i in range(0, 3):
        # Dilation of the image
        dilated = cv2.dilate(gray.copy(), None, iterations=i + 1)
        # Apply threshold to the dilated image
        (t_var, thresh) = cv2.threshold(dilated, 3, 255, cv2.THRESH_BINARY)
        # Calculate difference contours for the image
        cnts = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        for contour in cnts:
            # Calculate bounding rectangles around detected contour
            (x, y, width, height) = cv2.boundingRect(contour)
            # Draw red rectangle around difference area
            cv2.rectangle(result_image, (x, y), (x + width, y + height), (0, 0, 255), 2)
    return result_image


def make_screenshot_and_paths(browserdriver, request_node_name):
    """Creates image paths and makes screenshot during the test run."""
    result_image_path = f"{request_node_name}_branch.png"
    expected_image_path = f"tests/e2e/screenshots/{request_node_name.replace('test', 'main')}.png"
    browserdriver.save_screenshot(result_image_path)
    return result_image_path, expected_image_path


def assert_image_equal(result_image_path, expected_image_path):
    """Comparison logic and diff files creation."""
    expected_image = cv2.imread(expected_image_path)
    expected_image_name = Path(expected_image_path).name
    result_image = cv2.imread(result_image_path)
    try:
        _compare_images(expected_image, result_image)
        # Deleting created branch image to leave only failed for github artifacts
        Path(result_image_path).unlink()
    except AssertionError as exc:
        # Copy created branch image to the one with the name from main for easier replacement in the repo
        shutil.copy(result_image_path, expected_image_name)
        diff = _create_image_difference(expected_image=expected_image, result_image=result_image)
        # Writing image with differences to a new file
        cv2.imwrite(f"{result_image_path}_difference_from_main.png", diff)
        raise AssertionError("pictures are not the same") from exc
    except cv2.error as exc:
        shutil.copy(result_image_path, expected_image_name)
        raise cv2.error("pictures have different sizes") from exc

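For context, a minimal standalone sketch (not part of this commit) of the channel-subtraction check that `_compare_images` performs; it assumes numpy and opencv-python are available, which the hatch environment above now provides via the opencv-python dependency. Two screenshots count as equal only when every colour channel of their difference contains no non-zero pixels.

# Minimal illustration of the pixel comparison used in _compare_images above.
import cv2
import numpy as np

# Two tiny synthetic "screenshots": identical except for one darker pixel in `result`.
expected = np.full((4, 4, 3), 200, dtype=np.uint8)
result = expected.copy()
result[1, 2] = (50, 50, 50)

# cv2.subtract saturates at zero, so it flags pixels where `expected` is brighter than `result`.
blue, green, red = cv2.split(cv2.subtract(expected, result))
print(cv2.countNonZero(blue), cv2.countNonZero(green), cv2.countNonZero(red))
# -> 1 1 1 here; identical screenshots would print 0 0 0.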