
Support live server tests with Selenium #2077

Open · wants to merge 33 commits into base: main

Conversation

hansegucker (Collaborator) commented:
Close #1854

@niklasmohrin (Member) left a comment:

Very cool, I feel that there are some parts that could be cleaned up or need some more explanation otherwise:

Review threads on: deployment/provision_vagrant_vm.sh, evap/evaluation/tests/tools.py (×3)
@niklasmohrin (Member) commented:
The Manager group was not present, but only when all tests are run. We found that Django first runs django.test.TestCase instances, then django.test.TransactionTestCase, and finally unittest.TestCase. After the first group finishes, the database is cleared. When only the live server test is run, the database is still populated when the test executes.

Link dump if we try to debug this again:

We concluded that we want to migrate an empty database, dump it, and use that dump as a fixture in the live server test - at least for now, to continue working on this.
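A minimal sketch of that plan (the fixture file name and the dumpdata invocation are assumptions for illustration, not part of this PR):

```python
# Hypothetical sketch: create the fixture once from a freshly migrated,
# otherwise empty database (the commands are Django's, the file name is assumed):
#
#   ./manage.py migrate
#   ./manage.py dumpdata auth.Group --indent 2 > evap/evaluation/fixtures/test_groups.json
#
# The live server test can then declare the dump as a fixture, so the
# Manager group exists no matter which test classes ran (and flushed the
# database) beforehand:
from django.test.selenium import SeleniumTestCase

from evap.evaluation.tests.tools import CustomSeleniumTestCaseBase


class LiveServerTest(SeleniumTestCase, metaclass=CustomSeleniumTestCaseBase):
    fixtures = ["test_groups.json"]  # assumed fixture name
```

Django loads the listed fixtures into the test database before each test, which would sidestep the ordering-dependent flush described above.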

@hansegucker (Collaborator, Author) commented on Apr 15, 2024:

@Kakadus @niklasmohrin @richardebeling Finally, the last test has been migrated and I'm ready for a final review. 🥳

diff --git a/deployment/provision_vagrant_vm.sh b/deployment/provision_vagrant_vm.sh
index 6c1c539f..5d478d2a 100755
--- a/deployment/provision_vagrant_vm.sh
+++ b/deployment/provision_vagrant_vm.sh
@@ -15,7 +15,7 @@ export DEBIAN_FRONTEND=noninteractive
 apt-get -q update

 # system utilities that docker containers don't have
-apt-get -q install -y sudo wget git bash-completion
+apt-get -q install -y sudo wget git bash-completion software-properties-common
 # docker weirdly needs this -- see https://stackoverflow.com/questions/46247032/how-to-solve-invoke-rc-d-policy-rc-d-denied-execution-of-start-when-building
 printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d

@@ -93,6 +93,27 @@ cp $REPO_FOLDER/deployment/manage_autocompletion.sh /etc/bash_completion.d/
 # install chrome, see: puppeteer/puppeteer#7740
 apt-get -q install -y chromium-browser

+# install firefox and geckodriver
+cat <<EOT >> /etc/apt/preferences.d/mozillateam
+Package: *
+Pin: release o=LP-PPA-mozillateam
+Pin-Priority: 100
+
+Package: firefox*
+Pin: release o=LP-PPA-mozillateam
+Pin-Priority: 1001
+
+Package: firefox*
+Pin: release o=Ubuntu
+Pin-Priority: -1
+EOT
+add-apt-repository -y ppa:mozillateam/ppa
+apt-get -q install -y firefox
+
+wget https://github.com/mozilla/geckodriver/releases/download/v0.33.0/geckodriver-v0.33.0-linux64.tar.gz -O geckodriver.tar.gz
+tar xzf geckodriver.tar.gz -C /usr/local/bin/
+chmod +x /usr/local/bin/geckodriver
+
 # install libraries for puppeteer
 apt-get -q install -y libasound2 libgconf-2-4 libgbm1 libgtk-3-0 libnss3 libx11-xcb1 libxss1 libxshmfence-dev

diff --git a/evap/evaluation/tests/test_live.py b/evap/evaluation/tests/test_live.py
new file mode 100644
index 00000000..f664a73e
--- /dev/null
+++ b/evap/evaluation/tests/test_live.py
@@ -0,0 +1,35 @@
+from django.core import mail
+from django.urls import reverse
+from selenium.webdriver.common.by import By
+from selenium.webdriver.support import expected_conditions
+from selenium.webdriver.support.wait import WebDriverWait
+
+from evap.evaluation.tests.tools import LiveServerTest
+
+
+class ContactModalTests(LiveServerTest):
+    def test_contact_modal(self):
+        self._login()
+        self.selenium.get(self.live_server_url + reverse("evaluation:index"))
+        self.selenium.find_element(By.ID, "feedbackModalShowButton").click()
+        self._screenshot("feedback_modal_")
+
+        WebDriverWait(self.selenium, 10).until(
+            expected_conditions.visibility_of_element_located((By.ID, "feedbackModalMessageText"))
+        )
+        self._screenshot("feedback_modal_2")
+        self.selenium.find_element(By.ID, "feedbackModalMessageText").send_keys("Testmessage")
+        self._screenshot("feedback_modal_typed")
+        self.selenium.find_element(By.ID, "feedbackModalActionButton").click()
+
+        WebDriverWait(self.selenium, 10).until(
+            expected_conditions.text_to_be_present_in_element(
+                (By.CSS_SELECTOR, "#successMessageModal_feedbackModal .modal-body"),
+                "Your message was successfully sent.",
+            )
+        )
+        self._screenshot("feedback_modal_success")
+
+        self.assertEqual(len(mail.outbox), 1)
+
+        self.assertEqual(mail.outbox[0].subject, f"[EvaP] Message from {self.test_user.email}")
diff --git a/evap/evaluation/tests/tools.py b/evap/evaluation/tests/tools.py
index c1a07d5a..169649fb 100644
--- a/evap/evaluation/tests/tools.py
+++ b/evap/evaluation/tests/tools.py
@@ -9,10 +9,14 @@ from django.conf import settings
 from django.contrib.auth.models import Group
 from django.db import DEFAULT_DB_ALIAS, connections
 from django.http.request import QueryDict
+from django.test.selenium import SeleniumTestCase, SeleniumTestCaseBase
 from django.test.utils import CaptureQueriesContext
 from django.utils import timezone
 from django_webtest import WebTest
 from model_bakery import baker
+from selenium.webdriver.common.by import By
+from selenium.webdriver.support import expected_conditions
+from selenium.webdriver.support.wait import WebDriverWait

 from evap.evaluation.models import (
     CHOICES,
@@ -254,3 +258,53 @@ def assert_no_database_modifications(*args, **kwargs):
             lower_sql = query["sql"].lower()
             if not any(lower_sql.startswith(prefix) for prefix in allowed_prefixes):
                 raise AssertionError("Unexpected modifying query found: " + query["sql"])
+
+
+class CustomSeleniumTestCaseBase(SeleniumTestCaseBase):
+    external_host = os.environ.get("TEST_HOST", "") or None
+    browsers = ["firefox"]
+    selenium_hub = os.environ.get("TEST_SELENIUM_HUB", "") or None
+    headless = True
+
+    def create_options(self):  # pylint: disable=bad-mcs-method-argument
+        options = super().create_options()
+
+        if self.browser == "chrome":
+            options.add_argument("--headless")
+            options.add_argument("--no-sandbox")
+            options.add_argument("--disable-dev-shm-usage")
+            options.add_argument("--disable-gpu")
+        elif self.browser == "firefox":
+            options.add_argument("--headless")
+
+        return options
+
+
+class LiveServerTest(SeleniumTestCase, metaclass=CustomSeleniumTestCaseBase):
+    def _screenshot(self, name):
+        self.selenium.save_screenshot(os.path.join(settings.BASE_DIR, f"{name}.png"))
+
+    def _create_test_user(self):
+        self.test_user = baker.make(  # pylint: disable=attribute-defined-outside-init
+            UserProfile, email="[email protected]", groups=[Group.objects.get(name="Manager")]
+        )
+        self.test_user_password = "evap"  # pylint: disable=attribute-defined-outside-init
+        self.test_user.set_password(self.test_user_password)
+        self.test_user.save()
+        return self.test_user
+
+    def _login(self):
+        self._create_test_user()
+        self.selenium.get(self.live_server_url)
+        self.selenium.find_element(By.ID, "id_email").click()
+        self.selenium.find_element(By.ID, "id_email").send_keys(self.test_user.email)
+        self.selenium.find_element(By.ID, "id_email").click()
+        self.selenium.find_element(By.ID, "id_password").send_keys(self.test_user_password)
+        self.selenium.find_element(By.CSS_SELECTOR, ".login-button").click()
+        self.selenium.save_screenshot(os.path.join(settings.BASE_DIR, "login_success.png"))
+
+        WebDriverWait(self.selenium, 10).until(expected_conditions.presence_of_element_located((By.ID, "logout-form")))
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.selenium.quit()
diff --git a/requirements-dev.txt b/requirements-dev.txt
index dc456c81..c274373e 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -14,3 +14,4 @@ ruff==0.3.5
 tblib~=3.0.0
 xlrd~=2.0.1
 typeguard~=4.2.1
+selenium~=4.15.2
diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index ac317fdf..d07f8607 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -197,51 +197,11 @@ jobs:
           path: evap/static/css/evap.css

-  render_pages:
-    runs-on: ubuntu-22.04
-
-    name: Render Html pages
-
-    services:
-      postgres:
-        image: postgres
-        env:
-          POSTGRES_USER: postgres
-          POSTGRES_PASSWORD: postgres
-          POSTGRES_DB: evap
-        ports:
-          - 5432:5432
-        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
-      redis:
-        image: redis
-        options: --health-cmd "redis-cli ping" --health-interval 10s --health-timeout 5s --health-retries 5
-        ports:
-          - 6379:6379
-
-    steps:
-      - name: Check out repository code
-        uses: actions/checkout@v3
-
-      - name: Setup python
-        uses: ./.github/setup_python
-
-      - name: Render pages
-        run: coverage run manage.py ts render_pages
-      - name: Upload coverage
-        uses: codecov/codecov-action@v3
-        with:
-          flags: render-pages
-      - name: Store rendered pages
-        uses: actions/upload-artifact@v3
-        with:
-          name: rendered-pages
-          path: evap/static/ts/rendered
-

   typescript:
     runs-on: ubuntu-22.04

-    needs: [ compile_scss, render_pages ]
+    needs: [ compile_scss ]

     name: Test Typescript

diff --git a/deployment/manage_autocompletion.sh b/deployment/manage_autocompletion.sh
index 9083a793..bf8fc429 100755
--- a/deployment/manage_autocompletion.sh
+++ b/deployment/manage_autocompletion.sh
@@ -3,7 +3,7 @@
 # generated using
 # ./manage.py | grep -v -E "^\[|^$" | tail -n +3 | sort | xargs
 COMMANDS="admin_generator anonymize changepassword check clean_pyc clear_cache clearsessions collectstatic compile_pyc compilemessages create_command create_jobs create_template_tags createcachetable createsuperuser dbshell delete_squashed_migrations describe_form diffsettings drop_test_database dump_testdata dumpdata dumpscript export_emails find_template findstatic flush format generate_password generate_secret_key graph_models inspectdb lint list_model_info list_signals loaddata mail_debug makemessages makemigrations merge_model_instances migrate notes pipchecker precommit print_settings print_user_for_session refresh_results_cache reload_testdata remove_stale_contenttypes reset_db reset_schema run runjob runjobs runprofileserver runscript runserver runserver_plus scss send_reminders sendtestemail set_default_site set_fake_emails set_fake_passwords shell shell_plus show_template_tags show_urls showmigrations sqlcreate sqldiff sqldsn sqlflush sqlmigrate sqlsequencereset squashmigrations startapp startproject sync_s3 syncdata test testserver tools translate ts typecheck unreferenced_files update_evaluation_states update_permissions validate_templates"
-TS_COMMANDS="compile test render_pages"
+TS_COMMANDS="compile test"

 _managepy_complete()
 {
diff --git a/evap/contributor/tests/test_views.py b/evap/contributor/tests/test_views.py
index 4bce3f1b..1e875682 100644
--- a/evap/contributor/tests/test_views.py
+++ b/evap/contributor/tests/test_views.py
@@ -8,7 +8,6 @@ from evap.evaluation.models import Contribution, Course, Evaluation, Questionnai
 from evap.evaluation.tests.tools import (
     WebTestWith200Check,
     create_evaluation_with_responsible_and_editor,
-    render_pages,
     submit_with_modal,
 )

@@ -136,8 +135,6 @@ class TestContributorEvaluationPreviewView(WebTestWith200Check):

 class TestContributorEvaluationEditView(WebTest):
-    render_pages_url = "/contributor/evaluation/PK/edit"
-
     @classmethod
     def setUpTestData(cls):
         result = create_evaluation_with_responsible_and_editor()
@@ -146,23 +143,6 @@ class TestContributorEvaluationEditView(WebTest):
         cls.evaluation = result["evaluation"]
         cls.url = f"/contributor/evaluation/{cls.evaluation.pk}/edit"

-    @render_pages
-    def render_pages(self):
-        self.evaluation.allow_editors_to_edit = False
-        self.evaluation.save()
-
-        content_without_allow_editors_to_edit = self.app.get(self.url, user=self.editor).content
-
-        self.evaluation.allow_editors_to_edit = True
-        self.evaluation.save()
-
-        content_with_allow_editors_to_edit = self.app.get(self.url, user=self.editor).content
-
-        return {
-            "normal": content_without_allow_editors_to_edit,
-            "allow_editors_to_edit": content_with_allow_editors_to_edit,
-        }
-
     def test_not_authenticated(self):
         """
         Asserts that an unauthorized user gets redirected to the login page.
diff --git a/evap/evaluation/management/commands/ts.py b/evap/evaluation/management/commands/ts.py
index bdc7c9ff..76bdcd2a 100644
--- a/evap/evaluation/management/commands/ts.py
+++ b/evap/evaluation/management/commands/ts.py
@@ -1,23 +1,10 @@
 import argparse
 import os
 import subprocess  # nosec
-import unittest

 from django.conf import settings
 from django.core.management import call_command
 from django.core.management.base import BaseCommand, CommandError
-from django.test.runner import DiscoverRunner
-
-
-class RenderPagesRunner(DiscoverRunner):
-    """Test runner which only includes `render_pages.*` methods.
-    The actual logic of the page rendering is implemented in the `@render_pages` decorator."""
-
-    test_loader = unittest.TestLoader()
-
-    def __init__(self, **kwargs):
-        super().__init__(**kwargs)
-        self.test_loader.testMethodPrefix = "render_pages"

 class Command(BaseCommand):
@@ -32,7 +19,6 @@ class Command(BaseCommand):
         self.add_fresh_argument(compile_parser)
         test_parser = subparsers.add_parser("test")
         self.add_fresh_argument(test_parser)
-        subparsers.add_parser("render_pages")

     @staticmethod
     def add_fresh_argument(parser: argparse.ArgumentParser):
@@ -48,8 +34,6 @@ class Command(BaseCommand):
             self.compile(**options)
         elif options["command"] == "test":
             self.test(**options)
-        elif options["command"] == "render_pages":
-            self.render_pages(**options)

     def run_command(self, command):
         try:
@@ -84,14 +68,4 @@ class Command(BaseCommand):
     def test(self, **options):
         call_command("scss")
         self.compile(**options)
-        self.render_pages()
         self.run_command(["npx", "jest"])
-
-    @staticmethod
-    def render_pages(**_options):
-        # Enable debug mode as otherwise a collectstatic beforehand would be necessary,
-        # as missing static files would result into an error.
-        test_runner = RenderPagesRunner(debug_mode=True)
-        failed_tests = test_runner.run_tests([])
-        if failed_tests > 0:
-            raise CommandError("Failures during render_pages")
diff --git a/evap/evaluation/tests/test_commands.py b/evap/evaluation/tests/test_commands.py
index b5ce1e6a..b6f016db 100644
--- a/evap/evaluation/tests/test_commands.py
+++ b/evap/evaluation/tests/test_commands.py
@@ -243,12 +243,10 @@ class TestTsCommend(TestCase):

     @patch("subprocess.run")
     @patch("evap.evaluation.management.commands.ts.call_command")
-    @patch("evap.evaluation.management.commands.ts.Command.render_pages")
-    def test_ts_test(self, mock_render_pages, mock_call_command, mock_subprocess_run):
+    def test_ts_test(self, mock_call_command, mock_subprocess_run):
         management.call_command("ts", "test")

         # Mock render pages to prevent a second call into the test framework
-        mock_render_pages.assert_called_once()
         mock_call_command.assert_called_once_with("scss")
         mock_subprocess_run.assert_has_calls(
             [
diff --git a/evap/evaluation/tests/test_views.py b/evap/evaluation/tests/test_views.py
index 2af1ac6d..b2ac8328 100644
--- a/evap/evaluation/tests/test_views.py
+++ b/evap/evaluation/tests/test_views.py
@@ -8,20 +8,7 @@ from django_webtest import WebTest
 from model_bakery import baker

 from evap.evaluation.models import Evaluation, Question, QuestionType, UserProfile
-from evap.evaluation.tests.tools import (
-    WebTestWith200Check,
-    create_evaluation_with_responsible_and_editor,
-    store_ts_test_asset,
-)
-
-
-class RenderJsTranslationCatalog(WebTest):
-    url = reverse("javascript-catalog")
-
-    def render_pages(self):
-        # Not using render_pages decorator to manually create a single (special) javascript file
-        content = self.app.get(self.url).content
-        store_ts_test_asset("catalog.js", content)
+from evap.evaluation.tests.tools import WebTestWith200Check, create_evaluation_with_responsible_and_editor

 @override_settings(PASSWORD_HASHERS=["django.contrib.auth.hashers.MD5PasswordHasher"])
diff --git a/evap/evaluation/tests/tools.py b/evap/evaluation/tests/tools.py
index ae3352d6..e752885d 100644
--- a/evap/evaluation/tests/tools.py
+++ b/evap/evaluation/tests/tools.py
@@ -1,4 +1,3 @@
-import functools
 import os
 import time
 from collections.abc import Sequence
@@ -91,36 +90,6 @@ def let_user_vote_for_evaluation(user, evaluation, create_answers=False):
     RatingAnswerCounter.objects.bulk_update(rac_by_contribution_question.values(), ["count"])

-def store_ts_test_asset(relative_path: str, content) -> None:
-    absolute_path = os.path.join(settings.STATICFILES_DIRS[0], "ts", "rendered", relative_path)
-
-    os.makedirs(os.path.dirname(absolute_path), exist_ok=True)
-
-    with open(absolute_path, "wb") as file:
-        file.write(content)
-
-
-def render_pages(test_item):
-    """Decorator which annotates test methods which render pages.
-    The containing class is expected to include a `url` attribute which matches a valid path.
-    Unlike normal test methods, it should not assert anything and is expected to return a dictionary.
-    The key denotes the variant of the page to reflect multiple states, cases or views.
-    The value is a byte string of the page content."""
-
-    @functools.wraps(test_item)
-    def decorator(self) -> None:
-        pages = test_item(self)
-
-        url = getattr(self, "render_pages_url", self.url)
-
-        for name, content in pages.items():
-            # Remove the leading slash from the url to prevent that an absolute path is created
-            path = os.path.join(url[1:], f"{name}.html")
-            store_ts_test_asset(path, content)
-
-    return decorator
-
-
 class WebTestWith200Check(WebTest):
     url = "/"
     test_users: list[UserProfile | str] = []
diff --git a/evap/staff/tests/test_views.py b/evap/staff/tests/test_views.py
index f1e01cc8..708eadc9 100644
--- a/evap/staff/tests/test_views.py
+++ b/evap/staff/tests/test_views.py
@@ -45,7 +45,6 @@ from evap.evaluation.tests.tools import (
     let_user_vote_for_evaluation,
     make_manager,
     make_rating_answer_counters,
-    render_pages,
     submit_with_modal,
 )
 from evap.grades.models import GradeDocument
@@ -610,12 +609,6 @@ class TestUserImportView(WebTestStaffMode):

         cls.manager = make_manager()

-    @render_pages
-    def render_pages(self):
-        return {
-            "normal": self.app.get(self.url, user=self.manager).content,
-        }
-
     def test_success_handling(self):
         """
         Tests whether a correct excel file is correctly tested and imported and whether the success messages are displayed
@@ -2054,8 +2047,6 @@ class TestCourseDeleteView(DeleteViewTestMixin, WebTestStaffMode):
     ]
 )
 class TestEvaluationEditView(WebTestStaffMode):
-    render_pages_url = "/staff/semester/PK/evaluation/PK/edit"
-
     @classmethod
     def setUpTestData(cls):
         cls.manager = make_manager()
@@ -2097,12 +2088,6 @@ class TestEvaluationEditView(WebTestStaffMode):
         cls.contribution1.questionnaires.set([cls.contributor_questionnaire])
         cls.contribution2.questionnaires.set([cls.contributor_questionnaire])

-    @render_pages
-    def render_pages(self):
-        return {
-            "normal": self.app.get(self.url, user=self.manager).content,
-        }
-
     def test_edit_evaluation(self):
         page = self.app.get(self.url, user=self.manager)
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
index 9b48bb90..3e6ade14 100644
--- a/.github/dependabot.yml
+++ b/.github/dependabot.yml
@@ -18,7 +18,5 @@ updates:
     ignore:
       - dependency-name: "*"
         update-types: ["version-update:semver-patch"]
-      - dependency-name: "*puppeteer*"
-        update-types: ["version-update:semver-minor"]
     labels:
       - "[T] Dependencies"
diff --git a/deployment/provision_vagrant_vm.sh b/deployment/provision_vagrant_vm.sh
index c5ead652..3ca66fe6 100755
--- a/deployment/provision_vagrant_vm.sh
+++ b/deployment/provision_vagrant_vm.sh
@@ -90,9 +90,6 @@ sed -i -e "s/\${SECRET_KEY}/$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32)
 # setup vm auto-completion
 cp $REPO_FOLDER/deployment/manage_autocompletion.sh /etc/bash_completion.d/

-# install chrome, see: puppeteer/puppeteer#7740
-apt-get -q install -y chromium-browser
-
 # install firefox and geckodriver
 sudo install -d -m 0755 /etc/apt/keyrings
 wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
@@ -109,9 +106,6 @@ wget https://github.com/mozilla/geckodriver/releases/download/v0.34.0/geckodrive
 tar xzf geckodriver.tar.gz -C /usr/local/bin/
 chmod +x /usr/local/bin/geckodriver

-# install libraries for puppeteer
-apt-get -q install -y libasound2 libgconf-2-4 libgbm1 libgtk-3-0 libnss3 libx11-xcb1 libxss1 libxshmfence-dev
-
 # install nvm
 wget https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh --no-verbose --output-document - | sudo -H -u $USER bash

diff --git a/evap/static/ts/tests/utils/matchers.ts b/evap/static/ts/tests/utils/matchers.ts
deleted file mode 100644
index c82e190e..00000000
--- a/evap/static/ts/tests/utils/matchers.ts
+++ /dev/null
@@ -1,70 +0,0 @@
-import { ElementHandle } from "puppeteer";
-import MatcherUtils = jest.MatcherUtils;
-
-declare global {
-    namespace jest {
-        interface Matchers<R> {
-            toBeChecked(): Promise<R>;
-            toHaveClass(className: string): Promise<R>;
-        }
-    }
-}
-
-function createTagDescription(element: ElementHandle): Promise<string> {
-    return element.evaluate(element => {
-        let tagDescription = element.tagName.toLowerCase();
-        if (element.id) {
-            tagDescription += ` id="${element.id}"`;
-        }
-        if (element.className) {
-            tagDescription += ` class="${element.className}"`;
-        }
-        return `<${tagDescription}>`;
-    });
-}
-
-async function createElementMessage(
-    this: MatcherUtils,
-    matcherName: string,
-    expectation: string,
-    element: ElementHandle,
-    value?: any,
-): Promise<() => string> {
-    const tagDescription = await createTagDescription(element);
-    return () => {
-        const optionallyNot = this.isNot ? "not " : "";
-        const receivedLine = value ? `\nReceived: ${this.utils.printReceived(value)}` : "";
-        return (
-            this.utils.matcherHint(matcherName, undefined, undefined, { isNot: this.isNot }) +
-            "\n\n" +
-            `Expected ${this.utils.RECEIVED_COLOR(tagDescription)} to ${optionallyNot}${expectation}` +
-            receivedLine
-        );
-    };
-}
-
-expect.extend({
-    async toBeChecked(received: ElementHandle): Promise<jest.CustomMatcherResult> {
-        const pass = await received.evaluate(element => {
-            return (element as HTMLInputElement).checked;
-        });
-        const message = await createElementMessage.call(this, "toBeChecked", "be checked", received);
-        return { message, pass };
-    },
-
-    async toHaveClass(received: ElementHandle, className: string): Promise<jest.CustomMatcherResult> {
-        const classList = await received.evaluate(element => {
-            return [...element.classList];
-        });
-        const pass = classList.includes(className);
-        const message = await createElementMessage.call(
-            this,
-            "toHaveClass",
-            `have the class ${this.utils.printExpected(className)}`,
-            received,
-            classList,
-        );
-
-        return { message, pass };
-    },
-});
diff --git a/evap/static/ts/tests/utils/page.ts b/evap/static/ts/tests/utils/page.ts
deleted file mode 100644
index 0cb49c20..00000000
--- a/evap/static/ts/tests/utils/page.ts
+++ /dev/null
@@ -1,84 +0,0 @@
-import * as fs from "fs";
-import * as path from "path";
-import { Browser, Page } from "puppeteer";
-import { Global } from "@jest/types/";
-import DoneFn = Global.DoneFn;
-
-const contentTypeByExtension: Map<string, string> = new Map([
-    [".css", "text/css"],
-    [".js", "application/javascript"],
-    [".png", "image/png"],
-    [".svg", "image/svg+xml"],
-]);
-
-async function createPage(browser: Browser): Promise<Page> {
-    const staticPrefix = "/static/";
-
-    const page = await browser.newPage();
-    await page.setRequestInterception(true);
-    page.on("request", request => {
-        const extension = path.extname(request.url());
-        const pathname = new URL(request.url()).pathname;
-        if (extension === ".html") {
-            // requests like /evap/evap/static/ts/rendered/results/student.html
-            request.continue();
-        } else if (pathname.startsWith(staticPrefix)) {
-            // requests like /static/css/tom-select.bootstrap5.min.css
-            const asset = pathname.substr(staticPrefix.length);
-            const body = fs.readFileSync(path.join(__dirname, "..", "..", "..", asset));
-            request.respond({
-                contentType: contentTypeByExtension.get(extension),
-                body,
-            });
-        } else if (pathname.endsWith("catalog.js")) {
-            // request for /catalog.js
-            // some pages will error out if translation functions are not available
-            // rendered in RenderJsTranslationCatalog
-            const absolute_fs_path = path.join(__dirname, "..", "..", "..", "ts", "rendered", "catalog.js");
-            const body = fs.readFileSync(absolute_fs_path);
-            request.respond({
-                contentType: contentTypeByExtension.get(extension),
-                body,
-            });
-        } else {
-            request.abort();
-        }
-    });
-    return page;
-}
-
-export function pageHandler(fileName: string, fn: (page: Page) => void): (done?: DoneFn) => void {
-    return async done => {
-        let finished = false;
-        // This wrapper ensures that done() is only called once
-        async function finish(reason?: Error) {
-            if (!finished) {
-                finished = true;
-                await page.evaluate(() => {
-                    localStorage.clear();
-                });
-                await page.close();
-                done!(reason);
-            }
-        }
-
-        const context = await browser.defaultBrowserContext();
-        await context.overridePermissions("file:", ["clipboard-read"]);
-
-        const page = await createPage(browser);
-        page.on("pageerror", async error => {
-            await finish(new Error(error.message));
-        });
-
-        const filePath = path.join(__dirname, "..", "..", "rendered", fileName);
-        await page.goto(`file:${filePath}`, { waitUntil: "networkidle0" });
-
-        try {
-            await fn(page);
-            await finish();
-        } catch (error) {
-            if (error instanceof Error) await finish(error);
-            else throw error;
-        }
-    };
-}
diff --git a/package.json b/package.json
index 63222602..5b0e5a80 100644
--- a/package.json
+++ b/package.json
@@ -2,16 +2,13 @@
     "devDependencies": {
         "@types/bootstrap": "^5.2.6",
         "@types/jest": "^29.5.12",
-        "@types/jest-environment-puppeteer": "^5.0.3",
         "@types/jquery": "^3.5.16",
         "@types/sortablejs": "^1.15.1",
         "jest": "^29.7.0",
         "jest-environment-jsdom": "^29.7.0",
-        "jest-environment-puppeteer": "^10.0.0",
         "jest-jasmine2": "^29.7.0",
         "jest-ts-webcompat-resolver": "^1.0.0",
         "prettier": "^3.2.2",
-        "puppeteer": "^21.0.1",
         "sass": "1.74.1",
         "ts-jest": "^29.1.0",
         "typescript": "^5.4.2"
@@ -30,9 +27,6 @@
         "transform": {
             "^.+\\.ts$": "ts-jest"
         },
-        "globalSetup": "jest-environment-puppeteer/setup",
-        "globalTeardown": "jest-environment-puppeteer/teardown",
-        "testEnvironment": "jest-environment-puppeteer",
         "resolver": "jest-ts-webcompat-resolver"
     }
 }
@richardebeling (Member) left a comment:

Not yet fully reviewed, will have to look through most of the ported tests still, but generally this is looking very nice. Thanks!

Comment on lines 204 to 206
assert not degree_checkbox.is_selected()
assert not course_type_checkbox.is_selected()
assert not semester_checkbox.is_selected()

For consistency, I'd argue we should use self.assertFalse here (and use the corresponding self.assertX methods below wherever you currently use assert).
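The suggestion can be illustrated with plain unittest (the checkbox is stubbed here; in the real test, is_selected() comes from a Selenium WebElement):

```python
import unittest


class StubCheckbox:
    """Stand-in for a Selenium WebElement exposing is_selected()."""

    def __init__(self, selected):
        self._selected = selected

    def is_selected(self):
        return self._selected


class CheckboxAssertions(unittest.TestCase):
    def test_checkboxes_unselected(self):
        degree_checkbox = StubCheckbox(selected=False)
        course_type_checkbox = StubCheckbox(selected=False)
        semester_checkbox = StubCheckbox(selected=False)
        # self.assertFalse reports a descriptive failure message and integrates
        # with unittest's result collection, unlike a bare `assert` statement.
        self.assertFalse(degree_checkbox.is_selected())
        self.assertFalse(course_type_checkbox.is_selected())
        self.assertFalse(semester_checkbox.is_selected())
```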

);
tomselect.setValue(managerOption);
return managerOption;
"""

Closing """ does not match opening """ indentation -- is this the formatting black enforces here, or does it just accept the current indentation level?

Comment on lines +111 to +112
except NoSuchElementException:
continue

This would silently allow the test to "skip" every row and assert nothing. For which cases is this necessary?
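If the except clause is intentional, one way to keep it while guarding against the silent all-skip case is to count the rows that were actually verified (a sketch with stubbed rows; NoSuchElementException and the lookup helper are stubbed so the snippet is self-contained):

```python
class NoSuchElementException(Exception):
    """Stub for selenium.common.exceptions.NoSuchElementException."""


def find_choice_cell(row):
    # Stand-in for row.find_element(...); raises like Selenium does
    # when the element is absent.
    try:
        return row["choice"]
    except KeyError as exc:
        raise NoSuchElementException from exc


def count_verified_rows(rows):
    """Skip rows without a choice cell, but record how many were checked."""
    verified = 0
    for row in rows:
        try:
            cell = find_choice_cell(row)
        except NoSuchElementException:
            continue  # e.g. a heading row without choices
        assert cell != "error", "row has a choice error"
        verified += 1
    # Guard against the test silently skipping every row:
    assert verified > 0, "no rows were actually verified"
    return verified
```

With this pattern, a page where every row lacks the expected element fails loudly instead of passing vacuously.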


assert len(row.find_elements(By.CSS_SELECTOR, ".choice-error")) == 0

def test_skip_contributor(self):

In the TS implementation, this check asserted that the card collapsed. I think the same check should be possible with the new setup as well -- or is there some issue with that?


self.selenium.get(self.live_server_url + reverse("staff:evaluation_edit", args=[evaluation.pk]))

self._screenshot("changes_form_data")

leftover(?) screenshot command -- we don't currently do anything with this, right?

@niklasmohrin (Member) commented:
Getting chromium or firefox should be a lot easier now that we have nix, we can work something out together some time if you want :)


Successfully merging this pull request may close these issues.

Use LiveServerTestCase for frontend tests
4 participants