
Try to re-factor the integration-boot.js file (and related code) #12730

Open
Snuffleupagus opened this issue Dec 11, 2020 · 5 comments

@Snuffleupagus
Collaborator

For consistency/maintainability reasons, it would probably be helpful if the new integration-boot.js (and related code) looked more like the existing code used with the unit/font tests. To that end, I'd suggest the following (not an exhaustive list):
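As a rough illustration of the kind of boot structure under discussion (a sketch only, assuming a recent `jasmine` npm package; the spec directory and file patterns are placeholders, not taken from the actual suggestions):

```js
// Minimal sketch of a Jasmine boot file for Node.js. The spec paths
// below are placeholders; the real integration-boot.js differs.
import Jasmine from "jasmine";

const jasmine = new Jasmine();
jasmine.loadConfig({
  spec_dir: "test/integration",
  spec_files: ["*_spec.mjs"],
  random: false,
});
// Let the caller decide the exit code instead of exiting immediately.
jasmine.exitOnCompletion = false;

const result = await jasmine.execute();
process.exitCode = result.overallStatus === "passed" ? 0 : 1;
```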

@aashrafh

Can I work on it?

@timvandermeij
Contributor

Sure, feel free to work on this.

@simran212530

Hey,

Can I work on this?

@Snuffleupagus
Collaborator Author

This issue is unlikely to be a good beginner bug if you're not already at least somewhat familiar with the PDF.js code-base and its test suites.

timvandermeij added a commit to timvandermeij/pdf.js that referenced this issue Jun 23, 2024
Currently errors in `afterAll` are logged, but don't fail the tests.
This could cause new errors during test teardown to go by unnoticed.

Moreover, the integration tests use a different reporting mechanism that
also handles errors differently (which is extra reason to do mozilla#12730).

This patch fixes the issues by consistently handling errors in
`suiteStarted` and `suiteDone` in both reporting mechanisms.

Fixes mozilla#18319.
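The change this commit describes can be sketched as a custom Jasmine reporter that treats suite-level failures as test failures (an illustrative sketch, not the actual PDF.js reporter code):

```js
// Illustrative sketch only, not the actual PDF.js reporter. Errors
// thrown in beforeAll/afterAll hooks surface as failedExpectations on
// the suite result, so both hooks below check for them.
const suiteErrorReporter = {
  suiteStarted(result) {
    reportSuiteErrors(result);
  },
  suiteDone(result) {
    reportSuiteErrors(result);
  },
};

function reportSuiteErrors(result) {
  for (const failure of result.failedExpectations ?? []) {
    // Fail the run instead of merely logging, so that errors during
    // test teardown cannot go by unnoticed.
    process.exitCode = 1;
    console.error(`Suite error in "${result.fullName}": ${failure.message}`);
  }
}

// Registered with e.g. jasmine.addReporter(suiteErrorReporter);
```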
@timvandermeij
Contributor

timvandermeij commented Dec 22, 2024

I have looked into this, but unfortunately it is more difficult than I had hoped. The main issue is that the unit/font tests run in the browser itself, whereas the integration tests run in Node.js and only control the browsers under test. This means that Jasmine also runs only once, in Node.js, instead of once in each browser (i.e. twice in total). That makes it very difficult to use the same general structure or reporting mechanism, because those rely on being run inside the browser under test.
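To make the difference concrete, the integration side roughly follows this shape (a simplified sketch; the viewer URL and selector are assumptions, and the real setup launches several browsers and shares them across specs):

```js
// Jasmine runs once, in this Node.js process; Puppeteer only
// remote-controls the browser under test over the DevTools protocol.
import puppeteer from "puppeteer";

describe("viewer smoke test", function () {
  let browser, page;

  beforeAll(async function () {
    browser = await puppeteer.launch();
    page = await browser.newPage();
    // Assumed URL, for illustration purposes only.
    await page.goto("http://localhost:8888/web/viewer.html");
  });

  afterAll(async function () {
    await browser.close();
  });

  it("loads the viewer", async function () {
    // The assertion runs here in Node.js, not inside the browser.
    await page.waitForSelector("#viewerContainer");
  });
});
```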

One way in which I can imagine this working is if we run Puppeteer inside the individual browsers. This should be possible according to https://pptr.dev/guides/running-puppeteer-in-the-browser, but in quickly trying this I haven't gotten it to work yet. Even if this works, though, we'll have to find a solution for tests that currently rely on Node.js packages (first-party ones like `os` or third-party ones like `pngjs`), because those aren't available in the browser context by default. Another complication is tests that spawn new browsers themselves; we have one or two of those, and I don't really see how that should work if Node.js is not the orchestrator here.
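Per the linked guide, that idea would look roughly like the following, where the import path and the WebSocket endpoint are assumptions rather than verified details:

```js
// Rough sketch of running Puppeteer inside a browser page: instead of
// launching a browser, the page connects to an already-running one
// over a WebSocket endpoint. Import path and endpoint are assumptions.
import puppeteer from "puppeteer-core/lib/esm/puppeteer/puppeteer-core-browser.js";

const browser = await puppeteer.connect({
  browserWSEndpoint: "ws://localhost:9222/devtools/browser/<id>",
});
const page = await browser.newPage();
await page.goto("https://example.com");
```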

I therefore think the path of making the browsers orchestrate the tests here isn't going to fly. What might work instead is to let Node.js remain the orchestrator, but to find some way to parameterize the tests for Jasmine so that it can report on the individual browsers, instead of each test running the logic per browser inside the test body (roughly the pattern sketched below). This would basically mean moving the `pages.map` calls outside of the test somehow, but given that the pages are built up dynamically in `beforeAll` blocks, this also doesn't really seem trivial.
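The current pattern being referred to looks roughly like this (helper names such as `loadAndWait` and `closePages` mirror the PDF.js integration helpers, but the snippet is only an illustration):

```js
// One Jasmine spec internally loops over all browsers under test, so
// Jasmine sees a single test regardless of the number of browsers.
describe("some integration test", function () {
  let pages;

  beforeAll(async function () {
    // pages is built up dynamically: one [browserName, page] pair per
    // browser under test.
    pages = await loadAndWait("document.pdf", ".page");
  });

  afterAll(async function () {
    await closePages(pages);
  });

  it("does something in every browser", async function () {
    await Promise.all(
      pages.map(async ([browserName, page]) => {
        // Per-browser logic lives inside the test body, which is why a
        // failure cannot be attributed to a specific browser by Jasmine.
        await page.click("#someButton");
      })
    );
  });
});
```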

I think that mainly having the browser name in the test output would be most helpful, e.g. for debugging intermittents, but I don't yet see how we can easily achieve that. If there are ideas about this, I'd love to hear them. I did make PR #19254 for the first point mentioned here, to at least somewhat improve consistency.
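One hypothetical shape the parameterization could take is generating a describe block per browser, so that the browser name ends up in Jasmine's output (`startBrowser` and the selector are invented for illustration; this is not how PDF.js works today):

```js
// Hypothetical sketch: one describe block per browser puts the browser
// name into every reported spec name.
for (const browserName of ["chrome", "firefox"]) {
  describe(`[${browserName}] some integration test`, function () {
    let page;

    beforeAll(async function () {
      page = await startBrowser(browserName); // hypothetical helper
    });

    it("does something", async function () {
      // A failure here is reported as
      // "[chrome] some integration test does something".
      await page.click("#someButton");
    });
  });
}
```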
