
Execute output code in integration tests #68

Merged
merged 5 commits into from
Oct 11, 2023

Conversation

clavedeluna
Contributor

@clavedeluna clavedeluna commented Oct 10, 2023

Overview

This PR implements a simple API, run during testing, that executes a codemod's output code.

Description

We want more confidence that the code changes we make actually work. Executing the code during integration tests ensures not only that imports resolve but also that functions run. Some ramifications of this:

  • Our testing environment has to include any external library used by any output code.
  • Integration tests will take longer. They will also execute code: making requests, serializing, etc. Whatever the output code does will run. We're ok with the time cost and risk for now, since we consider the risk minimal relative to the benefit of validating our code.
  • I had to fix some integration tests to use real imports and to handle errors that are expected during execution (like making a request to a bogus URL, which we don't care about).
  • I handled expected exceptions on a codemod-by-codemod basis. This way we're ok catching FileNotFoundError for a codemod with an open(...) call, which we expect when the file doesn't exist, but we don't catch it in other codemods where it could be a legitimate error.
  • In the long term we may want to also call execute_code in unit tests, so I left its API flexible to accept either a path or a code string.
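The shape of such a helper can be sketched roughly as follows. This is a hypothetical illustration based on the description above and the importlib snippet from the review, not the PR's actual implementation; the function name `execute_code` comes from the description, everything else is assumed.

```python
import importlib.util
import tempfile


def execute_code(*, path=None, code=None, allowed_exceptions=()):
    """Import and execute a file of output code, tolerating expected errors.

    Exactly one of `path` (a file of Python source) or `code` (a source
    string) must be given. Exceptions listed in `allowed_exceptions` are
    swallowed; anything else propagates as a test failure.
    """
    assert (path is None) != (code is None), "pass exactly one of path/code"
    if code is not None:
        # Write the code string to a temp file so both inputs share one path.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
    spec = importlib.util.spec_from_file_location("output_code", path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
    except allowed_exceptions:
        # Expected for this particular codemod (e.g. FileNotFoundError
        # from an open(...) call); anything unexpected still raises.
        pass
    return module
```

A test would then call, say, `execute_code(path=output_file, allowed_exceptions=(FileNotFoundError,))` for a codemod whose output opens a nonexistent file.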

@clavedeluna clavedeluna marked this pull request as ready for review October 10, 2023 16:30
Member

@drdavella drdavella left a comment


This is awesome. Eventually we can have more sophisticated behavioral assertions if we want, but this is exactly what we need right now.

module = importlib.util.module_from_spec(spec)
try:
    spec.loader.exec_module(module)
except allowed_exceptions:
Member


This is super cool.

@drdavella
Member

@clavedeluna it looks like there's maybe some kind of version/compatibility issue with jwt causing the tests to fail but once that's fixed we should be good.

@clavedeluna
Contributor Author

I had blindly added jwt to requirements without realizing that what we want is PyJWT:

[Screenshot, 2023-10-11 at 7:28:36 AM]
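For context: the PyPI package literally named `jwt` is a different project from PyJWT, even though PyJWT is the library imported as `jwt`. The corrected requirements entry would look something like this (version pin omitted, since the PR's actual pin isn't shown here):

```
# requirements.txt
# PyJWT provides the `import jwt` module; the PyPI package named
# `jwt` is an unrelated project.
PyJWT
```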

@codecov

codecov bot commented Oct 11, 2023

Codecov Report

Merging #68 (f380fd6) into main (db09667) will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##             main      #68   +/-   ##
=======================================
  Coverage   95.86%   95.86%           
=======================================
  Files          46       46           
  Lines        1694     1694           
=======================================
  Hits         1624     1624           
  Misses         70       70           

@clavedeluna clavedeluna merged commit 90ce80b into main Oct 11, 2023
8 checks passed
@clavedeluna clavedeluna deleted the validate-code branch October 11, 2023 11:20