Allow Spark SQL as a dialect #968
Conversation
I ran the notebook and faced some missing dependency issues; even after installing, I had to install the following packages:
@neelasha23 I've added
Please add a Changelog entry and ensure that the CI is passing @gilandose
Integration tests added; should be good to go.
Yes, ready for review @neelasha23. Fixed linting and hopefully resolved the CI environment variable lookup issue during integration tests.
The CI is still failing: https://github.com/ploomber/jupysql/actions/runs/7274454343/job/19847788457?pr=968
Great work, mostly minor stuff.
Give the integration tests another stab; if you have difficulty getting them to work, let me know so someone on the team can help you!
Struggling to get the PostgreSQL tests to run locally, which seem to be the ones failing; will have another attempt to see.
PostgreSQL tests passing locally; it was the print statements I'd left in error_handler! Let me know thoughts on
Please check our contribution guidelines: https://ploomber-contributing.readthedocs.io/en/latest/contributing/responding-pr-review.html
Pasting a link to the commit with the changes simplifies reviewing.
@gilandose please check the failed CI tests
Didn't realise there was a separate list in noxfile.py. Should be everything addressed now.
@edublancas any idea why the Docker containers for integration tests don't start automatically for me when running the integration tests locally?
Just fix the connection.py file so the notebook passes, and fix the CI.
No idea, are you seeing any issues? We've had other contributors encounter issues when running integration tests locally; we definitely need to improve the setup. I changed the repository settings so the CI runs whenever you push new code (previously, I had to approve every run). This should allow you to quickly test changes in the CI. The integration tests are passing; only the unit tests are failing. We're getting closer!
🎉 Thanks a lot for working on this, this is great!
I'll make a release now.
Describe your changes
Making it possible to use the basic functionality of the library with Spark. This should allow `%%sql` usage and return a Spark DataFrame for further processing. This integrates with most of the library's existing functionality and also introduces `lazy_execution`: this allows frameworks that support it to bypass the `ResultSet` in this library and stay in native formats. This is useful when going back and forth between SQL and Python, and also for query validation without full execution when composing CTEs.
Issue number
Related #536
Closes #965
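The `lazy_execution` idea described above can be illustrated with a minimal standalone sketch. All names here (`FakeSparkDataFrame`, `run_query`) are hypothetical, not jupysql's actual implementation: the point is only that with lazy execution enabled, the magic hands back the engine's native unevaluated object instead of materializing rows into the library's `ResultSet` wrapper.

```python
# Minimal sketch of the lazy-execution idea; hypothetical names, not
# jupysql's actual API. Spark DataFrames are lazy: building one runs
# no query, so returning it directly defers execution to the caller.


class FakeSparkDataFrame:
    """Stand-in for a Spark DataFrame: constructing it runs no query."""

    def __init__(self, sql):
        self.sql = sql  # the query is only recorded, not executed

    def collect(self):
        # Execution happens here, on demand
        return [("row1",), ("row2",)]


def run_query(sql, lazy_execution=False):
    df = FakeSparkDataFrame(sql)
    if lazy_execution:
        return df  # stay in the engine's native, unevaluated format
    # Eager path: materialize immediately (analogous to ResultSet)
    return list(df.collect())


lazy = run_query("SELECT * FROM t", lazy_execution=True)
eager = run_query("SELECT * FROM t")
print(type(lazy).__name__)  # the native object, nothing executed yet
print(eager)                # rows materialized eagerly
```

This mirrors why lazy execution helps when composing CTEs: the query can be built and validated without ever calling `collect()`.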
Checklist before requesting a review
pkgmt format
📚 Documentation preview 📚: https://jupysql--968.org.readthedocs.build/en/968/