
v1.0.0 release #490

Merged · 26 commits into main · Jan 5, 2024

Conversation

JessicaS11
Member

No description provided.

JessicaS11 and others added 26 commits November 15, 2023 09:39
* Adding argo search and download script

* Create get_argo.py

Download the 'classic' argo data with physical variables only

* begin implementing argo dataset

* 1st draft implementing argo dataset

* implement search_data for physical argo

* doctests and general cleanup for physical argo query

* beginning of BGC Argo download

* parse BGC profiles into DF

* plan to query BGC profiles

* validate BGC param input function

* order BGC params in the order in which they should be queried

* fix bug in parse_into_df() - init blank df to take in union of params from all profiles

* identify profiles from initial API request containing all required params

* creates df with only profiles that contain all user specified params
Need to download additional params

* modified to populate prof df by querying individual profiles

* finished up BGC argo download!

* assert bounding box type in Argo init, begin framework for unit tests

* need to confirm spatial extent is bbox

* begin test case for available profiles

* add tests for argo.py

* add typing, add example json, and use it to test parsing

* update argo to submit successful api request (update keys and values submitted)

* first pass at porting argo over to metadata+per profile download (WIP)

* basic working argo script

* simplify parameter validation (ordered list no longer needed)

* add option to delete existing data before new download

* continue cleaning up argo.py

* fix download_by_profile to properly store all downloaded data

* remove old get_argo.py script

* remove _filter_profiles function in favor of submitting data kwarg in request

* start filling in docstrings

* clean up nearly duplicate functions

* add more docstrings

* get a few minimal argo tests working

* add bgc argo params. begin adding merge for second download runs

* some changes

* WIP test commit to see if can push to GH

* WIP handling argo merge issue

* update profile to df to return df and move merging to get_dataframe

* merge profiles with existing df

* clean up docstrings and code

* add test_argo.py

* add prelim test case for adding to Argo df

* remove sandbox files

* remove bgc argo test file

* update variables notebook from development

* simplify import statements

* quickfix for granules error

* draft subpage on available QUEST datasets

* small reference fix in text

* add reference to top of .rst file

* test argo df merge

* add functionality to Quest class to pass search criteria to all datasets

* add functionality to Quest class to pass search criteria to all datasets

* update dataset docstrings; reorder argo.py to match

* implement quest search+download for IS2

* move spatial and temporal properties from query to genquery

* add query docstring test for cycles,tracks to test file

* add quest test module

* standardize print outputs for quest search and download; is2 download needs auth updates

* remove extra files from this branch

* comment out argo portions of quest for PR

* remove argo-branch-only init file

* remove argo script from branch

* remove argo test file from branch

* comment out another line of argo stuff

* Update quest.py

Added Docstrings to functions within quest.py and edited the primary docstring for the QUEST class here.

Note I did not add Docstrings to the implicit __self__ function.

* Update test_quest.py

Added comments (not Docstrings) to test functions

* Update dataset.py

Minor edits to the doc strings

* Update quest.py

Edited docstrings

* catch error with downloading datasets in Quest; template test case for multi dataset query

---------

Co-authored-by: Kelsey Bisson <[email protected]>
Co-authored-by: Romina <[email protected]>
Co-authored-by: zachghiaccio <[email protected]>
Co-authored-by: Zach Fair <[email protected]>
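Several commits above describe building a profile DataFrame that holds the union of parameters across all profiles, then merging newly downloaded profiles into the existing frame. A minimal pandas sketch of that merge idea (function and column names are illustrative, not the icepyx implementation):

```python
import pandas as pd

def merge_profiles(existing: pd.DataFrame, new: pd.DataFrame) -> pd.DataFrame:
    """Combine an existing profile DataFrame with newly downloaded
    profiles, keeping the union of all parameter columns and dropping
    exact duplicate rows (e.g. a profile downloaded twice)."""
    # An outer concat keeps every column seen in either frame;
    # parameters missing from one source are filled with NaN.
    merged = pd.concat([existing, new], axis=0, join="outer", ignore_index=True)
    return merged.drop_duplicates().reset_index(drop=True)

# hypothetical physical + BGC downloads with different parameter sets
df1 = pd.DataFrame({"profile_id": [1, 2], "temperature": [3.5, 4.1]})
df2 = pd.DataFrame(
    {"profile_id": [2, 3], "temperature": [4.1, 2.9], "salinity": [34.7, 35.1]}
)
merged = merge_profiles(df1, df2)
```

The outer join is what lets a second download run contribute new parameters (columns) as well as new profiles (rows), which matches the "init blank df to take in union of params" fix described above.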
* add OA API warning
* comment out tests that use OA API

---------

Co-authored-by: GitHub Action <[email protected]>
* add filelist and product properties to Read object
* deprecate filename_pattern and product class Read inputs
* transition to data_source input as a string (including glob string) or list
* update tutorial with changes and user guidance for using glob

---------
Co-authored-by: Jessica Scheick <[email protected]>
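The Read changes above deprecate `filename_pattern` in favor of a `data_source` input that may be a plain path, a glob string, or a list of files. A rough sketch of how such an input might be normalized (a hypothetical helper, not the actual icepyx code):

```python
import glob

def resolve_data_source(data_source):
    """Normalize a data_source argument to a sorted list of file paths.

    Accepts a list/tuple of paths (returned sorted) or a string, which
    is expanded as a glob pattern; a literal path simply matches itself.
    """
    if isinstance(data_source, str):
        # recursive=True lets patterns like "data/**/*.h5" descend
        return sorted(glob.glob(data_source, recursive=True))
    if isinstance(data_source, (list, tuple)):
        return sorted(str(p) for p in data_source)
    raise TypeError("data_source must be a string or a list of paths")
```

Accepting a glob string directly removes the need for a separate pattern argument, which is the user-guidance change the tutorial update above refers to.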
* add kwarg acceptance for data queries and download_all in quest
* Add QUEST dataset page to RTD

---------

Co-authored-by: zachghiaccio <[email protected]>
Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
Co-authored-by: Jessica Scheick <[email protected]>
Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
Co-authored-by: Jessica Scheick <[email protected]>
Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
Co-authored-by: Jessica Scheick <[email protected]>
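One commit above adds kwarg acceptance so a single QUEST-level call can pass search criteria through to every registered dataset. The fan-out pattern might look like this (class and method names are assumptions for illustration):

```python
class MiniQuest:
    """Toy container that forwards a search call, plus any per-dataset
    keyword arguments, to each registered dataset handler."""

    def __init__(self):
        self.datasets = {}  # name -> handler exposing .search(**kwargs)

    def search_all(self, **kwargs):
        """Run search on every dataset.

        kwargs may hold per-dataset dicts keyed by dataset name,
        e.g. search_all(argo={"params": ["temperature"]}).
        """
        results = {}
        for name, handler in self.datasets.items():
            dataset_kwargs = kwargs.get(name, {})
            results[name] = handler.search(**dataset_kwargs)
        return results

class EchoDataset:
    """Stand-in dataset that just returns the kwargs it received."""
    def search(self, **kwargs):
        return kwargs

q = MiniQuest()
q.datasets["argo"] = EchoDataset()
q.datasets["icesat2"] = EchoDataset()
out = q.search_all(argo={"params": ["temperature"]})
```

Keying the kwargs by dataset name lets one call carry dataset-specific criteria without the container needing to know each dataset's signature.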
Refactor Variables class to be user-facing functionality
* expand extract_product and extract_version to check for s3 url

* add cloud notes to variables notebook

---------

Co-authored-by: Jessica Scheick <[email protected]>
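The commits above expand `extract_product` and `extract_version` to also recognize s3 URLs. A simplified illustration of the idea (the regex and function body are assumptions, not the icepyx implementation):

```python
import re

def extract_product(path: str) -> str:
    """Pull an ICESat-2 product ID (e.g. 'ATL06') out of either a
    local filename or a cloud (s3://) object key."""
    # Both forms embed the product ID somewhere in the path, e.g.
    #   s3://bucket/ATLAS/ATL06/006/.../ATL06_2019....h5
    #   /data/ATL06_20190223_..._006_01.h5
    match = re.search(r"ATL\d{2}", path)
    if match is None:
        raise ValueError(f"no product ID found in {path!r}")
    return match.group(0)
```

Searching the whole string rather than only the basename is what makes the same helper work for cloud keys and local paths alike.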
- add argo.py dataset functionality and implementation through QUEST
- demonstrate QUEST usage via example notebook
- add save to QUEST DataSet class template

Co-authored-by: Kelsey Bisson <[email protected]>
Co-authored-by: Romina <[email protected]>
Co-authored-by: zachghiaccio <[email protected]>
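The summary above mentions adding `save` to the QUEST DataSet class template. A bare-bones sketch of such a template as an abstract base class (names and signatures are assumptions; actual icepyx code may differ):

```python
from abc import ABC, abstractmethod

class DataSet(ABC):
    """Template for a QUEST-style dataset: anything plugged into the
    query framework implements search, download, and save."""

    def __init__(self, spatial_extent, date_range):
        self.spatial_extent = spatial_extent
        self.date_range = date_range

    @abstractmethod
    def search_data(self):
        """Query the data provider for granules/profiles in bounds."""

    @abstractmethod
    def download(self):
        """Retrieve the data found by search_data."""

    @abstractmethod
    def save(self, filepath):
        """Write downloaded data to filepath."""

# minimal concrete example
class InMemoryDataSet(DataSet):
    def search_data(self):
        return ["profile-1", "profile-2"]
    def download(self):
        return {"profile-1": [1.0], "profile-2": [2.0]}
    def save(self, filepath):
        return f"saved to {filepath}"

ds = InMemoryDataSet([-180, -90, 180, 90], ["2020-01-01", "2020-12-31"])
```

Making `save` abstract forces every dataset added through QUEST to define how its results persist, rather than leaving that to ad-hoc scripts.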
- update pypi action to use OIDC trusted publisher mgmt
- generalize the flake8 action to a general linting action and add black
- put flake8 config parameters into a separate file (.flake8)
- update versions of actions/pre-commit hooks
- specify uml updates only need to run on PRs to development
- do not run uml updates on PRs into main (#449)
- update docs config files to be compliant
- temporarily ignore many flake8 error codes until legacy files are updated
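The CI commits above move the flake8 settings into a standalone `.flake8` file. A plausible minimal version of such a file (these contents are an assumption for illustration, not the actual icepyx configuration):

```ini
[flake8]
max-line-length = 88
# temporarily ignore legacy-code error classes, per the commit notes
extend-ignore = E501, W503
exclude = .git, __pycache__, build, docs
```

Keeping lint configuration in its own file lets both the pre-commit hooks and the GitHub Actions linting job read one shared source of truth.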

@JessicaS11 JessicaS11 merged commit bed355a into main Jan 5, 2024
6 checks passed