feat(RFC): Adds altair.datasets
#3631
Conversation
- Allow quickly switching between version tags #3150 (comment)
To support [flights-200k.arrow](https://github.com/vega/vega-datasets/blob/f637f85f6a16f4b551b9e2eb669599cc21d77e69/data/flights-200k.arrow)
Not required for these requests, but may be helpful to avoid rate limits.
As an example, for comparing against the most recent version, I've added the 5 most recent.
- Basic mechanism for discovering new versions
- Tries to minimise the number and total size of requests
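For context, a minimal sketch of the kind of request this involves, assuming the GitHub tags endpoint; the `GITHUB_TOKEN` variable name is illustrative, not necessarily what this PR uses:

```python
import json
import os
import urllib.request

def recent_tags(n: int = 5) -> list[str]:
    """Fetch the `n` most recent vega-datasets tags from the GitHub API."""
    url = f"https://api.github.com/repos/vega/vega-datasets/tags?per_page={n}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    # Auth is not required for these requests, but raises the rate limit.
    if token := os.environ.get("GITHUB_TOKEN"):
        req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as response:
        return [tag["name"] for tag in json.load(response)]
```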
Experimenting with querying the URL cache with expressions.
- `metadata_full.parquet` stores **all known** file metadata
- `GitHub.refresh()` to maintain integrity in a safe manner
- Roughly 3000 rows
- Single release: **9kb** vs 46 releases: **21kb**
- Still undecided exactly how this functionality should work
- Need to resolve the `npm` tags != `gh` tags issue as well
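As a rough illustration of the expression-based querying (the column names `file_name` and `tag` are assumptions, not the actual schema):

```python
import polars as pl

# Lazily scan the bundled metadata (~3000 rows) and filter with expressions,
# so only the matching slice is ever materialised.
metadata = pl.scan_parquet("metadata_full.parquet")
arrow_files = (
    metadata.filter(pl.col("file_name").str.ends_with(".arrow"))
    .sort("tag", descending=True)
    .head(5)
    .collect()
)
```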
This doesn't happen in CI, and it is still unclear why the import within `pandas` breaks under these conditions. I have tried multiple combinations of `pytest.MonkeyPatch` and hard imports, but had no luck fixing the bug.
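For reference, a sketch of the kind of patching attempted; the test name and structure are illustrative, not the failing test itself:

```python
import importlib
import sys

import pytest

def test_fresh_pandas_import(monkeypatch: pytest.MonkeyPatch) -> None:
    # Drop any cached module so the import below is a genuinely fresh one.
    monkeypatch.delitem(sys.modules, "pandas", raising=False)
    import pandas as pd  # the import that breaks locally, but not in CI

    importlib.reload(pd)
```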
I'm reviewing as an average user of Altair, and for this use case that is probably an associate professor who will need to update all her/his lecture materials the evening before the semester starts. It would be great if we could say:

```python
# old way (this is deprecated)
from vega_datasets import data

# new way (this will be awesome)
from altair.datasets import data
```

And everything else keeps functioning. So this still works:

```python
source_url = data.cars.url
source_pandas = data.cars()
```

But the awesome thing that we provide with this PR is:

```python
source_polars = data.cars(backend="polars")  # or `engine=`
```

Or polars with pyarrow dtypes:

```python
source_pl_pa = data.cars(backend="polars[pyarrow]")  # or `engine=`
```

If it is like this, then I'm fine with it.
The next commits benefit from having functionality decoupled from `_Reader.query`. Mainly, keeping things lazy and not raising a user-facing error
Simplifies logic that relies on enum/categoricals that may not be recognised as ordered
Docs to follow
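A minimal reproduction of the ordering loss mentioned above, assuming `pyarrow` is installed for the conversion:

```python
import polars as pl

s = pl.Series("cat", ["low", "high"], dtype=pl.Enum(["low", "high"]))
pd_s = s.to_pandas()  # requires pyarrow
# The polars Enum is ordered by construction, but the resulting
# pd.Categorical does not carry that ordering.
print(pd_s.dtype.ordered)  # False
```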
@jonmmease I just tried updating this branch, seems to be some issue. **Update**: resolved in #3702
Feature has been adopted upstream in narwhals-dev/narwhals#1417
Not using doctest style here; none of these return anything, but I want them hinted at.
Mutability is not needed. Also see #3573
Provides a generalized solution to `pd.read_(csv|json)` requiring the names of date columns to attempt parsing. cc @joelostblom The solution is possible in large part thanks to vega/vega-datasets#631 (#3631 (comment))
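A sketch of the general idea, assuming the frictionless-style field listing that `datapackage.json` provides; the helper name is hypothetical:

```python
import pandas as pd

def read_with_dates(url: str, fields: list[dict[str, str]]) -> pd.DataFrame:
    """Derive `parse_dates` from datapackage field types, instead of
    hard-coding date column names per dataset."""
    date_columns = [f["name"] for f in fields if f["type"] in {"date", "datetime"}]
    return pd.read_csv(url, parse_dates=date_columns)
```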
…arrow`
Provides better dtype inference.
Switching to one with a timestamp that `frictionless` recognises:
https://github.com/vega/vega-datasets/blob/8745f5c61ba951fe057a42562b8b88604b4a3735/datapackage.json#L2674-L2689
https://github.com/vega/vega-datasets/blob/8745f5c61ba951fe057a42562b8b88604b4a3735/datapackage.json#L45-L57
Related
Description
Providing a minimal, but up-to-date source for https://github.com/vega/vega-datasets.
This PR takes a different approach to that of https://github.com/altair-viz/vega_datasets, notably:
- Metadata for all datasets is stored in `metadata.parquet`, collected from `npm` and `github`
- `pandas` is not a required dependency
- With the `"polars"` backend, the slowest I've had on a cache-hit is 0.1s to load

Examples
These all come from the docstrings of:
- `Loader`
- `Loader.from_backend`
- `Loader.__call__`
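A rough sketch of what those docstrings demonstrate, assuming the API introduced in this PR; the `"cars"` dataset name and the `url` accessor are illustrative:

```python
from altair.datasets import Loader

data = Loader.from_backend("polars")
cars = data("cars")     # load the dataset with the chosen backend
url = data.url("cars")  # or just the remote URL, e.g. for alt.Chart(url)
```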
Tasks
- `altair.datasets`
- `Loader.__call__`
- `_readers._Reader`
- `_typing.Metadata` (Align with revised descriptions from (c572180))
- `tools.datasets`
  - Application
  - `models.py`
  - `github.py`
  - `npm.py`
  - `semver.py`
- `_PyArrowReader` JSON limitation (3fbc759), (4f5b4de)

Resolved
- Investigate bundling metadata
  - `npm` does not have every version available
  - GitHub does; considered using `github` in place of `npm`, but during testing this was much slower than `npm`
- Plan strategy for user-configurable dataset cache (see the sketch below)
  - If bundled with `altair`, each release would simply ship with changes baked in
  - But this grows the `altair` package size with datasets, and ties them to `altair` versions
  - `ALTAIR_DATASETS_DIR` environment variable
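A sketch of how the environment variable might be used; the variable name comes from this PR, but the exact semantics are an assumption:

```python
import os

# Redirect the dataset cache before anything is loaded.
os.environ["ALTAIR_DATASETS_DIR"] = "/tmp/altair-datasets"

from altair.datasets import Loader

data = Loader.from_backend("polars")
cars = data("cars")  # downloaded once, then served from the directory above
```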
Deferred
- Reducing cache footprint (see the sketch below)
  - e.g. storing `.(csv|tsv|json)` files as `.parquet`
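A sketch of this deferred idea, assuming a flat cache directory of raw downloads; the helper is hypothetical:

```python
from pathlib import Path

import polars as pl

def compact_cache(cache_dir: Path) -> None:
    """Re-encode cached text formats as parquet and drop the originals."""
    for path in cache_dir.iterdir():
        if path.suffix == ".json":
            frame = pl.read_json(path)
        elif path.suffix in {".csv", ".tsv"}:
            frame = pl.read_csv(path, separator="\t" if path.suffix == ".tsv" else ",")
        else:
            continue
        frame.write_parquet(path.with_suffix(".parquet"))
        path.unlink()
```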
- Investigate providing a decorator to add a backend (see the sketch below)
  - `_name: LiteralString`
  - `_read_fn: dict[Extension, Callable[..., IntoDataFrameT]]`
  - `_scan_fn: dict[_ExtensionScan, Callable[..., IntoFrameT]]`
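A sketch of what such a decorator could look like, with `polars` standing in as the registered backend; the registry and decorator are hypothetical, while the three attribute names come from the list above:

```python
from __future__ import annotations

from typing import Any, Callable

import polars as pl

_REGISTRY: dict[str, type] = {}

def register_backend(name: str) -> Callable[[type], type]:
    """Class decorator: stamp `_name` and record the reader under `name`."""
    def wrap(cls: type) -> type:
        cls._name = name
        _REGISTRY[name] = cls
        return cls
    return wrap

@register_backend("polars")
class _PolarsReader:
    _read_fn: dict[str, Callable[..., Any]] = {
        ".csv": pl.read_csv,
        ".parquet": pl.read_parquet,
    }
    _scan_fn: dict[str, Callable[..., Any]] = {".parquet": pl.scan_parquet}
```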
- Investigate ways to utilize https://github.com/vega/vega-datasets/blob/main/SOURCES.md
- Related: `expr` method signatures, docs #3600
- Provide more meaningful info on the state of `ALTAIR_DATASETS_DIR`
- What does the `sha` cover?
- `nw.Expr.(first|last)` vs `nw.Expr.(head|tail)(1)`: not equivalent in a `group_by().agg(...)` context (see the sketch below)
  - `pandas` -> scalar
  - `polars` -> list
- `pl.Enum` translating to non-ordered `pd.Categorical`
  - `polars`-native solution
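A minimal demonstration of the divergence in `polars` itself, which is what makes the two spellings non-interchangeable across backends:

```python
import polars as pl

df = pl.DataFrame({"g": ["a", "a", "b"], "x": [1, 2, 3]})

# `.first()` aggregates to a scalar per group (dtype Int64) ...
print(df.group_by("g").agg(pl.col("x").first()))
# ... while `.head(1)` keeps a list per group (dtype List(Int64)),
# whereas pandas-backed frames flatten this to a scalar.
print(df.group_by("g").agg(pl.col("x").head(1)))
```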