
Commit

bump version
drizk1 committed Apr 15, 2024
1 parent 2f9199d commit be3bb2b
Showing 3 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
 name = "TidierFiles"
 uuid = "8ae5e7a9-bdd3-4c93-9cc3-9df4d5d947db"
 authors = ["Daniel Rizk <[email protected]> and contributors"]
-version = "0.1.0"
+version = "0.1.1"

 [deps]
 Arrow = "69666777-d1a9-59fb-9406-91d4454c9d45"
10 changes: 5 additions & 5 deletions docs/examples/UserGuide/Arrow.jl
@@ -1,19 +1,19 @@
 # Arrow file reading and writing is powered by Arrow.jl
 # ## `read_arrow`
-# read_arrow(path; skip=0, n_max=Inf, col_select=nothing)
+# `read_arrow(path; skip=0, n_max=Inf, col_select=nothing)`

-# This function reads a Parquet (.parquet) file into a DataFrame. The arguments are:
+# This function reads an Arrow (.arrow) file into a DataFrame. The arguments are:

-# - `path`: The path to the .parquet file.
+# - `path`: The path to the .arrow file.
 # - `skip`: Number of initial rows to skip before reading data. Default is 0.
 # - `n_max`: Maximum number of rows to read. Default is `Inf` (read all rows).
 # - `col_select`: Optional vector of symbols or strings to select which columns to load. Default is `nothing` (load all columns).

 # ## `write_arrow`
 # `write_arrow(df, path)`

-# This function writes a DataFrame to a Parquet (.parquet) file. The arguments are:
+# This function writes a DataFrame to an Arrow (.arrow) file. The arguments are:

 # - `df`: The DataFrame to be written to a file.
-# - `path`: The path where the .parquet file will be created. If a file at this path already exists, it will be overwritten.
+# - `path`: The path where the .arrow file will be created. If a file at this path already exists, it will be overwritten.
 # - Additional arguments for writing arrow files are not outlined here, but should be available through the same interface of `Arrow.write`. Refer to Arrow.jl [documentation](https://arrow.apache.org/julia/stable/manual/#Arrow.write) at their page for further explanation.
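For reference, a minimal usage sketch based on the `read_arrow`/`write_arrow` signatures documented above; the DataFrame contents and the "example.arrow" path are illustrative and not part of the commit:

using DataFrames, TidierFiles

df = DataFrame(a = 1:3, b = ["x", "y", "z"])  # illustrative data

# Write the DataFrame to an Arrow file, then read it back with the documented defaults.
write_arrow(df, "example.arrow")
df2 = read_arrow("example.arrow"; skip = 0, n_max = Inf, col_select = nothing)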
2 changes: 1 addition & 1 deletion docs/examples/UserGuide/parquet.jl
@@ -1,6 +1,6 @@
 # Parquet file reading and writing is powered by Parquet2.jl
 # ## `read_parquet`
-# read_parquet(path; col_names=true, skip=0, n_max=Inf, col_select=nothing)
+# `read_parquet(path; col_names=true, skip=0, n_max=Inf, col_select=nothing)`

 # This function reads a Parquet (.parquet) file into a DataFrame. The arguments are:

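A similarly minimal sketch assuming the `read_parquet` signature shown above; "example.parquet" is an illustrative path:

using TidierFiles

# Read a Parquet file into a DataFrame, spelling out the documented defaults.
df = read_parquet("example.parquet"; col_names = true, skip = 0, n_max = Inf, col_select = nothing)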

2 comments on commit be3bb2b

@drizk1 (Member, Author) commented on be3bb2b · Apr 15, 2024


@JuliaRegistrator register

Release notes:

Changes

  • Adds read/write support for .arrow and .parquet

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/104935

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.1.1 -m "<description of version>" be3bb2bdb0c303c2fcd5d9d68a79a6f4659f3c54
git push origin v0.1.1
