So Polars seems to be roughly as fast as pandas for simple operations on larger DataFrames. Is there nothing that can be done to make these operations faster?
```python
import numpy as np
import polars as pl

# Define the number of rows
n_rows = 200_000_000
df = pl.DataFrame({"col1": np.random.rand(n_rows)})

%timeit -r1 -n7 df.select(pl.col("col1") + 1)

df = df.to_pandas()
%timeit -r1 -n7 df["col1"] + 1
```
504 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 7 loops each)
484 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 7 loops each)
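As a point of comparison, one variation a reader could benchmark is routing the same expression through Polars' lazy API, which hands scheduling decisions to the query engine. This is only a sketch assuming the same `df` and column name as above; whether it helps here is an open question, since a single elementwise `+ 1` over 200 million floats is largely memory-bandwidth-bound, so extra threads may not buy much.

```python
import numpy as np
import polars as pl

n_rows = 200_000_000  # same row count as the benchmark above
df = pl.DataFrame({"col1": np.random.rand(n_rows)})

# Eager expression, as in the original benchmark.
eager = df.select(pl.col("col1") + 1)

# Lazy variant of the same expression: build a query plan first,
# then let collect() decide how to execute (and parallelise) it.
lazy = df.lazy().select(pl.col("col1") + 1).collect()
```

In an IPython session both variants can be timed with the same `%timeit -r1 -n7 ...` incantation used above.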
Chuck321123 changed the title from "Using multithreading for simple operations on larger dfs" to "Feature request: Use multithreading for simple operations on larger dfs" on Nov 22, 2024.