BUG: Poor GroupBy Performance with ArrowDtype(...) wrapped types #60861
Thanks for the report!
I don't think this expectation is correct. @jorisvandenbossche @WillAyd - should […] be converting to […]?
Hmm, I don't think so - the […] Do we know where the bottleneck is performance-wise? I agree with the OP that the performance should be equivalent between the two types, if not slightly faster for the `ArrowDtype` variant.
(Referenced code: `pandas/core/arrays/string_arrow.py`, line 128, at commit 70edaa0)
@WillAyd looks like the bottleneck is due to […]
Thanks for that insight @snitish. Yeah, anything we can do to remove the layers of indirection, and particularly Arrow <-> NumPy copies, would be very beneficial.
FWIW I realize the OP is talking about strings, but this likely applies to the wrapped Arrow types in general. You can see the same performance issues using another Arrow type like decimal:

```python
In [108]: df = pd.DataFrame({
     ...:     "key": pd.Series(range(100_000), dtype=pd.ArrowDtype(pa.int64())),
     ...:     "val": pd.Series(["3.14"] * 100_000, dtype=pd.ArrowDtype(pa.decimal128(10, 4)))
     ...: })

In [109]: %timeit df.groupby("key").sum()
2.83 s ± 131 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [110]: %timeit pa.TableGroupBy(pa.Table.from_pandas(df), "key").aggregate([("val", "sum")])
8.18 ms ± 589 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
@WillAyd also noticed that `convert_dtypes(dtype_backend="pyarrow")` converts string columns to `ArrowDtype(pa.string())` rather than `StringDtype("pyarrow")`.
I meant to open a separate issue about this:

```python
In [3]: df = pd.DataFrame({"val": ["test"]}).convert_dtypes(dtype_backend="pyarrow")

In [4]: type(df.dtypes["val"])
Out[4]: pandas.core.dtypes.dtypes.ArrowDtype
```
Yes, that is expected (although it is very annoying that they have the same repr). See also https://pandas.pydata.org/pdeps/0014-string-dtype.html for context.
This is of course not a simple case with one obvious correct answer, but in general the idea is that […]. Of course, if we are going to get more of the non-ArrowDtype nullable dtypes backed by pyarrow (like StringDtype now), this division gets a bit tricky (and is another reason to move forward with #58455).
Makes sense, thanks for clarifying. I'd just misinterpreted the relation between the two dtypes.
Thanks @kzvezdarov that is a great analysis.
Fair point. The qualification should be that this applies to […]
Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [x] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
Issue Description
Grouping by and then aggregating on a dataframe that contains `ArrowDtype(pyarrow.string())` columns is orders of magnitude slower than performing the same operations on an equivalent dataframe whose corresponding string column is of any other acceptable string type (e.g. `string`, `StringDtype("python")`, `StringDtype("pyarrow")`). This is surprising in particular because `StringDtype("pyarrow")` does not exhibit the same problem.

Note that in the bug reproduction example, `DataFrame.convert_dtypes` with `dtype_backend="pyarrow"` converts `string` columns to `ArrowDtype(pyarrow.string())` rather than `StringDtype("pyarrow")`.

Finally, here's a sample run, with dtypes printed out for clarity; I've reproduced this on both OS X and openSUSE Tumbleweed for the listed pandas and pyarrow versions (as well as current `main`):

Expected Behavior
Aggregation performance on `ArrowDtype(pyarrow.string())` columns should be comparable to aggregation performance on `StringDtype("pyarrow")` and `string` typed columns.

Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.13.1
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : en_CA.UTF-8
LANG : None
LOCALE : en_CA.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None