PDEP-10: Add pyarrow as a required dependency #52711
Are there any small code samples we can add to drive this point home? I think we would still make a runtime determination whether to return a pyarrow- or numpy-backed object even if both are installed, no?
not sure this comment by Will has been addressed (unless I missed it?)
to make it easier to find: the link is here, and says:
Haven't kept up with this, but how are the plans to add the new numpy string dtype (xref #47884) going to affect the rationale here?
I would assume performance of the numpy string dtype would be on par with the pyarrow one.
this is still years away #52711 (comment)
I can't remember the perf comparison - @ngoldbaum do you want to comment here?
The linked comment said that numpy strings are available "within a year or so".
This does not seem to be dissimilar to a pandas 3.0 release date now proposed here?
I interpreted that as "ready within numpy" - adding in extra time to make them available in pandas, plus accounting for Hofstadter's law, "years away" seems realistic
(Nathan - we discussed timelines before, but I didn't write them down so have forgotten them, apologies)
I hope it doesn't take that long!
The earliest pandas could officially support the dtype I'm working on is after the release of NumPy 2.0 - currently scheduled for January 2024. This assumes the new dtype API is available for downstream use in NumPy 2.0 without needing to set an environment variable. I'm hoping to start shipping experimental support in pandas behind the environment variable after NumPy 1.25 is released this summer, as that version of NumPy will hopefully have a version of the new dtype API that is usable for pandas' needs. Unfortunately, the version in NumPy 1.24 is broken and is missing a lot of features we've added since that release.
The memory usage should be comparable with pyarrow strings. Both store UTF-8 bytestreams internally. I don't know offhand if Arrow uses the small string optimization (storing the string content in the space normally reserved for a pointer to the string). It's difficult to compare memory usage exactly, since operating-system facilities only let you measure a process's peak memory usage, and not all allocations necessarily go through Python's allocation-tracking machinery. I'm hoping to do a more careful memory usage benchmark as part of the NEP I'm writing.
The main difference in storage is that right now I'm using an individual heap allocation for each string array entry. Arrow instead does a single allocation for all the array entries and keeps a secondary array of offsets to find the data for each string element. I've thought a bit about following that approach, but it would mean we would have to either disallow mutating string arrays or accept pathological behavior where enlarging a single array element could cause the entire array to be reallocated. It would also be nice to be able to use the small string optimization, and Arrow's approach with an array of offsets would make that more difficult.
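A minimal pure-Python sketch of the offset-based layout described above (illustrative only - the function names are hypothetical and this is not Arrow's actual implementation):

```python
# Arrow-style string storage: one contiguous UTF-8 data buffer plus an
# offsets array of length n + 1, instead of one heap allocation per element.
def encode_strings(strings):
    data = bytearray()
    offsets = [0]
    for s in strings:
        data += s.encode("utf-8")
        offsets.append(len(data))
    return bytes(data), offsets

def get_string(data, offsets, i):
    # Element i lives at data[offsets[i]:offsets[i + 1]].
    return data[offsets[i]:offsets[i + 1]].decode("utf-8")
```

Note that enlarging element `i` in this layout would shift every later offset, which is exactly the reallocation pathology a mutable array would run into.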
For performance, do you mean string manipulation operations like case folding or padding? In principle NumPy could add string ufuncs that would allow for fast implementations, but right now NumPy doesn't have a namespace for that. Currently all the comparison operators are implemented as ufuncs, but no other string functionality is. There are string manipulation functions in the `np.char` namespace, but they just do a for loop over the array elements and call string functions on the scalars. I don't want to promise that string ufuncs will definitely happen in the future, but there are no real technical blockers, just social ones. NumPy doesn't currently have any ufuncs that only make sense for string data, so some thought needs to go into where in the namespace they should go. It will also require a decent amount of implementation work to add the functions, although mostly just tedious C coding.
Overall the goal is to facilitate a straightforward transition from workflows that used object string arrays while enabling possible performance improvements in the future that are currently impossible with object string arrays.
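The ufunc-vs-`np.char` split described above can be seen directly in current NumPy:

```python
import numpy as np

# String comparisons dispatch to real ufuncs (fast C loops), while the
# np.char functions loop over elements at the Python level, calling the
# scalar string methods.
a = np.array(["apple", "banana"], dtype="U10")
b = np.array(["apple", "cherry"], dtype="U10")

mask = a == b             # np.equal ufunc
upper = np.char.upper(a)  # element-wise Python-level wrapper
```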
Generally, pandas is moving away from mutability in some sense (the CoW adoption), so that isn't very high on my priority list.
While a storage-efficient string dtype is nice, from a pandas PoV it's kind of pointless if the operations aren't fast. One of the biggest advantages of Arrow is not just that we can reduce memory, but that most operations are significantly faster - depending on what you are doing, it can be an order of magnitude.
I am referring mostly to things like the `str` accessor, but also things like factorization etc. So even if NumPy strings are ready in around a year (or some other time period), that's not helpful for us as long as NumPy does not ship fast algorithms on top of them.
Sorry if this sounds harsh; that wasn't my intention. But having the string dtype without algorithms gets us only halfway compared to what PyArrow does, so this isn't a compelling argument against making Arrow strings the default.
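For context on the factorization mentioned above: it maps each value to an integer code plus a table of uniques. A minimal pure-Python sketch (hypothetical names - pandas' actual implementation uses a C-level hash table):

```python
def factorize(values):
    # First-occurrence order defines the uniques table; each value gets
    # the integer code of its entry in that table.
    table = {}
    codes = []
    for v in values:
        codes.append(table.setdefault(v, len(table)))
    uniques = list(table)
    return codes, uniques
```

The speed of this hashing step on large string arrays is exactly the kind of operation where backend-level performance matters.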
At a minimum, a fast regex engine could potentially help, as some of the `str` accessor functions were (maybe still are) implemented using regex for `string[pyarrow]` where the functions did not exist in PyArrow (or in the minimum version supported at the time).
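A sketch of the fallback pattern described above (the helper names are hypothetical; the real pandas code paths differ):

```python
import re

# Prefer a fast native kernel when one is available; otherwise fall back
# to a Python-level regex loop over the elements, as pandas did for some
# str accessor methods on string[pyarrow].
def str_contains(values, pat, native_kernel=None):
    if native_kernel is not None:
        return native_kernel(values, pat)
    regex = re.compile(pat)
    return [bool(regex.search(v)) if v is not None else None for v in values]
```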
Are you suggesting implementing this in pandas? That's something I personally don't have any interest in doing, and I would also be at least -0 on adding it for the time being. Having this stuff in Arrow is nice since it reduces our maintenance burden, and it also gets better test coverage since more libraries will depend on it.
I suspect a regex engine would be implemented in NumPy, and then any `str` accessor functions not implemented in NumPy could be implemented in pandas using either regex or an object fallback (just like we did for PyArrow initially).