Astropy CSV table reader using pyarrow #17706
base: main
Conversation
Thank you for your contribution to Astropy! 🌌 This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.
👋 Thank you for your draft pull request! Did you know that you can use …
Thanks! I want to benchmark this, but does that mean we need to install pyarrow in https://github.com/astropy/astropy/blob/main/.github/workflows/ci_benchmark.yml?
There are one-time benchmarks here: #16869 (comment). These demonstrate that pyarrow is significantly faster.
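A minimal timing sketch of the kind of comparison involved (hypothetical "big.csv" file; this is an illustration, not the benchmark setup used in #16869):

    import time

    from astropy.table import Table
    from pyarrow import csv as pa_csv

    t0 = time.perf_counter()
    pa_table = pa_csv.read_csv("big.csv")  # multithreaded by default
    print(f"pyarrow.csv.read_csv: {time.perf_counter() - t0:.2f} s")

    t0 = time.perf_counter()
    tbl = Table.read("big.csv", format="ascii.csv", fast_reader=True)
    print(f"astropy fast reader:  {time.perf_counter() - t0:.2f} s")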
Nice! I like the general idea; my only more major comment is that I'm not sure one should add the commented-line skipper at this initial stage.
A follow-up, I guess, would be to make this the default "first try" if pyarrow is available, and then deprecate the fast reader?
It does seem Table.{from,to}_pyarrow methods would be reasonable, but better as follow-up.
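For reference, a rough sketch of what such conversion helpers might look like (hypothetical names, not part of this PR; ignores masked and multidimensional columns):

    import pyarrow as pa
    from astropy.table import Table

    def table_from_pyarrow(pa_table: pa.Table) -> Table:
        # ChunkedArray.to_numpy() copies each column into a contiguous numpy array
        return Table({name: pa_table[name].to_numpy() for name in pa_table.column_names})

    def table_to_pyarrow(tbl: Table) -> pa.Table:
        # pa.table() accepts a mapping of column name -> array-like
        return pa.table({name: tbl[name].data for name in tbl.colnames})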
Since @dhomeier has shown pyarrow to be significantly faster, it would be good to have it for the biggest tables. And this is a relatively thin wrapper just to match the API we are used to, so why not? For smaller tables we have other established solutions which are more flexible (not least our own pure-Python readers and our own C reader). How many GB-sized tables are there in the wild with commented lines that are not in the header? I'm just worried about user confusion along the lines of "It's reading this table just fine, and that table that's almost identical (but with comment lines) crashes with a Python out-of-memory error". Of course, that only applies to the biggest tables of them all. For CSV files in the 0.5-1 GB range, this would probably still be faster AND would fit into memory (and maybe not be too slow) on modern machines. So it's a trade-off.
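To make the memory concern concrete: pyarrow.csv has no comment-character option, so a commented-line skipper has to filter the file before pyarrow sees it. A minimal sketch of that approach (hypothetical helper, not the PR's actual implementation):

    import io

    from pyarrow import csv as pa_csv

    def read_csv_skip_comments(path: str, comment: bytes = b"#"):
        # Copy non-comment lines into an in-memory buffer; for a file with
        # comments this holds (nearly) the whole file in memory at once,
        # which is the failure mode described above for GB-sized tables.
        buf = io.BytesIO()
        with open(path, "rb") as fh:
            for line in fh:
                if not line.lstrip().startswith(comment):
                    buf.write(line)
        buf.seek(0)
        return pa_csv.read_csv(buf)  # read_csv accepts file-like objects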
This may be a bit too technical for a first round of review, but I wanted to get this kind of feedback in early so it doesn't grow into too much of a pain later: here are a couple of suggestions and comments, mostly about type annotations and internal consistency.
Thanks for the great comments! I think I've addressed them all, or at least responded. Sounds like I have agreement to keep going ahead on this and start working on tests, docs, etc.?
def get_convert_options(
    include_names: list | None,

Suggested change:
-    include_names: list | None,
+    include_names: list[str] | None,
def get_convert_options(
    include_names: list | None,
    dtypes: dict[str, "npt.DTypeLike"] | None,
    null_values: list | None,

Suggested change:
-    null_values: list | None,
+    null_values: list[str] | None,
def get_read_options(
    header_start: int | None,
    data_start: int | None,
    names: list | None,

Suggested change:
-    names: list | None,
+    names: list[str] | None,
    include_names: list[str] | None = None,
    dtypes: dict[str, "npt.DTypeLike"] | None = None,
    comment: str | None = None,
    null_values: list | None = None,

Suggested change:
-    null_values: list | None = None,
+    null_values: list[str] | None = None,
Description
This pull request is a draft implementation of a fast CSV reader for astropy that uses pyarrow.csv.read_csv. This was discussed in #16869.
Before going much further, I am hoping to get feedback on the general implementation and API. The goal was to make an interface that will be familiar to astropy io.ascii users, while exposing some additional features brought by pyarrow read_csv. Currently the interface is not complete, but the idea is to keep it clean and consistent with astropy. A quick demonstration notebook that you can use to play with this is at: https://gist.github.com/taldcroft/ac15bc516a7bf7c76f9eec644c787298
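For flavor, the underlying pyarrow calls such a wrapper builds on look roughly like this (a sketch of the mechanism, not the PR's exact code; file name and option values are placeholders):

    from astropy.table import Table
    from pyarrow import csv as pa_csv

    read_opts = pa_csv.ReadOptions(skip_rows=0)
    parse_opts = pa_csv.ParseOptions(delimiter=",")
    convert_opts = pa_csv.ConvertOptions(include_columns=["a", "b"], null_values=[""])

    pa_table = pa_csv.read_csv(
        "data.csv",
        read_options=read_opts,
        parse_options=parse_opts,
        convert_options=convert_opts,
    )
    # Convert the pyarrow Table into an astropy Table column by column
    tbl = Table({name: pa_table[name].to_numpy() for name in pa_table.column_names})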
Fixes #16869
Related
pandas-dev/pandas#54466