Pyiceberg 9.0 not adding files with hour(timestamp) partitions #1917
Thanks @MrDerecho for raising this. I checked, and the numbers map to the same hour. Let me dig into this.
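For reference, Iceberg's hour transform buckets a timestamp by whole hours since the Unix epoch. A quick standalone check (a sketch, not PyIceberg's actual implementation) confirms that both bound values from the error fall into the same hour:

```python
# Iceberg's hour transform maps microseconds since epoch to hours since epoch.
def hour_transform(micros: int) -> int:
    return micros // 1_000_000 // 3600

lower = hour_transform(1743465600155254)  # lower_value from the error
upper = hour_transform(1743469198047855)  # upper_value from the error
print(lower, upper, lower == upper)  # 484296 484296 True
```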
Able to reproduce it:

```python
import pyarrow as pa
import pyarrow.parquet as pq
import pytest

from pyiceberg.catalog import Catalog
from pyiceberg.io.pyarrow import schema_to_pyarrow
from pyiceberg.partitioning import PartitionField, PartitionSpec
from pyiceberg.schema import Schema
from pyiceberg.transforms import HourTransform
from pyiceberg.types import NestedField, TimestampType
from pyiceberg.utils.datetime import micros_to_timestamp


@pytest.mark.integration
def test_add_files_hour_transform(session_catalog: Catalog) -> None:
    identifier = "default.test_add_files_hour_transform"

    schema = Schema(NestedField(1, "hourly", TimestampType()))
    schema_arrow = schema_to_pyarrow(schema, include_field_ids=False)
    spec = PartitionSpec(
        PartitionField(source_id=1, field_id=1000, transform=HourTransform(), name="spec_hour")
    )
    tbl = _create_table(session_catalog, identifier, format_version=1, schema=schema, partition_spec=spec)

    file_path = "s3://warehouse/default/test_add_files_hour_transform/test.parquet"
    # Two timestamps within the same hour (484296 hours since epoch).
    arrow_table = pa.Table.from_pylist(
        [
            {"hourly": micros_to_timestamp(1743465600155254)},
            {"hourly": micros_to_timestamp(1743469198047855)},
        ],
        schema=schema_arrow,
    )

    fo = tbl.io.new_output(file_path)
    with fo.create(overwrite=True) as fos:
        with pq.ParquetWriter(fos, schema=schema_arrow) as writer:
            writer.write_table(arrow_table)

    tbl.add_files(file_paths=[file_path])
```

(`_create_table` is a helper from PyIceberg's integration-test suite.)
Fokko added a commit to Fokko/iceberg-python that referenced this issue (Apr 16, 2025).
Thanks for reporting this @MrDerecho! The fix should be included in the next patch release, 0.9.1.
Thanks again @MrDerecho for raising this issue, we'll make sure that it gets backported to 0.9.1.
Fokko added a commit that referenced this issue (Apr 17, 2025).
Found out I broke this myself after doing a `git bisect`:

```
36d383d is the first bad commit
commit 36d383d
Author: Fokko Driesprong <fokko@apache.org>
Date:   Thu Jan 23 07:50:54 2025 +0100

    PyArrow: Avoid buffer-overflow by avoid doing a sort (#1555)
```

Second attempt of #1539. This was already being discussed back here: #208 (comment). This PR changes from doing a sort, and then a single pass over the table, to the approach where we determine the unique partition tuples and filter on them individually. Fixes #1491, because the sort caused buffers to be joined where it would overflow in Arrow. I think this is an issue on the Arrow side, and it should automatically break up into smaller buffers. The `combine_chunks` method does this correctly.

Now:

```
Run 0 took: 0.42877754200890195
Run 1 took: 0.2507691659993725
Run 2 took: 0.24833179199777078
Run 3 took: 0.24401691700040828
Run 4 took: 0.2419595829996979
Average runtime of 0.28 seconds
```

Before:

```
Run 0 took: 1.0768639159941813
Run 1 took: 0.8784021250030492
Run 2 took: 0.8486490420036716
Run 3 took: 0.8614017910003895
Run 4 took: 0.8497851670108503
Average runtime of 0.9 seconds
```

So it comes with a nice speedup as well :)

Co-authored-by: Kevin Liu <kevinjqliu@users.noreply.github.com>

```
 pyiceberg/io/pyarrow.py                    |  129 ++-
 pyiceberg/partitioning.py                  |   39 +-
 pyiceberg/table/__init__.py                |    6 +-
 pyproject.toml                             |    1 +
 tests/benchmark/test_benchmark.py          |   72 ++
 tests/integration/test_partitioning_key.py | 1299 ++++++++++++++--------------
 tests/table/test_locations.py              |    2 +-
 7 files changed, 805 insertions(+), 743 deletions(-)
 create mode 100644 tests/benchmark/test_benchmark.py
```

Closes #1917
Apache Iceberg version

0.9.0 (latest release)

Please describe the bug 🐞

Note: this feature still works in pyiceberg 0.8.1.

Currently, I cannot use the `table.add_files` method to add files to an Iceberg table due to:

```
ValueError: Cannot infer partition value from parquet metadata as there are more than one partition values for Partition Field: timestamp_hour. lower_value=1743465600155254, upper_value=1743469198047855
```

When I inspect the parquet metadata, the stats max and min values are within an hour; the same files can be added using pyiceberg 0.8.1.

Willingness to contribute
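The error suggests the check compared the raw microsecond statistics from the parquet footer instead of their transformed values. A minimal sketch of the corrected logic (hypothetical helper names, not PyIceberg's actual internals): apply the partition transform to both bounds first, and only raise if the transformed values differ.

```python
from typing import Callable

def infer_partition_value(lower: int, upper: int, transform: Callable[[int], int]) -> int:
    """Infer a single partition value from parquet column stats by
    comparing the *transformed* bounds, not the raw microsecond values."""
    lower_t, upper_t = transform(lower), transform(upper)
    if lower_t != upper_t:
        raise ValueError(
            f"Cannot infer partition value: lower={lower_t}, upper={upper_t}"
        )
    return lower_t

# The hour transform: microseconds since epoch -> hours since epoch.
hour = lambda micros: micros // 1_000_000 // 3600
print(infer_partition_value(1743465600155254, 1743469198047855, hour))  # 484296
```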