Pyiceberg 0.9.0 not adding files with hour(timestamp) partitions #1917


Closed · 1 of 3 tasks
MrDerecho opened this issue Apr 15, 2025 · 5 comments · Fixed by #1925

@MrDerecho

Apache Iceberg version

0.9.0 (latest release)

Please describe the bug 🐞

Note: this feature still works in pyiceberg 0.8.1.

Currently, I cannot use the `table.add_files` method to add files to an Iceberg table: it fails with `ValueError: Cannot infer partition value from parquet metadata as there are more than one partition values for Partition Field: timestamp_hour. lower_value=1743465600155254, upper_value=1743469198047855`. When I inspect the parquet metadata, the column's min and max statistics fall within the same hour. The same files can still be added with pyiceberg 0.8.1.

(screenshot attached)
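
For reference, those statistics can be read straight from the parquet footer. A minimal sketch with pyarrow (the path is a placeholder for the actual data file that `add_files` rejects):

```python
import pyarrow.parquet as pq

# Hypothetical path; point this at the data file that add_files rejects.
metadata = pq.ParquetFile("test.parquet").metadata
stats = metadata.row_group(0).column(0).statistics
print(stats.min, stats.max)  # per the report, both bounds fall within the same hour
```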

Willingness to contribute

  • I can contribute a fix for this bug independently
  • I would be willing to contribute a fix for this bug with guidance from the Iceberg community
  • I cannot contribute a fix for this bug at this time
@Fokko
Contributor

Fokko commented Apr 15, 2025

Thanks @MrDerecho for raising this. I checked, and the numbers map to the same hour:

(screenshot attached)
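
For illustration (a sketch, not the original screenshot), applying the table's hour transform to the two bounds from the error message yields the same partition value:

```python
from pyiceberg.transforms import HourTransform
from pyiceberg.types import TimestampType

# lower_value/upper_value from the ValueError, in microseconds since epoch
to_hours = HourTransform().transform(TimestampType())
print(to_hours(1743465600155254))  # 484296 (2025-04-01 00:00 UTC)
print(to_hours(1743469198047855))  # 484296 -> same hourly partition
```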

Let me dig into this

@Fokko
Contributor

Fokko commented Apr 16, 2025

Able to reproduce it:

# Imports assumed from pyiceberg's integration test suite, where the
# `session_catalog` fixture and `_create_table` helper are defined.
import pyarrow as pa
import pyarrow.parquet as pq
import pytest

from pyiceberg.catalog import Catalog
from pyiceberg.io.pyarrow import schema_to_pyarrow
from pyiceberg.partitioning import PartitionField, PartitionSpec
from pyiceberg.schema import Schema
from pyiceberg.transforms import HourTransform
from pyiceberg.types import NestedField, TimestampType
from pyiceberg.utils.datetime import micros_to_timestamp


@pytest.mark.integration
def test_add_files_hour_transform(session_catalog: Catalog) -> None:
    identifier = "default.test_add_files_hour_transform"

    schema = Schema(
        NestedField(1, "hourly", TimestampType())
    )
    schema_arrow = schema_to_pyarrow(schema, include_field_ids=False)
    spec = PartitionSpec(
        PartitionField(source_id=1, field_id=1000, transform=HourTransform(), name="spec_hour")
    )

    tbl = _create_table(session_catalog, identifier, format_version=1, schema=schema, partition_spec=spec)

    file_path = "s3://warehouse/default/test_add_files_hour_transform/test.parquet"

    # The two timestamps from the bug report; both fall within the same hour.
    arrow_table = pa.Table.from_pylist(
        [
            {"hourly": micros_to_timestamp(1743465600155254)},
            {"hourly": micros_to_timestamp(1743469198047855)},
        ],
        schema=schema_arrow,
    )

    # Write the parquet file through the table's FileIO, then register it.
    fo = tbl.io.new_output(file_path)
    with fo.create(overwrite=True) as fos:
        with pq.ParquetWriter(fos, schema=schema_arrow) as writer:
            writer.write_table(arrow_table)

    # On 0.9.0 this raises the ValueError from the issue description.
    tbl.add_files(file_paths=[file_path])

@Fokko
Contributor

Fokko commented Apr 16, 2025

Did a git bisect, and I introduced the bug:

36d383dcb676ae5ef59c34cc2910d16a8e30a80c is the first bad commit
commit 36d383dcb676ae5ef59c34cc2910d16a8e30a80c
Author: Fokko Driesprong <fokko@apache.org>
Date:   Thu Jan 23 07:50:54 2025 +0100

    PyArrow: Avoid buffer-overflow by avoid doing a sort (#1555)
    
    Second attempt of https://github.com/apache/iceberg-python/pull/1539
    
    This was already being discussed back here:
    https://github.com/apache/iceberg-python/issues/208#issuecomment-1889891973
    
    This PR changes from doing a sort, and then a single pass over the table
    to the approach where we determine the unique partition tuples filter on
    them individually.
    
    Fixes https://github.com/apache/iceberg-python/issues/1491
    
    Because the sort caused buffers to be joined where it would overflow in
    Arrow. I think this is an issue on the Arrow side, and it should
    automatically break up into smaller buffers. The `combine_chunks` method
    does this correctly.
    
    Now:
    
    ```
    0.42877754200890195
    Run 1 took: 0.2507691659993725
    Run 2 took: 0.24833179199777078
    Run 3 took: 0.24401691700040828
    Run 4 took: 0.2419595829996979
    Average runtime of 0.28 seconds
    ```
    
    Before:
    
    ```
    Run 0 took: 1.0768639159941813
    Run 1 took: 0.8784021250030492
    Run 2 took: 0.8486490420036716
    Run 3 took: 0.8614017910003895
    Run 4 took: 0.8497851670108503
    Average runtime of 0.9 seconds
    ```
    
    So it comes with a nice speedup as well :)
    
    ---------
    
    Co-authored-by: Kevin Liu <kevinjqliu@users.noreply.github.com>

 pyiceberg/io/pyarrow.py                    |  129 ++-
 pyiceberg/partitioning.py                  |   39 +-
 pyiceberg/table/__init__.py                |    6 +-
 pyproject.toml                             |    1 +
 tests/benchmark/test_benchmark.py          |   72 ++
 tests/integration/test_partitioning_key.py | 1299 ++++++++++++++--------------
 tests/table/test_locations.py              |    2 +-
 7 files changed, 805 insertions(+), 743 deletions(-)
 create mode 100644 tests/benchmark/test_benchmark.py
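
To illustrate the approach that commit describes, here is a rough, hypothetical sketch (the helper name and the assumption that the partition columns already hold transformed values are mine; this is not pyiceberg's actual code): instead of sorting the table and walking it in a single pass, the unique partition tuples are computed first and the table is filtered once per tuple.

```python
import pyarrow as pa
import pyarrow.compute as pc

def split_by_partition(table: pa.Table, partition_cols: list[str]) -> list[pa.Table]:
    """Rough sketch: one sub-table per unique partition tuple, without sorting."""
    # Distinct partition tuples, obtained via a group-by with no aggregations.
    unique = table.group_by(partition_cols).aggregate([])
    parts = []
    for row in unique.to_pylist():
        # Build a boolean mask selecting the rows of this partition tuple.
        mask = None
        for col, value in row.items():
            cond = pc.equal(table[col], value)
            mask = cond if mask is None else pc.and_(mask, cond)
        parts.append(table.filter(mask))
    return parts
```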

Fokko added a commit to Fokko/iceberg-python that referenced this issue Apr 16, 2025
@kevinjqliu
Contributor

kevinjqliu commented Apr 16, 2025

Thanks for reporting this @MrDerecho! The fix should be included in the next patch release, 0.9.1
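
Until 0.9.1 is available, a possible workaround (given the report that this still works on the previous release) is to pin `pyiceberg==0.8.1` when adding files to tables partitioned by `hour(timestamp)`.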

@Fokko
Contributor

Fokko commented Apr 16, 2025

Thanks again @MrDerecho for raising this issue; we'll make sure that it gets backported to 0.9.1

Fokko added a commit that referenced this issue Apr 17, 2025

Found out I broke this myself after doing a `git bisect` (the same bisect output as quoted in the comment above).

Closes #1917
