
[Feature] Fetch fail_calc metrics even for passing tests #9808

Closed
3 tasks done
tbog357 opened this issue Mar 23, 2024 · 3 comments
Labels
enhancement New feature or request stale Issues that have gone stale

Comments


tbog357 commented Mar 23, 2024

Is this your first time submitting a feature request?

  • I have read the expectations for open source contributors
  • I have searched the existing issues, and I could not find an existing issue for this feature
  • I am requesting a straightforward extension of existing dbt functionality, rather than a Big Idea better suited to a discussion

Describe the feature

Currently, fail_calc metrics are only populated in the run_results.json file when tests do not pass.

In my opinion, it is reasonable to want to know why tests passed, i.e. how close the metrics are to the threshold (the error_if / warn_if configuration).
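A minimal sketch of the kind of post-processing this feature would enable, assuming `failures` were populated with the fail_calc result even for passing tests. The JSON excerpt and the thresholds are illustrative, modeled on the structure of run_results.json; the file path and field names are taken from this thread, not a guarantee of the final schema:

```python
import json

# Hypothetical excerpt of run_results.json. With the proposed change,
# "failures" would hold the fail_calc result even when "status" is "pass".
run_results = json.loads("""
{
    "results": [
        {
            "unique_id": "test.my_project.not_null_my_model_id",
            "status": "pass",
            "failures": 1
        }
    ]
}
""")

# Illustrative thresholds matching a config of warn_if: ">1", error_if: ">2".
WARN_IF, ERROR_IF = 1, 2

for result in run_results["results"]:
    failures = result["failures"]
    headroom = WARN_IF - failures  # rows to spare before a warning would fire
    print(f'{result["unique_id"]}: status={result["status"]}, '
          f'failures={failures}, headroom before warn={headroom}')
```

With today's behavior, the passing test above would report `"failures": 0`, so this "how close to the threshold are we?" report is not possible.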

Describe alternatives you've considered

No response

Who will this benefit?

No response

Are you interested in contributing this feature?

Yes

Anything else?

No response

@dbeatty10 (Contributor)

Thanks for opening this issue and the associated PR @tbog357 ! 🤩

Agreed with @jtcohen6 in #9657 (comment) that this sounds reasonable and that:

We'll just want to do a little bit of thinking to make sure this wouldn't be a behavior change that would surprise someone upon upgrade.

So let's take a closer look at this!

Suppose we have the following project files:

models/my_model.sql

select 1 as id union all
-- select null as id union all
-- select null as id union all
select null as id

models/_models.yml

models:
  - name: my_model
    columns:
      - name: id
        tests:
          - not_null:
              config:
                severity: error
                error_if: ">2"
                warn_if: ">1"

And then we run this command:

dbt build -s my_model

The contents of target/run_results.json currently contain JSON like this:

    "results": [
        {
            "status": "pass",
            "timing": [...],
            "thread_id": "Thread-1 (worker)",
            "execution_time": 0.04432988166809082,
            "adapter_response": {
                "_message": "OK"
            },
            "message": null,
            "failures": 0,
            "unique_id": "test.my_project.not_null_large_table_id.915a6f562e",
            "compiled": true,
            "compiled_code": "\n    \n    \n\n\n\nselect id\nfrom \"db\".\"dbt_dbeatty\".\"large_table\"\nwhere id is null\n\n\n",
            "relation_name": null
        }
    ],

If your proposed change is adopted, then the only difference would be this portion (i.e., "status": "pass" would remain):

            "failures": 1,

This sounds good to me.

The case where this would affect someone is if they are using the number of failures to determine if there was a non-zero exit code or not. If they were doing this, then they would already have issues today because the warning case already reports a number that isn't 0, and a warning is still considered passing as well.

Since it seems most accurate to report the actual number of rows flagged by the test within run_results.json (irrespective of whether they meet the threshold), I'd feel good about accepting your proposal, provided that we call this out in a migration guide.
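To make the compatibility argument above concrete, here is a sketch of the robust pattern for scripts that consume run_results.json: decide success from `status`, never from `failures`. The data below is illustrative, mirroring the JSON sample earlier in this thread:

```python
import json

# Illustrative excerpt mirroring the run_results.json structure shown above.
results = json.loads("""
[
    {"unique_id": "test.my_project.not_null_my_model_id",
     "status": "pass", "failures": 1},
    {"unique_id": "test.my_project.unique_my_model_id",
     "status": "warn", "failures": 2}
]
""")

def run_succeeded(results):
    # Treat "pass" and "warn" as success, ignoring "failures". A script that
    # instead tested `failures == 0` would already misfire today on the warn
    # case, which reports a non-zero count while still exiting successfully.
    return all(r["status"] in ("pass", "warn") for r in results)

print(run_succeeded(results))  # prints True
```

Under this pattern, populating `failures` for passing tests changes nothing for the consumer, which is why the proposal is unlikely to break well-behaved tooling.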

@dbeatty10 dbeatty10 removed the triage label Apr 3, 2024
@dbeatty10 dbeatty10 removed their assignment Apr 3, 2024
@dbeatty10 dbeatty10 changed the title [Feature] Fetch fail_calc metrics even tests are passed [Feature] Fetch fail_calc metrics even for passing tests Apr 3, 2024

github-actions bot commented Oct 1, 2024

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue or else it will be closed in 7 days.

@github-actions github-actions bot added the stale Issues that have gone stale label Oct 1, 2024
@tbog357 tbog357 closed this as completed Oct 1, 2024

vglocus commented Feb 17, 2025

I stumbled on this issue, and it is the same as my #11312. I'm not sure if we should reopen this one or use the new one.

In addition, there is a PR (#11313) with a suggested fix.
