[Feature] Fetch `fail_calc` metrics even for passing tests #9808
Thanks for opening this issue and the associated PR @tbog357! 🤩 Agreed with @jtcohen6 in #9657 (comment) that this sounds reasonable.
So let's take a closer look at this! Suppose we have the following project files:
```sql
select 1 as id union all
-- select null as id union all
-- select null as id union all
select null as id
```
```yaml
models:
  - name: my_model
    columns:
      - name: id
        tests:
          - not_null:
              config:
                severity: error
                error_if: ">2"
                warn_if: ">1"
```

And then we run this command: `dbt build -s my_model`

The contents of `run_results.json` include:

```json
"results": [
  {
    "status": "pass",
    "timing": [...],
    "thread_id": "Thread-1 (worker)",
    "execution_time": 0.04432988166809082,
    "adapter_response": {
      "_message": "OK"
    },
    "message": null,
    "failures": 0,
    "unique_id": "test.my_project.not_null_large_table_id.915a6f562e",
    "compiled": true,
    "compiled_code": "\n \n \n\n\n\nselect id\nfrom \"db\".\"dbt_dbeatty\".\"large_table\"\nwhere id is null\n\n\n",
    "relation_name": null
  }
],
```

If your proposed change is adopted, then the only difference would be this portion (i.e., `"failures": 1` instead of `"failures": 0`). This sounds good to me. The case where this would affect someone is if they are relying on the number of failures reported for passing tests. Since it seems most accurate to report the actual number of rows flagged by the test within `run_results.json`, this change seems like the right direction.
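To illustrate how a consumer could use the proposed behavior, here is a minimal sketch that reads test entries out of a `run_results.json`-shaped dict and reports how close each passing test came to its `warn_if` threshold. The `summarize` helper and the hard-coded thresholds are hypothetical: `run_results.json` does not embed the test's `warn_if`/`error_if` config, so a real consumer would have to supply those values separately.

```python
# Hypothetical thresholds mirroring the warn_if: ">1" / error_if: ">2" config
# above; run_results.json does not carry the test's config, so a consumer
# must hard-code or look these up elsewhere.
WARN_IF = 1   # warn when failures > 1
ERROR_IF = 2  # error when failures > 2

def summarize(run_results: dict) -> list[dict]:
    """Collect fail_calc counts and headroom to the warn threshold per test."""
    rows = []
    for result in run_results.get("results", []):
        if not result["unique_id"].startswith("test."):
            continue  # skip models, seeds, snapshots, etc.
        failures = result.get("failures") or 0
        rows.append({
            "unique_id": result["unique_id"],
            "status": result["status"],
            "failures": failures,
            # how many more flagged rows before warn_if (">1") trips
            "headroom_to_warn": max(WARN_IF - failures + 1, 0),
        })
    return rows

# In real usage this dict would come from json.load(open("target/run_results.json")).
example = {
    "results": [
        {
            "unique_id": "test.my_project.not_null_large_table_id.915a6f562e",
            "status": "pass",
            "failures": 1,
        }
    ]
}
print(summarize(example))
```

With the proposed change, a passing test with one flagged row reports `failures: 1` and one row of headroom before the warn threshold trips.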
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue or else it will be closed in 7 days.
Is this your first time submitting a feature request?
Describe the feature
Currently, `fail_calc` metrics are only populated in the `run_results.json` file when tests do not pass.
In my opinion, it is reasonable to know why tests passed, i.e. how close the metrics came to the threshold (the `error_if`/`warn_if` configuration).
Describe alternatives you've considered
No response
Who will this benefit?
No response
Are you interested in contributing this feature?
Yes
Anything else?
No response
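For context on the knobs this request involves: `fail_calc` is the SQL expression dbt evaluates over the rows returned by a test (it defaults to `count(*)`), and its result is compared against `warn_if`/`error_if`. A sketch of the relevant test config surface, with illustrative values:

```yaml
models:
  - name: my_model
    columns:
      - name: id
        tests:
          - not_null:
              config:
                fail_calc: "count(*)"  # the metric this issue asks to surface even on passes
                warn_if: ">1"
                error_if: ">2"
```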