Bug summary
We're running a self-hosted Prefect server on EKS, deployed via the official Helm charts. We want to add custom labels to our Kubernetes jobs to allow easier identification of flow run jobs in our analytics and metrics tools. However, when specifying labels={"example": "example"} in create_flow_run_from_deployment, the labels do not appear in the resulting Kubernetes job.
Expected behavior
I expect the resulting job's metadata to include example=example under metadata.labels.
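A minimal reproduction sketch of the call we make (the deployment ID is the one from the job labels in this report, and the exact client signature may differ across Prefect 3.x releases):

```python
# Reproduction sketch: create a flow run from a deployment with
# top-level labels. The labels appear on the flow run object but not
# on the resulting Kubernetes job.
from uuid import UUID


async def reproduce() -> None:
    # Imported inside the function so the sketch reads without Prefect installed.
    from prefect import get_client

    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id=UUID("b4e7151f-de9e-4c00-9424-61bc3b838f1c"),
            labels={"example": "example"},  # expected under the job's metadata.labels
        )
        print(flow_run.labels)
```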
Actual behavior
Inspecting the job with kubectl shows that the label example=example is not present:
$ kubectl get jobs --show-labels
NAME                STATUS     COMPLETIONS   DURATION   AGE   LABELS
aloof-tench-k7l6s   Complete   1/1           15s        36s   prefect.io/deployment-id=b4e7151f-de9e-4c00-9424-61bc3b838f1c,prefect.io/deployment-name=my-deployment,prefect.io/deployment-updated=2025-02-13t16-30-44.632196z,prefect.io/flow-id=8046cc02-d1ca-4022-8720-1ee7f579df64,prefect.io/flow-name=my-flow,prefect.io/flow-run-id=8bc3e898-1826-4ff3-84f4-b9dc4df01d8d,prefect.io/flow-run-name=aloof-tench,prefect.io/version=3.2.1
Temporary solution
Passing the labels through the job_variables parameter instead does apply the label, but it may override any default labels already set in job_variables on the deployment if they're not replicated here.
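The workaround call looks roughly like this (a sketch with a placeholder deployment ID; the exact client signature may differ across Prefect 3.x releases):

```python
# Workaround sketch: pass the labels via job_variables instead of the
# top-level labels parameter. Caveat: this "labels" dict replaces any
# labels default already set in the deployment's job_variables rather
# than merging with it.
from uuid import UUID


async def run_with_job_variable_labels() -> None:
    # Imported inside the function so the sketch reads without Prefect installed.
    from prefect import get_client

    async with get_client() as client:
        await client.create_flow_run_from_deployment(
            deployment_id=UUID("b4e7151f-de9e-4c00-9424-61bc3b838f1c"),
            job_variables={"labels": {"example": "example"}},
        )
```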
$ kubectl get jobs --show-labels
NAME                    STATUS    COMPLETIONS   DURATION   AGE   LABELS
accurate-petrel-gq7z7   Running   0/1           12s        12s   example=example,...
Proposed solution
In my opinion, it would be ideal if Prefect merged the top-level flow_run.labels into the job configuration labels. In particular, I believe merging flow_run.labels into the labels dict constructed in prefect/src/prefect/workers/base.py (line 253 at commit cd03e6f) would resolve the issue.
This change would allow clients to apply additional flow run labels on the infrastructure, in addition to default ones set in job_variables on deployments/job templates.
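The intended merge can be sketched as follows (function and variable names are illustrative, not the actual identifiers in workers/base.py):

```python
# Proposed-merge sketch: combine the worker's default job labels with
# the flow run's top-level labels, with flow-run labels winning on key
# collisions.
def merged_job_labels(
    default_labels: dict[str, str], flow_run_labels: dict[str, str]
) -> dict[str, str]:
    return {**default_labels, **flow_run_labels}


# Client-supplied labels are added without clobbering the defaults:
print(
    merged_job_labels(
        {"prefect.io/flow-run-name": "aloof-tench"},
        {"example": "example"},
    )
)
# → {'prefect.io/flow-run-name': 'aloof-tench', 'example': 'example'}
```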
Version info
Version: 3.2.1
API version: 0.8.4
Python version: 3.11.6
Git commit: f8b15dfb
Built: Mon, Feb 10, 2025 3:20 PM
OS/Arch: linux/x86_64
Profile: local
Server type: server
Pydantic version: 2.10.6
Integrations:
prefect-kubernetes: 0.5.3
Additional context
No response