Scaling not happening after upgrade, need advice on troubleshooting steps. #1826
-
Today we upgraded from v0.17.0 to v0.21.0 (Helm) and scaling stopped working. We have multiple different runners; here's an example manifest:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"actions.summerwind.dev/v1alpha1","kind":"RunnerDeployment","metadata":{"annotations":{},"labels":{"tanka.dev/environment":"f2d718c3f144840ec98a3b340e1bbea82c7eb82b13c527e2"},"name":"proxyco-small-runner","namespace":"ci"},"spec":{"template":{"spec":{"dockerdWithinRunnerContainer":true,"env":[{"name":"DISABLE_RUNNER_UPDATE","value":"false"}],"ephemeral":true,"labels":["small"],"organization":"proxyco","resources":{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"1","memory":"1Gi"}},"serviceAccountName":"actions","workDir":"/home/runner/work/"}}}}
  creationTimestamp: "2022-03-23T21:17:21Z"
  generation: 60906
  labels:
    tanka.dev/environment: f2d718c3f144840ec98a3b340e1bbea82c7eb82b13c527e2
  name: proxyco-small-runner
  namespace: ci
  resourceVersion: "153774297"
  uid: d7dc5d03-5a3f-4a0d-9028-cb9ba6970285
spec:
  effectiveTime: "2022-09-21T11:36:54Z"
  replicas: 2
  selector: null
  template:
    metadata: {}
    spec:
      dockerdContainerResources: {}
      dockerdWithinRunnerContainer: true
      env:
      - name: DISABLE_RUNNER_UPDATE
        value: "false"
      ephemeral: true
      image: ""
      labels:
      - small
      organization: proxyco
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
        requests:
          cpu: "1"
          memory: 1Gi
      serviceAccountName: actions
      workDir: /home/runner/work/
status:
  availableReplicas: 2
  desiredReplicas: 2
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
```

Scaler:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"actions.summerwind.dev/v1alpha1","kind":"HorizontalRunnerAutoscaler","metadata":{"annotations":{},"labels":{"tanka.dev/environment":"f2d718c3f144840ec98a3b340e1bbea82c7eb82b13c527e2"},"name":"proxyco-small-scaler","namespace":"ci"},"spec":{"maxReplicas":10,"minReplicas":2,"scaleTargetRef":{"kind":"RunnerDeployment","name":"proxyco-small-runner"},"scaleUpTriggers":[{"duration":"10m","githubEvent":{}}]}}
  creationTimestamp: "2022-04-03T20:27:15Z"
  generation: 18881
  labels:
    tanka.dev/environment: f2d718c3f144840ec98a3b340e1bbea82c7eb82b13c527e2
  name: proxyco-small-scaler
  namespace: ci
  resourceVersion: "153654676"
  uid: b6510538-33e7-48b5-bf05-0e65f92b14bc
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    kind: RunnerDeployment
    name: proxyco-small-runner
  scaleUpTriggers:
  - duration: 10m
    githubEvent: {}
status:
  desiredReplicas: 2
  lastSuccessfulScaleOutTime: "2022-09-21T11:36:54Z"
```

The webhook server is up and responds at https://actions.k8s-us-west-2.deployment.proxy.co/, and GitHub seems to be able to send events just fine.
-
Digging into the logs revealed a DEBUG message pointing to the problem.

So this change:

```diff
-  githubEvent: {}
+  githubEvent:
+    workflowJob: {}
```

made autoscaling work again. Is this a regression?
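For anyone hitting the same symptom, here is a minimal sketch of the corrected HorizontalRunnerAutoscaler with the fix applied. It reuses the resource names from the manifests above and omits the metadata and status fields the controller manages; only the `githubEvent` block differs from the original spec:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: proxyco-small-scaler
  namespace: ci
spec:
  minReplicas: 2
  maxReplicas: 10
  scaleTargetRef:
    kind: RunnerDeployment
    name: proxyco-small-runner
  scaleUpTriggers:
  - duration: 10m
    githubEvent:
      # Name the webhook event explicitly; in this setup an empty
      # githubEvent stopped triggering scale-up after the v0.21.0 upgrade.
      workflowJob: {}
```

Listing `workflowJob` explicitly scopes the trigger to workflow_job webhook deliveries, which is what made scale-up fire again here; whether the old empty-object behaviour was dropped intentionally is the open question above.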
-
We're locking this discussion because it has not had recent activity and/or other members have asked for more information to assist you but received no response. Thank you for helping us maintain a productive and tidy community for all our members.