Update existing resources when CR changes #119
Conversation
Force-pushed from 159c97f to 288bc09 (Compare)
Seems to work fine at first sight, but I also noticed a slight issue: imagePullSecrets entries seem to keep getting added to the Backstage Deployment spec if the CR contains spec.application.imagePullSecrets and is updated (I just updated the image and the number of replicas), e.g.:
```yaml
$ kubectl get deployment backstage-my-rhdh -o yaml
...
      volumeMounts:
        - mountPath: /opt/app-root/src/dynamic-plugins-root
          name: dynamic-plugins-root
        - mountPath: /opt/app-root/src/app-config-rhdh.yaml
          name: app-config-rhdh
          subPath: app-config-rhdh.yaml
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: rhdh-pull-secret
        - name: rhdh-pull-secret
        - name: rhdh-pull-secret
        # ... (the same entry appears 20 times in total)
...
```
Can you take a look?
@rm3l Fixed the imagePullSecrets issue.
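For context on the likely failure mode: a CreateOrUpdate-style mutate function runs against the live object on every reconciliation, so appending to a list grows it each time. A minimal sketch of an idempotent fix, with illustrative names rather than the PR's exact code:

```go
package controllers

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// setImagePullSecrets rebuilds the Deployment's pull-secret list from the
// CR values alone instead of appending to whatever is already on the live
// object, so running the mutation on every reconcile converges to the same
// result instead of accumulating duplicates.
func setImagePullSecrets(deployment *appsv1.Deployment, names []string) {
	secrets := make([]corev1.LocalObjectReference, 0, len(names))
	for _, name := range names {
		secrets = append(secrets, corev1.LocalObjectReference{Name: name})
	}
	// Assign, don't append: the mutate function is re-run on each update.
	deployment.Spec.Template.Spec.ImagePullSecrets = secrets
}
```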
Force-pushed from 624d3b5 to a75a2f5 (Compare)
Force-pushed from a187a23 to 946874a (Compare)
The main reason we postponed Update to a later version is the edge cases we have due to our configuration specifics. I think it would be worth investigating these cases first and coming to some conclusions.
I've created some issues for that; please take a look. It would be good to resolve them as a prerequisite.
```diff
@@ -21,6 +21,11 @@ import (
 const (
 	RuntimeConditionRunning string = "RuntimeRunning"
 	RuntimeConditionSynced  string = "RuntimeSyncedWithConfig"
+	RouteSynced             string = "RouteSynced"
+	LocalDbSynced           string = "LocalDbSynced"
+	SyncOK                  string = "SyncOK"
```
How does this differ from the RuntimeSyncedWithConfig = True condition?
The RuntimeSyncedWithConfig status is not currently implemented, and we can have a follow-up PR to get it done.
I mean, RuntimeSyncedWithConfig looks like the same thing as SyncOK, no?
```go
	RouteSynced   string = "RouteSynced"
	LocalDbSynced string = "LocalDbSynced"
	SyncOK        string = "SyncOK"
	SyncFailed    string = "SyncFailed"
```
How does this differ from the RuntimeSyncedWithConfig = False condition?
See above.
```diff
-	if err != nil {
-		if !errors.IsNotFound(err) {
-			return fmt.Errorf("failed to get backstage route, reason: %s", err)
+	if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, route, r.routeObjectMutFun(ctx, route, *backstage, ns)); err != nil {
```
What's the advantage of using controllerutil.CreateOrUpdate in this (and other similar) cases instead of plain API calls?
controllerutil.CreateOrUpdate() is a utility function that handles the boilerplate for us: it gets the current resource, merges in the new spec via the mutate function, and then performs the create or update as needed.
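A minimal, self-contained sketch of that pattern, using a ConfigMap as a stand-in for the Route/Service objects handled in this PR (the reconcileConfigMap helper and all names are illustrative):

```go
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// reconcileConfigMap fetches (or initializes) the object, applies the desired
// state inside the mutate closure, and lets CreateOrUpdate decide whether a
// Create, an Update, or nothing at all is needed.
func reconcileConfigMap(ctx context.Context, c client.Client, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "backstage-config", Namespace: ns},
	}
	op, err := controllerutil.CreateOrUpdate(ctx, c, cm, func() error {
		// The closure runs against the live object on every reconcile,
		// so mutations must be idempotent (assign, don't append).
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["key"] = "value"
		return nil
	})
	if err != nil {
		return fmt.Errorf("failed to reconcile ConfigMap: %w", err)
	}
	fmt.Printf("ConfigMap %s\n", op) // op: created, updated, or unchanged
	return nil
}
```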
controllers/backstage_service.go (outdated)
```go
		}
	}
	service.Spec.ClusterIP = targetService.Spec.ClusterIP
	for _, ip1 := range targetService.Spec.ClusterIPs {
```
Are we sure we need to process this for Kubernetes? I believe all the options offered are quite valid (depending on infrastructure specifics), so why do we need to be that opinionated here?
When a Service is created, Kubernetes normally updates the spec to include the clusterIP and clusterIPs it assigns. These values cannot be updated later.
Well, that is the common case, but not the only one: the user is able to specify the IP, or to set it to None for a headless service.
https://kubernetes.io/docs/concepts/services-networking/service/
Yes, the user is able to create a headless service (in fact, the operator deploys a headless service for the local DB). What I meant is that after the Service has been created, the clusterIP must not be updated.
https://github.com/kubernetes/api/blob/master/core/v1/types.go#L5040
Not sure I understand. Why do we hold this opinion and check it ourselves?
Because without this check, reconciliation would fail: the operator would try to override the IPs that were populated by Kubernetes, leading to an update error.
That is not valid, I believe.
According to https://kubernetes.io/docs/concepts/services-networking/service/, if you do not specify ClusterIP (which is the case here, I believe), it is assigned by Kubernetes, i.e. it is controlled by Kubernetes. The operator's desired state is still "let Kubernetes manage it", so I believe the operator must not try to override it.
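One common way to reconcile these two views (a sketch assuming the mutate function rebuilds the whole Service spec; not necessarily this PR's final code) is to carry the cluster-assigned values over before updating, so the operator neither owns nor fights them:

```go
package controllers

import corev1 "k8s.io/api/core/v1"

// applyServiceSpec overwrites the live Service spec with the desired one
// while preserving the fields the API server assigned at creation time:
// clusterIP/clusterIPs are immutable, so echoing the live values back
// avoids the update error without being opinionated about their values.
func applyServiceSpec(live *corev1.Service, desired corev1.ServiceSpec) {
	desired.ClusterIP = live.Spec.ClusterIP
	desired.ClusterIPs = live.Spec.ClusterIPs
	live.Spec = desired
}
```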
controllers/local_db_services.go (outdated)
```go
	if len(targetService.Spec.ClusterIP) > 0 && service.Spec.ClusterIP != "" && service.Spec.ClusterIP != "None" && service.Spec.ClusterIP != targetService.Spec.ClusterIP {
		return fmt.Errorf("db service IP can not be updated: %s, %s, %s", targetService.Name, targetService.Spec.ClusterIP, service.Spec.ClusterIP)
	}
	service.Spec.ClusterIP = targetService.Spec.ClusterIP
```
same as above about ClusterIP
same as above
same as above
Yes, resource update involves various use cases, as you pointed out. Most of the ones in #124 are for default config/ConfigMap/Secret changes, etc., and they can be handled separately later. This PR is for applying the CR changes only.
Force-pushed from ff714ce to 9f163a9 (Compare)
Force-pushed from f4702b0 to 89f5392 (Compare)
Force-pushed from 157a016 to 302957c (Compare)
Should it be possible to update the route sub-domain after it is created? I first created a CR with no spec.application.route field set. Then I added spec.application.route.subdomain to the CR and updated it. But I noticed that the Route was not updated. Is that expected?
I've tested this PR against the scenarios depicted in https://github.com/janus-idp/operator/blob/main/.rhdh/docs/openshift.adoc, and everything works fine.
We'll need to update https://github.com/janus-idp/operator/blob/main/.rhdh/docs/openshift.adoc (to remove the need to manually delete the Backstage Deployment upon CR changes), but this can be done in a subsequent PR/issue.
I think we can go ahead and merge this PR, so that QE can start testing, and also to simplify the docs workflow after the CR is modified by the user.
#70 will also need to take that into account.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: rm3l. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Yes, this is expected. Since the route host was already generated at creation time, the subdomain change in the Route is ignored by the OpenShift server.
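For illustration (all values hypothetical): the server materializes spec.host from the subdomain in effect when the Route is admitted, and later subdomain edits do not regenerate it.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: backstage-my-rhdh
spec:
  # Generated at creation time; changing spec.application.route.subdomain
  # in the CR afterwards does not regenerate this value.
  host: my-rhdh.apps.example.com
  to:
    kind: Service
    name: backstage-my-rhdh
```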
Description
Update the deployed resources when the Backstage CR is updated.
Which issue(s) does this PR fix or relate to
PR acceptance criteria
How to test changes / Special notes to the reviewer
2) Update the CR spec to set 2 replicas, and verify that the deployed backstage has 2 pods running.
3) Update the CR spec to include extra env vars, and check that the deployed backstage pod has env vars myenv1 and myenv2 set in the main container.
4) Update the CR spec to disable the route, and verify that the backstage route has been cleaned up.
5) Update the CR spec to disable the localDB, and verify that the local db statefulset, services, and secret that were deployed earlier are now deleted.
A combined CR sketch covering these updates follows below.
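A hypothetical combined CR illustrating the updates above. The group/version and exact field paths (janus-idp.io/v1alpha1, spec.application.*, spec.database.enableLocalDb) are assumptions inferred from this PR's discussion, not copied from the repository:

```yaml
apiVersion: janus-idp.io/v1alpha1   # assumed group/version
kind: Backstage
metadata:
  name: my-rhdh
spec:
  application:
    replicas: 2              # step 2: expect 2 pods
    extraEnvs:               # step 3: env vars on the main container
      envs:
        - name: myenv1
          value: "value1"
        - name: myenv2
          value: "value2"
    route:
      enabled: false         # step 4: route cleaned up
  database:
    enableLocalDb: false     # step 5: local DB objects deleted
```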
Note that two status condition types, "RouteSynced" and "LocalDbSynced", are introduced to support the required functionality.
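As a sketch of how such a condition might be recorded (meta.SetStatusCondition is a real apimachinery helper; the surrounding function and its signature are illustrative):

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Reason strings mirroring the constants added in this PR.
const (
	SyncOK     = "SyncOK"
	SyncFailed = "SyncFailed"
)

// setSyncCondition records a "RouteSynced"- or "LocalDbSynced"-style
// condition on a CR's status conditions slice.
func setSyncCondition(conditions *[]metav1.Condition, condType string, ok bool, msg string) {
	status, reason := metav1.ConditionTrue, SyncOK
	if !ok {
		status, reason = metav1.ConditionFalse, SyncFailed
	}
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    condType,
		Status:  status,
		Reason:  reason,
		Message: msg,
	})
}
```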