diff --git a/antora.yml b/antora.yml
index e204e55..c94abfc 100644
--- a/antora.yml
+++ b/antora.yml
@@ -1,5 +1,5 @@
name: operator
-title: Autonomous Operator
+title: Kubernetes Operator
version: '2.8'
prerelease: false
start_page: ROOT:overview.adoc
diff --git a/modules/ROOT/assets/images/cloud-native-overview.png b/modules/ROOT/assets/images/cloud-native-overview.png
new file mode 100644
index 0000000..47fc052
Binary files /dev/null and b/modules/ROOT/assets/images/cloud-native-overview.png differ
diff --git a/modules/ROOT/assets/images/kubernettes_2.svg b/modules/ROOT/assets/images/kubernettes_2.svg
new file mode 100644
index 0000000..1add6bf
--- /dev/null
+++ b/modules/ROOT/assets/images/kubernettes_2.svg
@@ -0,0 +1,386 @@
+
+
\ No newline at end of file
diff --git a/modules/ROOT/pages/best-practices.adoc b/modules/ROOT/pages/best-practices.adoc
index 403846f..45e6b7e 100644
--- a/modules/ROOT/pages/best-practices.adoc
+++ b/modules/ROOT/pages/best-practices.adoc
@@ -1,7 +1,7 @@
= Guidelines and Best Practices
:page-aliases: node-recovery
-The Couchbase Autonomous Operator makes deploying Couchbase Server incredibly simple. However, there are some external influences and configurations that can cause issues. This topic outlines some of the deployment best practices that can help you avoid some of the most common pitfalls.
+The Couchbase Kubernetes Operator makes deploying Couchbase Server incredibly simple. However, there are some external influences and configurations that can cause issues. This topic outlines some of the deployment best practices that can help you avoid some of the most common pitfalls.
== Pod Scheduling
@@ -117,7 +117,7 @@ Ephemeral clusters favor caching use-cases where the data can be repopulated by
Since fully-ephemeral Couchbase clusters only use ephemeral storage, Couchbase Server logs are highly likely to be unavailable in the event of a crash.
This can make supporting an ephemeral cluster particularly difficult, and it is recommended that you exercise caution when using this type of deployment.
-Starting with version 2.2, the Autonomous Operator supports xref:howto-couchbase-log-forwarding.adoc[forwarding] Couchbase Server logs.
+Starting with version 2.2, the Kubernetes Operator supports xref:howto-couchbase-log-forwarding.adoc[forwarding] Couchbase Server logs.
However, the current implementation requires the use of a (persistent) volume.
====
diff --git a/modules/ROOT/pages/concept-backup.adoc b/modules/ROOT/pages/concept-backup.adoc
index d68c20c..34536f4 100644
--- a/modules/ROOT/pages/concept-backup.adoc
+++ b/modules/ROOT/pages/concept-backup.adoc
@@ -1,13 +1,13 @@
= Couchbase Backup and Restore
[abstract]
-The Autonomous Operator provides facilities that allow data to be backed up, restored, and archived in order to aid in cluster disaster recovery.
+The Kubernetes Operator provides facilities that allow data to be backed up, restored, and archived in order to aid in cluster disaster recovery.
== Overview
-The Autonomous Operator provides automated backup and restore capabilities through a native integration with the xref:server:backup-restore:enterprise-backup-restore.adoc[`cbbackupmgr` tool] in Couchbase Server.
+The Kubernetes Operator provides automated backup and restore capabilities through a native integration with the xref:server:backup-restore:enterprise-backup-restore.adoc[`cbbackupmgr` tool] in Couchbase Server.
Automated backup is enabled in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource (it is _disabled_ by default).
-When backup is enabled, the Autonomous Operator defaults to a Couchbase-supplied https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] container image that contains xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`].
+When backup is enabled, the Kubernetes Operator defaults to a Couchbase-supplied https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] container image that contains xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`].
Once automated backup is enabled, individual backup policies can be configured using xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resources, which define things like _schedule_ and _backup strategy_.
Each xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resource creates one or two Kubernetes https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/[`CronJob`^] resources that will spawn backup jobs according to the given Cron schedule(s).
@@ -23,15 +23,15 @@ Because backup policies are configured with a separate resource, you can use xre
== About the `operator-backup` Image
Each version of Couchbase Server is released with a compatible version of the xref:server:backup-restore:enterprise-backup-restore.adoc[`cbbackupmgr` tool].
-This tool is included in the https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] container image that is used by the Autonomous Operator to provide automated backup and restore capabilities.
+This tool is included in the https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] container image that is used by the Kubernetes Operator to provide automated backup and restore capabilities.
-Whenever the Autonomous Operator gains support for a new version of Couchbase Server, a new and/or compatible version of the https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] image will be made available at the same time that includes a fully compatible version of xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`].
-For a list of compatible images for this release of the Autonomous Operator, refer to xref:prerequisite-and-setup.adoc#couchbase-backup-and-restore-compatibility[Couchbase Backup and Restore Compatibility].
+Whenever the Kubernetes Operator gains support for a new version of Couchbase Server, an updated https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] image that includes a fully compatible version of xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`] will be made available at the same time.
+For a list of compatible images for this release of the Kubernetes Operator, refer to xref:prerequisite-and-setup.adoc#couchbase-backup-and-restore-compatibility[Couchbase Backup and Restore Compatibility].
[IMPORTANT]
====
Only the official Couchbase-supplied https://hub.docker.com/r/couchbase/operator-backup[`operator-backup`^] container image is supported.
-This image is designed only for use with the Autonomous Operator, and is not meant for any other context.
+This image is designed only for use with the Kubernetes Operator, and is not meant for any other context.
In addition, you should ensure that your image source is trusted.
The backup image requires access to the Couchbase cluster administrative credentials in order to log in and perform collection.
@@ -40,10 +40,10 @@ Granting these credentials to arbitrary code is potentially harmful.
== Important Considerations
-* The Autonomous Operator supports two of the backup strategies available in xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`]: _Full Only_ and _Full/Incremental_.
+* The Kubernetes Operator supports two of the backup strategies available in xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`]: _Full Only_ and _Full/Incremental_.
Complete descriptions and explanations of these strategies can be found in the xref:server:backup-restore:cbbackupmgr-strategies.adoc[`cbbackupmgr` strategies documentation].
-* The Autonomous Operator runs the backup utility in a separate Pod.
+* The Kubernetes Operator runs the backup utility in a separate Pod.
Where this Pod is scheduled can have implications on backup performance, and can affect whether backup jobs are able to complete within the desired time window.
+
You should schedule backup Pods onto Kubernetes nodes that have enough resources to successfully fulfill your backup schedule.
@@ -62,7 +62,7 @@ When you re-enable automated backup, any applicable xref:resource/couchbaseback
[IMPORTANT]
====
The xref:server:backup-restore:cbbackupmgr.adoc[`cbbackupmgr`] tool _does not_ support mutual TLS authentication.
-If your Couchbase cluster is using mandatory client certificate authentication, the Autonomous Operator, in an effort to keep the backup from failing, will downgrade the connection between the backup Pod and the cluster to _plain text_.
+If your Couchbase cluster is using mandatory client certificate authentication, the Kubernetes Operator, in an effort to keep the backup from failing, will downgrade the connection between the backup Pod and the cluster to _plain text_.
In both server-side TLS and optional client certificate authentication modes of operation, the backup will occur over TLS, using basic HTTP authentication.
====
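For reviewers of this page: the backup policy configuration discussed above is expressed through a `CouchbaseBackup` resource. A minimal sketch follows; the resource name, schedules, and volume size are illustrative, and the field layout assumes the `couchbase.com/v2` CRD.

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: my-backup              # illustrative name
spec:
  strategy: full_incremental   # one of the two supported strategies
  full:
    schedule: "0 3 * * 0"      # weekly full backup, Sunday 03:00
  incremental:
    schedule: "0 3 * * 1-6"    # incremental backups the rest of the week
  size: 20Gi                   # backing volume size; adjust to your data set
----

Each schedule above maps to one of the one-or-two `CronJob` resources the page describes.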
diff --git a/modules/ROOT/pages/concept-cloud-native-gateway.adoc b/modules/ROOT/pages/concept-cloud-native-gateway.adoc
index 501a6de..d54fc3b 100644
--- a/modules/ROOT/pages/concept-cloud-native-gateway.adoc
+++ b/modules/ROOT/pages/concept-cloud-native-gateway.adoc
@@ -34,7 +34,7 @@ gRPC over HTTP/2 is an efficient binary wire protocol which can be efficiently m
== How it is Deployed
The Cloud Native Gateway runs as a _sidecar_ image to every Couchbase Server node in a cluster.
-This sidecar is set up and managed by the Couchbase Autonomous Operator.
+This sidecar is set up and managed by the Couchbase Kubernetes Operator.
When deploying a Couchbase Cluster in Kubernetes, you will define a `CouchbaseCluster` Object.
Starting with release 2.6.1, the object definition allows you to add a `cloudNativeGateway` object, which adds the Cloud Native Gateway to your cluster and creates a `Service` object.
@@ -50,7 +50,7 @@ networking:
----
CAUTION: At the moment, adding CNG to an existing cluster requires a rebalance which will create new pods and move data.
-For compatibility with commonly deployed Kubernetes and OpenShift releases, the Couchbase Autonomous Operator cannot yet use some of the newer features for sidecar management.
+For compatibility with commonly deployed Kubernetes and OpenShift releases, the Couchbase Kubernetes Operator cannot yet use some of the newer features for sidecar management.
== Monitoring Health
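As a point of reference for the partial `networking:` snippet shown in this file's hunk, an illustrative shape of the CNG configuration within a `CouchbaseCluster` resource might look like the following; the image tag is an assumption, not a pinned recommendation.

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  networking:
    cloudNativeGateway:
      image: couchbase/cloud-native-gateway:1.0.0  # illustrative tag
----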
diff --git a/modules/ROOT/pages/concept-couchbase-autoscaling-best-practices.adoc b/modules/ROOT/pages/concept-couchbase-autoscaling-best-practices.adoc
index f99f4f4..8336043 100644
--- a/modules/ROOT/pages/concept-couchbase-autoscaling-best-practices.adoc
+++ b/modules/ROOT/pages/concept-couchbase-autoscaling-best-practices.adoc
@@ -2,11 +2,11 @@
:HorizontalPodAutoscaler: pass:q[https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#horizontalpodautoscaler-v2-autoscaling[`HorizontalPodAutoscaler`^]]
[abstract]
-Recommended best practices, derived from tested performance metrics, for configuring Couchbase cluster auto-scaling using the Couchbase Autonomous Operator.
+Recommended best practices, derived from tested performance metrics, for configuring Couchbase cluster auto-scaling using the Couchbase Kubernetes Operator.
== How to Use This Page
-This page provides guidance on how to configure the Autonomous Operator's xref:concept-couchbase-autoscaling.adoc[auto-scaling feature] to effectively scale Couchbase clusters.
+This page provides guidance on how to configure the Kubernetes Operator's xref:concept-couchbase-autoscaling.adoc[auto-scaling feature] to effectively scale Couchbase clusters.
Specifically, it discusses relevant _metrics_ for scaling individual Couchbase Services, and provides recommended settings based on internal benchmark testing performed by Couchbase.
Auto-scaling is a generic feature and it is possible to use other metrics and options outside those listed in these best practices.
@@ -145,7 +145,7 @@ However, when using auto-scaling in production, it is recommended that you set `
This ensures a minimum level of protection against single-node failures.
+
CAUTION: You can technically set `minReplicas` to `0` by enabling the `HPAScaleToZero` https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/[feature gate^].
-You should never do this, as the Autonomous Operator prevents server class configurations from having sizes less than 1.
+You should never do this, as the Kubernetes Operator prevents server class configurations from having sizes less than 1.
* Depending on the cloud provider, provisioning of persistent volumes may take significantly longer than pods.
Therefore, the chances of exceeding a metric threshold while trying to reach its desired value are higher when using persistent volumes.
diff --git a/modules/ROOT/pages/concept-couchbase-autoscaling.adoc b/modules/ROOT/pages/concept-couchbase-autoscaling.adoc
index c67ecf8..f55ab53 100644
--- a/modules/ROOT/pages/concept-couchbase-autoscaling.adoc
+++ b/modules/ROOT/pages/concept-couchbase-autoscaling.adoc
@@ -2,18 +2,18 @@
:HorizontalPodAutoscaler: pass:q[https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#horizontalpodautoscaler-v2-autoscaling[`HorizontalPodAutoscaler`^]]
[abstract]
-The Autonomous Operator can be configured to enable automatic scaling for Couchbase clusters.
+The Kubernetes Operator can be configured to enable automatic scaling for Couchbase clusters.
== About Couchbase Cluster Auto-scaling
-The Autonomous Operator provides the necessary facilities for Couchbase clusters to be automatically scaled based on usage metrics.
+The Kubernetes Operator provides the necessary facilities for Couchbase clusters to be automatically scaled based on usage metrics.
Thresholds can be set for native Kubernetes metrics (such as pod CPU utilization) as well as Couchbase metrics (such as bucket memory utilization) that, when crossed, trigger _horizontal scaling_ of individual server classes.
Auto-scaling doesn't incur any cluster downtime, and allows for each Couchbase Service to be scaled _independently_ on the same cluster.
For example, the Data Service can automatically scale in response to fluctuations in memory utilization, while the Query Service can automatically scale in response to CPU utilization.
The sections on this page describe the conceptual information about Couchbase cluster auto-scaling.
-For information on how to configure and administrate auto-scaling using the Autonomous Operator, refer to xref:howto-couchbase-autoscaling.adoc[].
+For information on how to configure and administer auto-scaling using the Kubernetes Operator, refer to xref:howto-couchbase-autoscaling.adoc[].
IMPORTANT: Auto-scaling only supports adding or removing pod replicas of the associated server class.
Auto-scaling does not currently scale a cluster _vertically_ by swapping pods with ones that have larger or smaller resource requests.
@@ -22,7 +22,7 @@ By extension, the _size_ of persistent storage also cannot be auto-scaled and mu
[[how-auto-scaling-works]]
== How Auto-scaling Works
-The Autonomous Operator maintains Couchbase cluster topology according to the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers[`couchbaseclusters.spec.servers`] section of the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource.
+The Kubernetes Operator maintains Couchbase cluster topology according to the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers[`couchbaseclusters.spec.servers`] section of the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource.
Within this section, _server classes_ are defined with, among other things, specifications for the following:
* The specific Couchbase Services that should run on a particular pod
@@ -58,17 +58,17 @@ spec:
<.> This _server class_, named `data`, specifies that the Couchbase cluster should include `3` nodes running exclusively the Data Service, and that those nodes should each have `4` vCPU and `16Gi` memory.
-This ability to have independently-configurable server classes is how the Autonomous Operator supports xref:concept-mds.adoc[_Multi-Dimensional Scaling_].
+This ability to have independently configurable server classes is how the Kubernetes Operator supports xref:concept-mds.adoc[_Multi-Dimensional Scaling_].
Depending on the observed performance of a Couchbase cluster over time, its constituent server classes can be independently xref:howto-couchbase-scale.adoc[scaled] to meet the demands of current and future workloads.
_Auto-scaling_ extends this capability by allowing server classes to automatically change in `size` (number of nodes) when observed metrics are detected to have crossed above or below user-configured thresholds.
-The Autonomous Operator provides this capability through an integration with the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^].
+The Kubernetes Operator provides this capability through an integration with the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^].
image::autoscale-hpa-connection.png[]
Cluster auto-scaling is fundamentally provided by the following components:
-* A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created and managed by the Autonomous Operator
+* A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created and managed by the Kubernetes Operator
+
Refer to <>
@@ -83,7 +83,7 @@ Refer to <>
[[about-the-couchbase-autoscaler]]
== About the Couchbase Autoscaler
-The Autonomous Operator creates a separate xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for _each_ server class that has auto-scaling enabled.
+The Kubernetes Operator creates a separate xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for _each_ server class that has auto-scaling enabled.
[source,yaml,subs="attributes,verbatim"]
----
@@ -100,28 +100,28 @@ spec:
- query
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `true` triggers the Autonomous Operator to create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource for the server class named `query`.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `true` triggers the Kubernetes Operator to create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource for the server class named `query`.
Each xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource is named using the format `__.__`.
-For the example above, the Autonomous Operator would create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource named `query.cb-example`.
+For the example above, the Kubernetes Operator would create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource named `query.cb-example`.
-Once created, the Autonomous Operator keeps the size of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource in sync with the size of its associated server class.
+Once created, the Kubernetes Operator keeps the size of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource in sync with the size of its associated server class.
image::autoscale-crd.png[]
The xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource acts as the necessary bridge between the {HorizontalPodAutoscaler} resource and the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource.
The size of a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource is adjusted by the Horizontal Pod Autoscaler when the reported value of a user-specified metric crosses above or below a configured threshold.
-Once the changes have been propagated from the {HorizontalPodAutoscaler} resource to the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, the Autonomous Operator will observe those changes and scale the server class accordingly.
+Once the changes have been propagated from the {HorizontalPodAutoscaler} resource to the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, the Kubernetes Operator will observe those changes and scale the server class accordingly.
image::autoscale-crd-hpa.png[]
[IMPORTANT]
====
-xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources are fully managed by the Autonomous Operator and should not be manually created, modified, or deleted by the user.
-If one is manually deleted, the Autonomous Operator will re-create it.
+xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources are fully managed by the Kubernetes Operator and should not be manually created, modified, or deleted by the user.
+If one is manually deleted, the Kubernetes Operator will re-create it.
However, it is possible to edit the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] (refer to <> below).
-A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource only gets deleted by the Autonomous Operator when xref:howto-couchbase-autoscaling.adoc#disabling-auto-scaling[auto-scaling is disabled] for the associated server class, or if the associated xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource is deleted altogether.
+A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource only gets deleted by the Kubernetes Operator when xref:howto-couchbase-autoscaling.adoc#disabling-auto-scaling[auto-scaling is disabled] for the associated server class, or if the associated xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource is deleted altogether.
====
[[scale-subresource]]
@@ -140,7 +140,7 @@ $ kubectl scale couchbaseautoscalers --replicas=6 query.cb-example
----
The above command results in scaling the server class named `query` to support `6` replicas.
-The Autonomous Operator monitors the value of xref:resource/couchbaseautoscaler.adoc#couchbaseautoscalers-spec-size[`couchbaseautoscalers.spec.size`] and applies the value to xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-size[`couchbaseclusters.spec.servers.size`].
+The Kubernetes Operator monitors the value of xref:resource/couchbaseautoscaler.adoc#couchbaseautoscalers-spec-size[`couchbaseautoscalers.spec.size`] and applies the value to xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-size[`couchbaseclusters.spec.servers.size`].
NOTE: The Horizontal Pod Autoscaler will reconcile the number of replicas with the last computed desired state.
Manual changes to the number of replicas will be reverted if the specified size falls outside of `minReplicas` or `maxReplicas`, or if the Horizontal Pod Autoscaler is currently recommending a different size.
@@ -148,14 +148,14 @@ Manual changes to the number of replicas will be reverted if the specified size
[[about-the-horizontal-pod-autoscaler]]
== About the Horizontal Pod Autoscaler
-The Autonomous Operator relies on the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^] to provide auto-scaling capabilities.
+The Kubernetes Operator relies on the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^] to provide auto-scaling capabilities.
The Horizontal Pod Autoscaler is responsible for observing target metrics, making https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details[sizing calculations^], and sending sizing requests to the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`].
The Horizontal Pod Autoscaler is configured via a {HorizontalPodAutoscaler} resource.
The {HorizontalPodAutoscaler} resource is the primary interface by which auto-scaling is configured, and must be manually created and managed by the user.
Simply enabling auto-scaling for a server class in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource will not result in any auto-scaling operations until a {HorizontalPodAutoscaler} resource has been manually created and configured to reference the appropriate xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource.
-NOTE: The Autonomous Operator has no facility for creating or managing {HorizontalPodAutoscaler} resources.
+NOTE: The Kubernetes Operator has no facility for creating or managing {HorizontalPodAutoscaler} resources.
Deleting a xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource does not delete any associated {HorizontalPodAutoscaler} resources.
[[referencing-the-couchbase-autoscaler]]
@@ -177,7 +177,7 @@ spec:
name: query.cb-example # <.>
----
-<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Autonomous Operator when auto-scaling is enabled for a server class.
+<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Kubernetes Operator when auto-scaling is enabled for a server class.
<.> `scaleTargetRef.name`: This field needs to reference the unique `name` of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource.
As discussed in <>, xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resources are created with the name format `__.__`.
@@ -401,7 +401,7 @@ Refer to <> for more information.
<.> The `periodSeconds` field defines the length of time in the past for which the policy must hold true before successive scaling changes in the same direction are allowed to occur.
+
-NOTE: The `periodSeconds` setting is effectively unnecessary when it comes to auto-scaling Couchbase clusters with the Autonomous Operator, and should be left to its default value unless recommended otherwise by the xref:concept-couchbase-autoscaling-best-practices.adoc[].
+NOTE: The `periodSeconds` setting is effectively unnecessary for auto-scaling Couchbase clusters with the Kubernetes Operator, and should be left at its default value unless recommended otherwise by the xref:concept-couchbase-autoscaling-best-practices.adoc[].
Instead, <> should be used, as it is the preferred method for controlling successive changes in scaling direction.
<.> The `selectPolicy` field controls which policy is chosen by the Horizontal Pod Autoscaler if more than one policy is defined.
@@ -422,10 +422,10 @@ The Horizontal Pod Autoscaler can be prevented from ever scaling down a server c
----
====
-IMPORTANT: The Autonomous Operator can only auto-scale one server class at a time.
+IMPORTANT: The Kubernetes Operator can only auto-scale one server class at a time.
This is an important point to consider when enabling auto-scaling for multiple server classes on the same Couchbase cluster.
If any associated {HorizontalPodAutoscaler} resource makes a new scaling recommendation while the cluster is currently undergoing a scaling operation based on a previous recommendation, then the new recommendation will not be honored until the current scaling operation is complete.
-For example, in a hypothetical scenario where the Autonomous Operator has already begun scaling up a server class named `data` when another {HorizontalPodAutoscaler} resource recommends scaling up a server class named `query` on the same cluster, the Autonomous Operator will not balance in any `query` replicas until the `data` server class has been scaled up to the desired size.
+For example, in a hypothetical scenario where the Kubernetes Operator has already begun scaling up a server class named `data` when another {HorizontalPodAutoscaler} resource recommends scaling up a server class named `query` on the same cluster, the Kubernetes Operator will not balance in any `query` replicas until the `data` server class has been scaled up to the desired size.
(Note that this scenario is only possible when the <> is disabled.)
[[scaling-increments]]
@@ -471,7 +471,7 @@ Refer to xref:concept-couchbase-autoscaling-best-practices.adoc[] for help with
If the targeted metric fluctuates back and forth across the configured scaling threshold over a short period of time, it can cause the cluster to scale up and down unnecessarily as it chases after the metric.
This behavior is sometimes referred to as "flapping" or "thrashing".
-The Horizontal Pod Autoscaler and the Autonomous Operator _both_ provide different but equally important mechanisms to control the flapping of pod replicas.
+The Horizontal Pod Autoscaler and the Kubernetes Operator _both_ provide different but equally important mechanisms to control the flapping of pod replicas.
These controls, described in the subsections below, are meant to be used in tandem with each other, and should be tested using different permutations when determining the appropriate auto-scaling configuration for a particular workload.
[[couchbase-stabilization-period]]
@@ -480,8 +480,8 @@ These controls, described in the subsections below, are meant to be used in tand
Both during and directly after a rebalance operation, some metrics may behave erratically while the cluster continues to stabilize.
If the Horizontal Pod Autoscaler is monitoring a targeted metric that is unstable due to rebalance, it may lead the Horizontal Pod Autoscaler to erroneously scale the cluster in undesirable ways.
-The _Couchbase Stabilization Period_ is an internal safety mechanism provided by the Autonomous Operator that is meant to help prevent the types of over-scaling caused by metrics instability during rebalance.
-When the Couchbase Stabilization Period is specified, the Autonomous Operator will put _all_ {HorizontalPodAutoscaler} resources associated with the Couchbase cluster into https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#implicit-maintenance-mode-deactivation[_maintenance mode_^] during rebalance operations.
+The _Couchbase Stabilization Period_ is an internal safety mechanism provided by the Kubernetes Operator that is meant to help prevent the types of over-scaling caused by metrics instability during rebalance.
+When the Couchbase Stabilization Period is specified, the Kubernetes Operator will put _all_ {HorizontalPodAutoscaler} resources associated with the Couchbase cluster into https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#implicit-maintenance-mode-deactivation[_maintenance mode_^] during rebalance operations.
When in maintenance mode, the Horizontal Pod Autoscaler will not monitor targeted metrics, and therefore will stop making scaling recommendations.
Once the rebalance operation is complete, the Horizontal Pod Autoscaler will remain in maintenance mode for the duration of the stabilization period, after which it will resume monitoring metrics.
@@ -580,7 +580,7 @@ $ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
----
If you receive a `NotFound` error then you will need to install a custom metrics service.
-The recommended custom metrics service to use with the Autonomous Operator is the https://github.com/DirectXMan12/k8s-prometheus-adapter[Prometheus Adapter^].
+The recommended custom metrics service to use with the Kubernetes Operator is the https://github.com/DirectXMan12/k8s-prometheus-adapter[Prometheus Adapter^].
When performing auto-scaling based on Couchbase Server metrics, the discovery of available metrics can be performed through Prometheus queries that are beyond the scope of this document.
However, the Couchbase Server docs contain a https://docs.couchbase.com/server/current/metrics-reference/metrics-reference.html[list of the Couchbase metrics available^].
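The wiring this page describes (a user-managed `HorizontalPodAutoscaler` targeting an Operator-managed `CouchbaseAutoscaler`) can be sketched end to end. The replica bounds and CPU target below are illustrative values, not tested recommendations.

[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: query-hpa               # illustrative name
spec:
  scaleTargetRef:
    apiVersion: couchbase.com/v2
    kind: CouchbaseAutoscaler   # created by the Operator when autoscaleEnabled: true
    name: query.cb-example      # server-class name, then cluster name
  minReplicas: 2                # keep above 1 to tolerate single-node failure
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
----

Scaling recommendations computed from the metric flow through `couchbaseautoscalers.spec.size` into `couchbaseclusters.spec.servers.size`, as described in the scale-subresource section.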
diff --git a/modules/ROOT/pages/concept-couchbase-logging.adoc b/modules/ROOT/pages/concept-couchbase-logging.adoc
index f888cef..b6cc819 100644
--- a/modules/ROOT/pages/concept-couchbase-logging.adoc
+++ b/modules/ROOT/pages/concept-couchbase-logging.adoc
@@ -24,7 +24,7 @@ By default, Couchbase cluster deployments process logs within the Couchbase Serv
When a Couchbase cluster deployment is configured to use xref:concept-persistent-volumes.adoc[persistent volumes] -- as is xref:best-practices.adoc#storage[recommended] for all production deployments -- log files are written to either the `default` or `logs` volume.
When using default logging, logs cannot be collected from the Couchbase Server container's standard console output, as is https://kubernetes.io/docs/concepts/cluster-administration/logging/[typical^] in Kubernetes environments.
-Instead, the Autonomous Operator package is distributed with a support tool -- xref:tools/cao.adoc[`cao`] -- which can be used to collect a log snapshot at any time.
+Instead, the Kubernetes Operator package is distributed with a support tool -- xref:tools/cao.adoc[`cao`] -- which can be used to collect a log snapshot at any time.
This tool is often used for collecting resources, logs, and events from the Kubernetes cluster for use in Couchbase Support requests.
It is also capable of collecting just Couchbase-related logs via its `--collectinfo` option.
@@ -39,19 +39,19 @@ To avoid these limitations, you can choose to configure <>.
-Although Couchbase Server doesn't natively support log forwarding, the Autonomous Operator can optionally deploy and manage a third-party log processor on each Couchbase pod that enables Couchbase Server logs to be forwarded to the log processor's standard console output as well as other destinations.
+Although Couchbase Server doesn't natively support log forwarding, the Kubernetes Operator can optionally deploy and manage a third-party log processor on each Couchbase pod that enables Couchbase Server logs to be forwarded to the log processor's standard console output as well as other destinations.
Forwarding logs to standard console output is desirable since it allows for https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs[simple debugging^] and standards-based integration with centralized log management systems running in the Kubernetes cluster.
=== How Log Forwarding Works
There are two primary components that provide log forwarding:
-. A _log processor image_ is used by the Autonomous Operator to deploy the `logging` sidecar container onto each Couchbase Server pod.
+. A _log processor image_ is used by the Kubernetes Operator to deploy the `logging` sidecar container onto each Couchbase Server pod.
The log processor reads the logs that Couchbase Server has written to a persistent volume, processes them, and then forwards them to a destination such as standard console output.
. The _log forwarding configuration_, stored in a Kubernetes Secret, that gets consumed by the `logging` sidecar container and which controls its behavior.
-When log forwarding is enabled, the Autonomous Operator uses Couchbase-provided <> for both of these components.
+When log forwarding is enabled, the Kubernetes Operator uses Couchbase-provided <> for both of these components.
[NOTE]
====
@@ -72,12 +72,12 @@ image::logging-sidecar-overview.png[]
=== About the Default Log Forwarding Image and Configuration
Log forwarding can be enabled via the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification (it is _disabled_ by default).
-When log forwarding is enabled, the Autonomous Operator defaults to a Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[_log processor image_^] that is based on https://fluentbit.io/[Fluent Bit^].
-The Autonomous Operator also automatically deploys a default _log forwarding configuration_ in the form of a Kubernetes Secret that gets consumed by the `logging` sidecar container.
+When log forwarding is enabled, the Kubernetes Operator defaults to a Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[_log processor image_^] that is based on https://fluentbit.io/[Fluent Bit^].
+The Kubernetes Operator also automatically deploys a default _log forwarding configuration_ in the form of a Kubernetes Secret that gets consumed by the `logging` sidecar container.
The default, Couchbase-supplied log processor image provides several benefits, such as built-in _parsers_, _filters_, and optional log _redaction_ (another type of filtering), as well as the ability to restart Fluent Bit without having to restart the entire pod, thus providing better performance and higher availability than the standard Fluent Bit image.
-The built-in parsers and filters are stored in individual configuration files, which are then combined to provide the default https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/configuration-file[_main configuration_^] deployed by the Autonomous Operator.
-Alternatively, these built-in parsers and filters can be selectively invoked by a custom, user-supplied main configuration that can be used instead of the default one provided by the Autonomous Operator.
+The built-in parsers and filters are stored in individual configuration files, which are then combined to provide the default https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/configuration-file[_main configuration_^] deployed by the Kubernetes Operator.
+Alternatively, these built-in parsers and filters can be selectively invoked by a custom, user-supplied main configuration that can be used instead of the default one provided by the Kubernetes Operator.
The default log forwarding configuration outputs log events to the `logging` container's standard console output.
However, a custom configuration can include more than one output, allowing specific logs to be https://docs.fluentbit.io/manual/concepts/data-pipeline/router[routed^] to different -- even multiple -- destinations.
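Enabling the default log forwarding described above is a small addition to the `CouchbaseCluster` specification. This is a minimal sketch: the field names follow the 2.x schema and the image tag is illustrative, so verify both against the resource reference for your Operator version.

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: my-cluster
spec:
  logging:
    server:
      enabled: true                     # deploy the logging sidecar on each Couchbase pod
      sidecar:
        image: couchbase/fluent-bit:1.2.1   # Couchbase-supplied log processor image (tag illustrative)
```

With no further configuration, the Operator generates the default log forwarding Secret and the sidecar forwards logs to the container's standard console output.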
diff --git a/modules/ROOT/pages/concept-couchbase-networking.adoc b/modules/ROOT/pages/concept-couchbase-networking.adoc
index 304c1f9..84d4a0c 100644
--- a/modules/ROOT/pages/concept-couchbase-networking.adoc
+++ b/modules/ROOT/pages/concept-couchbase-networking.adoc
@@ -221,7 +221,7 @@ However, if you are running Sync Gateway on the same Kubernetes cluster as Couch
| ⚠️ Yes
|===
-In <> above, the *Relationship to Cluster* column indicates the location of the Sync Gateway cluster in relation to the Couchbase cluster that is being managed by the Autonomous Operator. *_Local_* refers to instances where Sync Gateway is deployed in the same Kubernetes cluster where Couchbase Server is running (see <> and <>).
+In <> above, the *Relationship to Cluster* column indicates the location of the Sync Gateway cluster in relation to the Couchbase cluster that is being managed by the Kubernetes Operator. *_Local_* refers to instances where Sync Gateway is deployed in the same Kubernetes cluster where Couchbase Server is running (see <> and <>).
*_Remote_* refers to instances where Sync Gateway is deployed outside of the Kubernetes cluster where Couchbase Server is running (see <>).
* Sync Gateway 2.8.2 and higher do not experience any connection issues related to exposed features as these versions have full support for DNS SRV lookup _and_ support explicit network selection.
diff --git a/modules/ROOT/pages/concept-kubernetes-networking.adoc b/modules/ROOT/pages/concept-kubernetes-networking.adoc
index e77b612..85de90c 100644
--- a/modules/ROOT/pages/concept-kubernetes-networking.adoc
+++ b/modules/ROOT/pages/concept-kubernetes-networking.adoc
@@ -77,7 +77,7 @@ image:NetworkDocsRouted-Overlay.png["Overlay network architecture"]
Istio is currently the only supported service mesh.
-Use of service meshes should, for the most part, be transparent to the Autonomous Operator and Couchbase cluster, however there are a few things to be aware of:
+Use of service meshes should, for the most part, be transparent to the Kubernetes Operator and Couchbase cluster; however, there are a few things to be aware of:
* Dynamic Admission Controller Considerations
@@ -87,11 +87,11 @@ Use of service meshes should, for the most part, be transparent to the Autonomou
* Couchbase Cluster Considerations
-** The service mesh must be enabled in the namespace before you install the Autonomous Operator and provision any Couchbase clusters.
-** You cannot enable/disable a service mesh in a namespace where an Autonomous Operator deployment is already running.
+** The service mesh must be enabled in the namespace before you install the Kubernetes Operator and provision any Couchbase clusters.
+** You cannot enable/disable a service mesh in a namespace where a Kubernetes Operator deployment is already running.
This is especially true of migration to strict mTLS as Couchbase cluster nodes will not be able to communicate with one another during the upgrade.
While an upgrade to permissive mTLS may work, it has not been tested, therefore is unsupported.
-** You should not configure the Autonomous Operator to use TLS if the service mesh is already providing an mTLS transport.
+** You should not configure the Kubernetes Operator to use TLS if the service mesh is already providing an mTLS transport.
** IP-based networking should not be used with strict mTLS.
** In order to establish connections between a client and server when strict mTLS is enabled, both the client and server need to be running with an Envoy proxy, and be part of the same Istio control plane.
For this reason, clients and XDCR connections originating from outside of the Kubernetes cluster must connect to a Couchbase cluster with either no or permissive mTLS.
@@ -137,7 +137,7 @@ Each network plugin may have its own way of configuring the network policy or us
For the purposes of this documentation we only reference the default Kubernetes configuration to allow it to be reused or translated to the specific implementation required.
For more secure deployments, the typical approach is to have a https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-and-all-egress-traffic[deny-all traffic that is not approved^] within a namespace.
-If this is the case then the following rules need to be implemented to allow Couchbase Server and Couchbase Autonomous Operator to correctly function.
+If this is the case, the following rules need to be implemented to allow Couchbase Server and the Couchbase Kubernetes Operator to function correctly.
.Network Policy Rules
* DNS resolution must be possible, typically by enabling port 53 traffic although this can be constrained to the DNS deployment namespace and/or pods as required.
@@ -152,7 +152,7 @@ If this is the case then the following rules need to be implemented to allow Cou
* SDKs require traffic to be allowed from their location to the Couchbase Server pods, primarily they use ports 8091/18091 and 11210/11207 for bootstrapping.
** This will be specific to the SDK used but each SDK has a https://docs.couchbase.com/go-sdk/current/howtos/managing-connections.html[managing connections] page.
-The recommendation would be deploy Couchbase Server and Couchbase Autonomous Operator within a dedicated namespace and allow internal traffic within it.
+The recommendation would be to deploy Couchbase Server and the Couchbase Kubernetes Operator within a dedicated namespace and allow internal traffic within it.
The limited external ingress and egress required can then be added as well.
It may be good to start with this approach and then lock it down as required or to aid with debugging any issues found.
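The recommended starting point — allow all traffic inside the dedicated namespace plus DNS egress — can be sketched with a standard `NetworkPolicy`; the namespace name is an assumption, and the limited external ingress/egress rules for SDKs and XDCR would be layered on top.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-couchbase-namespace
  namespace: couchbase               # assumed dedicated namespace
spec:
  podSelector: {}                    # applies to every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - podSelector: {}                # allow all pod-to-pod traffic within the namespace
  egress:
  - to:
    - podSelector: {}
  - ports:                           # allow DNS resolution (constrain further as required)
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```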
diff --git a/modules/ROOT/pages/concept-label-selection.adoc b/modules/ROOT/pages/concept-label-selection.adoc
index d1c8ba6..320161f 100644
--- a/modules/ROOT/pages/concept-label-selection.adoc
+++ b/modules/ROOT/pages/concept-label-selection.adoc
@@ -1,23 +1,23 @@
= Couchbase Resource Label Selection
[abstract]
-The Autonomous Operator manages a Couchbase deployment by aggregating many different types of Kubernetes custom resources.
-By labeling resources, the Autonomous Operator knows which resources to select and aggregate into a logical configuration.
+The Kubernetes Operator manages a Couchbase deployment by aggregating many different types of Kubernetes custom resources.
+By labeling resources, the Kubernetes Operator knows which resources to select and aggregate into a logical configuration.
== Overview
The `CouchbaseCluster` resource does not contain a single, monolithic configuration for an entire Couchbase cluster.
-Instead, configurations for things like buckets, replications, users, etc. are defined as separate resources, which the Autonomous Operator then selects and aggregates into a logical configuration.
+Instead, configurations for things like buckets, replications, users, etc. are defined as separate resources, which the Kubernetes Operator then selects and aggregates into a logical configuration.
(One of the main reasons for this design is to allow for xref:concept-rbac.adoc[custom resource RBAC].)
-All of the Couchbase resources outside of the main `CouchbaseCluster` type are collected by the Autonomous Operator using a list operation in the namespace of the Couchbase cluster.
+All of the Couchbase resources outside of the main `CouchbaseCluster` type are collected by the Kubernetes Operator using a list operation in the namespace of the Couchbase cluster.
The list operation is optionally supplied with a user-defined _label selector_.
Any resource that has the same set of _labels_ that match the label selector of a `CouchbaseCluster` resource will be aggregated.
== Default Selection Behavior
Let's take the `CouchbaseBucket` resource for example.
-By default, when bucket management is enabled in the `CouchbaseCluster`, but no label selector is defined, the Autonomous Operator will select and aggregate any "label-less" bucket resources for management on the cluster.
+By default, when bucket management is enabled in the `CouchbaseCluster`, but no label selector is defined, the Kubernetes Operator will select and aggregate any "label-less" bucket resources for management on the cluster.
Refer to diagram below:
[#image-buckets-single-cluster-no-label]
@@ -25,7 +25,7 @@ Refer to diagram below:
image::selection-default.png[]
This default arrangement is well suited for when a single `CouchbaseCluster` resource is deployed in a single namespace.
-However, when _multiple_ `CouchbaseCluster` resources are deployed in the same namespace, this arrangement results in the Autonomous Operator selecting and aggregating all `CouchbaseBucket` resources to all `CouchbaseCluster` resources -- meaning that each cluster would be managing the same buckets.
+However, when _multiple_ `CouchbaseCluster` resources are deployed in the same namespace, this arrangement results in the Kubernetes Operator selecting and aggregating all `CouchbaseBucket` resources to all `CouchbaseCluster` resources -- meaning that each cluster would be managing the same buckets.
Refer to diagram below:
[#image-buckets-multi-cluster-no-label]
@@ -34,7 +34,7 @@ image::selection-default-shared.png[]
While you might desire the sharing of resources for the purposes of reducing configuration overhead, it can lead to surprising outcomes if you are not aware of the underlying selection algorithm.
For this reason, it is recommended that you specify explicit labels for resources, along with their corresponding label selectors for `CouchbaseCluster` resources.
-This ensures that the Autonomous Operator will only select and aggregate the appropriate resource for each cluster.
+This ensures that the Kubernetes Operator will only select and aggregate the appropriate resource for each cluster.
== Using Resource Labels
@@ -68,7 +68,7 @@ metadata:
cluster: my-cluster
----
-The reason for defining the label selector first is that without a label selector defined, the Autonomous Operator will immediately aggregate any unlabeled resources to the `CouchbaseCluster` once it's deployed.
+The reason for defining the label selector first is that without a label selector defined, the Kubernetes Operator will immediately aggregate any unlabeled resources to the `CouchbaseCluster` once it's deployed.
As discussed in the previous section, this can have deleterious effects if you have more than one `CouchbaseCluster` resource already deployed in the same namespace.
However, by deploying the `CouchbaseCluster` resource with the bucket label selector `cluster: my-cluster` in this example, you can ensure that the cluster will only select `CouchbaseBucket` resources that have the matching `cluster: my-cluster` label.
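Putting the label and selector together, a minimal sketch looks like the following (resource names are the example's `my-cluster`/`my-bucket`; verify the `spec.buckets` field names against your version's `CouchbaseCluster` reference):

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
  name: my-bucket
  labels:
    cluster: my-cluster            # label matched by the cluster's selector
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: my-cluster
spec:
  buckets:
    managed: true
    selector:
      matchLabels:
        cluster: my-cluster        # aggregate only buckets carrying this label
```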
diff --git a/modules/ROOT/pages/concept-memory-allocation.adoc b/modules/ROOT/pages/concept-memory-allocation.adoc
index 0ead224..d7c79ef 100644
--- a/modules/ROOT/pages/concept-memory-allocation.adoc
+++ b/modules/ROOT/pages/concept-memory-allocation.adoc
@@ -2,10 +2,10 @@
[abstract]
Couchbase memory allocation is configured in the `CouchbaseCluster` resource.
-It's important to understand how memory allocation works in Couchbase Server, and how it applies to deployments using the Autonomous Operator.
+It's important to understand how memory allocation works in Couchbase Server, and how it applies to deployments using the Kubernetes Operator.
Kubernetes presents some unique challenges when it comes to allocating memory for Couchbase Server.
-This page discusses the various Couchbase memory allocation settings presented by the Autonomous Operator, what they actually mean, and how they should be used optimally in your deployment.
+This page discusses the various Couchbase memory allocation settings presented by the Kubernetes Operator, what they actually mean, and how they should be used optimally in your deployment.
== Memory Quota Basics
@@ -13,7 +13,7 @@ In Couchbase Server, memory is allocated _per node_, with each service having it
Once you specify the memory quota for a particular Couchbase service, an amount of memory equal to the quota will be reserved on each Couchbase cluster node where an instance of that service exists.
Note that instances of the same service cannot have different memory allocations within a cluster.
-For deployments using the Autonomous Operator, memory quotas are configured in the `CouchbaseCluster` resource.
+For deployments using the Kubernetes Operator, memory quotas are configured in the `CouchbaseCluster` resource.
Consider the following cluster of three nodes, with each node running all services:
[#image-cluster-homogeneous-service-distribution]
@@ -26,7 +26,7 @@ You'll notice that the Query service is not pictured in <> using the Autonomous Operator, the `CouchbaseCluster` configuration would include the following:
+When deploying the cluster in <> using the Kubernetes Operator, the `CouchbaseCluster` configuration would include the following:
.Cluster with homogeneous service distribution
[#cluster-homogeneous-service-distribution]
@@ -55,7 +55,7 @@ spec:
[NOTE]
====
-The memory quotas from the configuration above are the defaults that the Autonomous Operator will use if none are specified.
+The memory quotas from the configuration above are the defaults that the Kubernetes Operator will use if none are specified.
The defaults are the lowest allowed and almost certainly will need modification for your specific workload.
====
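The per-service quotas in that note correspond to fields under `spec.cluster`. A sketch with the defaults discussed above (field names and values should be verified against the resource reference for your Operator version):

```yaml
spec:
  cluster:
    dataServiceMemoryQuota: 256Mi       # defaults shown; almost certainly too low for production
    indexServiceMemoryQuota: 256Mi
    searchServiceMemoryQuota: 256Mi
    eventingServiceMemoryQuota: 256Mi
    analyticsServiceMemoryQuota: 1Gi
```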
@@ -102,7 +102,7 @@ Consider the following cluster of four nodes:
.Cluster with heterogeneous service distribution
image::memory-allocation-Heterogeneous.png[]
-When deploying the cluster in <> using the Autonomous Operator, the `CouchbaseCluster` configuration would include the following:
+When deploying the cluster in <> using the Kubernetes Operator, the `CouchbaseCluster` configuration would include the following:
.Cluster with heterogeneous service distribution
[#cluster-heterogeneous-service-distribution]
@@ -156,7 +156,7 @@ When setting memory quotas for your cluster, you'll need to consider the memory
If a Couchbase Server Pod has a total memory quota that is greater than 90% of the Kubernetes node's overall memory, Couchbase Server will produce an error.
However, since the application's memory requirements can vary by workload, it's generally recommended that Couchbase Server Pods reserve 25% more memory on top of their total memory quota (especially if the Pod is running the Data service).
-When a Couchbase cluster is deployed by the Autonomous Operator, each server Pod is scheduled onto its own dedicated Kubernetes node (recommended), or onto a shared Kubernetes node with other Pods.
+When a Couchbase cluster is deployed by the Kubernetes Operator, each server Pod is scheduled onto its own dedicated Kubernetes node (recommended), or onto a shared Kubernetes node with other Pods.
Depending on whether your Kubernetes nodes are dedicated or shared, there are slightly different considerations for when you go about setting memory quotas for Couchbase Pods.
For shared nodes, you'll be using pod resource requests with the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-resources[`couchbaseclusters.spec.servers.resources`] attribute for each server in the `CouchbaseCluster` configuration.
@@ -187,7 +187,7 @@ Modification of memory allocation will cause an upgrade of the affected pods.
[IMPORTANT]
====
It is dangerous to change both a memory quota and the resource request at the same time.
-Changing both parameters, the resource request and a quota to take advantage of the new request, at the same time could potentially lead to the Couchbase Autonomous Operator performing a swap/rebalance of all nodes in the cluster.
+Changing both parameters, the resource request and a quota to take advantage of the new request, at the same time could potentially lead to the Couchbase Kubernetes Operator performing a swap/rebalance of all nodes in the cluster.
This is due to the order in which these changes may be applied.
If the quota modification gets applied before the new resource request, the memory will not be available for the pod, prompting the Operator to create a new pod.
To prevent this, change the resource request first, then apply the quota modification.
diff --git a/modules/ROOT/pages/concept-operator-logging.adoc b/modules/ROOT/pages/concept-operator-logging.adoc
index 62be015..4c577cf 100644
--- a/modules/ROOT/pages/concept-operator-logging.adoc
+++ b/modules/ROOT/pages/concept-operator-logging.adoc
@@ -1,19 +1,19 @@
-= Autonomous Operator Logging
+= Kubernetes Operator Logging
[abstract]
-The Autonomous Operator provides flexible logging support to enable failure detection and alerting.
+The Kubernetes Operator provides flexible logging support to enable failure detection and alerting.
The Operator provides flexible logging support to enable failure detection and alerting.
It is also a key resource when submitting a support request with the xref:tools/cbopinfo.adoc[`cbopinfo`] tool.
-This page describes logging that is specific to the Autonomous Operator itself.
+This page describes logging that is specific to the Kubernetes Operator itself.
For information about Couchbase cluster logging, refer to xref:concept-couchbase-logging.adoc[].
== Overview
-The Autonomous Operator xref:howto-manage-operator-logging.adoc[emits logs] on the pod console.
+The Kubernetes Operator xref:howto-manage-operator-logging.adoc[emits logs] on the pod console.
Logs are structured as JSON with one entry per line, thus providing a simple and stable foundation for machine parsing and ingestion into 3rd-party logging databases.
-.Example of Autonomous Operator Logs
+.Example of Kubernetes Operator Logs
[source,json]
----
{"level":"info","ts":1580377225.2966235,"logger":"couchbaseutil","msg":"Cluster status","cluster":"default/cb-example","balance":"balanced","rebalancing":false}
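Because each entry is one JSON object per line, the stream is trivial to machine-parse. A minimal Python sketch using the example entry above (field names are taken directly from that entry):

```python
import json

# One log line exactly as emitted on the Operator pod console.
line = ('{"level":"info","ts":1580377225.2966235,"logger":"couchbaseutil",'
        '"msg":"Cluster status","cluster":"default/cb-example",'
        '"balance":"balanced","rebalancing":false}')

entry = json.loads(line)
# Surface only the fields an alerting pipeline typically cares about.
print(entry["level"], entry["cluster"], entry["balance"])
# → info default/cb-example balanced
```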
diff --git a/modules/ROOT/pages/concept-platform-certification.adoc b/modules/ROOT/pages/concept-platform-certification.adoc
index 8d3ff5a..6be074f 100644
--- a/modules/ROOT/pages/concept-platform-certification.adoc
+++ b/modules/ROOT/pages/concept-platform-certification.adoc
@@ -2,7 +2,7 @@
:stem:
[abstract]
-Certifying your platform for use with Couchbase Autonomous Operator.
+Certifying your platform for use with the Couchbase Kubernetes Operator.
== Why Self-Certify?
@@ -66,7 +66,7 @@ Additionally, to test platform behavior, the self-certification image will need
== What is the Self-Certification Lifecycle Process?
-Couchbase Operator Self-Certification Lifecycle is a self-service offering with an easy, step-by-step process to validate the compatibility of Kubernetes platforms and other platform-specific components such as storage and networking with Autonomous Operator.
+The Couchbase Operator Self-Certification Lifecycle is a self-service offering with an easy, step-by-step process to validate the compatibility of Kubernetes platforms and other platform-specific components, such as storage and networking, with the Kubernetes Operator.
The certification workflows consist of the following steps:
@@ -74,7 +74,7 @@ The certification workflows consist of the following steps:
. Configure the Kubernetes platform and other platform-specific components such as storage and networking correctly.
-. Run the Operator Self-Certification Tool (shipped with Couchbase Autonomous Operator 2.3 and above).
+. Run the Operator Self-Certification Tool (shipped with Couchbase Kubernetes Operator 2.3 and above).
. Submit the results to Couchbase for approval.
diff --git a/modules/ROOT/pages/concept-pod-templating.adoc b/modules/ROOT/pages/concept-pod-templating.adoc
index 563c75b..fb5b611 100644
--- a/modules/ROOT/pages/concept-pod-templating.adoc
+++ b/modules/ROOT/pages/concept-pod-templating.adoc
@@ -1,28 +1,28 @@
= Couchbase Pod Templating
[abstract]
-The Autonomous Operator allows users to define a pod template to use when creating pods for a Couchbase Server class.
+The Kubernetes Operator allows users to define a pod template to use when creating pods for a Couchbase Server class.
Modifying pod metadata such as labels and annotations will update the pod in-place. Any other modification will result in a cluster upgrade in order to fulfill the request. The Operator reserves the right to modify or replace any field.
== Operators Attribute Modifications
-Some of the attributes in the pod template are overridden by the Autonomous Operator. The following attributes are overridden:
+Some of the attributes in the pod template are overridden by the Kubernetes Operator. The following attributes are overridden:
=== Metadata
-* `metadata.name` - The name of the pod is overridden by the Autonomous Operator.
-* `metadata.labels` - The Autonomous Operator adds labels to the pod to identify the pod as part of a operator managed Couchbase cluster.
-* `metadata.annotaions` - The Autonomous Operator adds annotations to the pod to identify the pod as part of a operator managed Couchbase cluster.
+* `metadata.name` - The name of the pod is overridden by the Kubernetes Operator.
+* `metadata.labels` - The Kubernetes Operator adds labels to the pod to identify it as part of an operator-managed Couchbase cluster.
+* `metadata.annotations` - The Kubernetes Operator adds annotations to the pod to identify it as part of an operator-managed Couchbase cluster.
=== Spec
-* `spec.containers` - The Autonomous Operator overrides any containers to run the Couchbase Server process and any other required containers.
-* `spec.restartPolicy` - The Autonomous Operator overrides the restart policy to `Never`.
-* `spec.hostname` - The Autonomous Operator overrides the hostname to the pod name.
-* `spec.subdomain` - The Autonomous Operator overrides the subdomain to the cluster name.
-* `spec.terminationGracePeriodSeconds` - The Autonomous Operator overrides the termination grace period to 1200 seconds.
-* `spec.securityContext` - The Autonomous Operator overrides the security context to the security context provided in `cluster.spec.security.podSecurityContext`.
-* `spec.readinessGates` - The Autonomous Operator overrides the readiness gates to manage the readiness of the pods.
-* `spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` - The Autonomous Operator adds a pod anti-affinity rule to ensure that pods are not scheduled on the same node if `cluster.spec.antiAffinity` is set to `true`.
-* `spec.nodeSelector` - The Autonomous Operator adds a node selector to ensure that pods are scheduled on the correct Availability Zone if they are enabled for the pods server class.
-* `spec.hostAliases` - The Autonomous Operator overrides any host aliases to map `127.0.0.1` to `localhost` the pod DNS name, unless istio mode is enabled.
-* `spec.volumes` - The Autonomous Operator adds any required volumes to the pod.
+* `spec.containers` - The Kubernetes Operator overrides any containers to run the Couchbase Server process and any other required containers.
+* `spec.restartPolicy` - The Kubernetes Operator overrides the restart policy to `Never`.
+* `spec.hostname` - The Kubernetes Operator overrides the hostname to the pod name.
+* `spec.subdomain` - The Kubernetes Operator overrides the subdomain to the cluster name.
+* `spec.terminationGracePeriodSeconds` - The Kubernetes Operator overrides the termination grace period to 1200 seconds.
+* `spec.securityContext` - The Kubernetes Operator overrides the security context to the security context provided in `cluster.spec.security.podSecurityContext`.
+* `spec.readinessGates` - The Kubernetes Operator overrides the readiness gates to manage the readiness of the pods.
+* `spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` - The Kubernetes Operator adds a pod anti-affinity rule to ensure that pods are not scheduled on the same node if `cluster.spec.antiAffinity` is set to `true`.
+* `spec.nodeSelector` - The Kubernetes Operator adds a node selector to ensure that pods are scheduled in the correct availability zone when availability zones are enabled for the pod's server class.
+* `spec.hostAliases` - The Kubernetes Operator overrides any host aliases to map `127.0.0.1` to `localhost` and the pod DNS name, unless Istio mode is enabled.
+* `spec.volumes` - The Kubernetes Operator adds any required volumes to the pod.
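Fields outside the overridden list above pass through to the pod. A minimal sketch of a server-class pod template (the `priorityClassName` value assumes a pre-existing `PriorityClass`, and field placement should be verified against your version's `CouchbaseCluster` reference):

```yaml
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: my-cluster
spec:
  servers:
  - name: data
    size: 3
    services: ["data"]
    pod:
      metadata:
        labels:
          team: platform                 # merged with the Operator-managed labels
      spec:
        priorityClassName: high-priority # assumed existing PriorityClass; not overridden by the Operator
```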
diff --git a/modules/ROOT/pages/concept-rbac.adoc b/modules/ROOT/pages/concept-rbac.adoc
index 6ea595b..466b582 100644
--- a/modules/ROOT/pages/concept-rbac.adoc
+++ b/modules/ROOT/pages/concept-rbac.adoc
@@ -1,7 +1,7 @@
= Couchbase Resource RBAC
[abstract]
-The Autonomous Operator manages many different types of Kubernetes custom resources, giving you the ability to control access to your Couchbase deployments based on which resource type each user should have access to.
+The Kubernetes Operator manages many different types of Kubernetes custom resources, giving you the ability to control access to your Couchbase deployments based on which resource type each user should have access to.
== Overview
diff --git a/modules/ROOT/pages/concept-scheduling.adoc b/modules/ROOT/pages/concept-scheduling.adoc
index 08ab554..a36f32b 100644
--- a/modules/ROOT/pages/concept-scheduling.adoc
+++ b/modules/ROOT/pages/concept-scheduling.adoc
@@ -26,7 +26,7 @@ In a Kubernetes cluster dedicated to a single Couchbase cluster deployment, enab
Anti-affinity does not prevent distinct Couchbase clusters from being scheduled on the same Kubernetes nodes.
Neither does it prevent Couchbase clusters from being scheduled alongside other applications that may interfere with them.
-The Autonomous Operator offers two parameters that can be used in concert to guarantee workload isolation.
+The Kubernetes Operator offers two parameters that can be used in concert to guarantee workload isolation.
=== Taints and Tolerations
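The taint-plus-toleration pattern for workload isolation can be sketched as follows; the taint key/value and label names are illustrative, and the taint itself is applied to the dedicated nodes out of band.

```yaml
# Applied beforehand: kubectl taint nodes <node> couchbase=true:NoSchedule
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: my-cluster
spec:
  servers:
  - name: all_services
    size: 3
    services: ["data", "index", "query"]
    pod:
      spec:
        nodeSelector:
          couchbase: "true"          # schedule only onto the dedicated, labeled nodes
        tolerations:
        - key: couchbase
          operator: Equal
          value: "true"
          effect: NoSchedule         # tolerate the taint that repels other workloads
```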
diff --git a/modules/ROOT/pages/concept-tls.adoc b/modules/ROOT/pages/concept-tls.adoc
index be7cce3..9aa3fbb 100644
--- a/modules/ROOT/pages/concept-tls.adoc
+++ b/modules/ROOT/pages/concept-tls.adoc
@@ -190,7 +190,7 @@ Determining a full proof way to completely obfuscate the passphrase is beyond th
=== Securing Local Scripts
-When script passphrase registration is configured, the Autonomous Operator employs additional security measures to ensure that script creation and execution cannot be compromised by any external parties.
+When script passphrase registration is configured, the Kubernetes Operator employs additional security measures to ensure that script creation and execution cannot be compromised by any external parties.
Therefore, the user only needs to provide a passphrase secret while the Operator internally creates the passphrase script and mounts it into each of the Couchbase Server Pods.
The Operator also mounts another private key into Couchbase Server in order to securely transfer the passphrase to Couchbase for on-demand use rather than statically mounting the passphrase in the Server Pod alongside the encrypted key. The overall design is as follows:
diff --git a/modules/ROOT/pages/concept-user-rbac.adoc b/modules/ROOT/pages/concept-user-rbac.adoc
index d03963d..4b2dd74 100644
--- a/modules/ROOT/pages/concept-user-rbac.adoc
+++ b/modules/ROOT/pages/concept-user-rbac.adoc
@@ -1,7 +1,7 @@
= Couchbase User RBAC
[abstract]
-The Couchbase Autonomous Operator manages Couchbase Role-Based Access Control (RBAC) for the authorization of administrative users and groups.
+The Couchbase Kubernetes Operator manages Couchbase Role-Based Access Control (RBAC) for the authorization of administrative users and groups.
== Overview
@@ -12,7 +12,7 @@ The `CouchbaseGroup` resource contains a set of roles that are to be applied to
.Basic User Role Binding
image::user-binding-default.png[]
-The Autonomous Operator only creates users that are bound to groups.
+The Kubernetes Operator only creates users that are bound to groups.
Therefore, all three resources are required in order to create an authorized user.
== User Role Binding
@@ -48,7 +48,7 @@ a| * **security_admin:** Management of user roles
The `CouchbaseGroup` represents a collection of administrative and bucket roles which can be applied to users.
There is a direct association between the `CouchbaseGroup` resource and xref:server:learn:security/authorization-overview.adoc[Couchbase Server RBAC groups] such that Couchbase Server RBAC groups are created and deleted when corresponding `CouchbaseGroup` resources are created and deleted.
-Whenever users are bound to a group, the Autonomous Operator ensures that the corresponding Couchbase Server RBAC group exists with the requested roles, and then proceeds to add the requested users to the group.
+Whenever users are bound to a group, the Kubernetes Operator ensures that the corresponding Couchbase Server RBAC group exists with the requested roles, and then proceeds to add the requested users to the group.
=== Setting Roles
@@ -111,7 +111,7 @@ Multiple Buckets, Scopes and Collections can be specified. However, `CouchbaseS
=== Referencing LDAP Groups
Couchbase Server provides LDAP integration which allows external user authentication.
-The Autonomous Operator can be used to enable this functionality when an `ldapGroupRef` is specified within the `CouchbaseGroup` resource.
+The Kubernetes Operator can be used to enable this functionality when an `ldapGroupRef` is specified within the `CouchbaseGroup` resource.
[source,yaml]
----
@@ -180,7 +180,7 @@ metadata:
----
Note that `CouchbaseRoleBinding` resources don't support labels and don't directly utilize label selection.
-Instead, the Autonomous Operator looks at the names of the users and groups specified in the `CouchbaseRoleBinding` resource.
+Instead, the Kubernetes Operator looks at the names of the users and groups specified in the `CouchbaseRoleBinding` resource.
If those names match the `CouchbaseUser` and `CouchbaseGroup` resources that are both being selected by the same cluster, then they will be bound together.
Therefore, so long as the `CouchbaseUser` and `CouchbaseGroup` resources have the same label, the users and groups specified in the `CouchbaseRoleBinding` resource will be bound together.
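+
+As a sketch of this label-matching flow (resource names, labels, and secret names below are illustrative, not prescriptive):
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseUser
+metadata:
+  name: alice
+  labels:
+    cluster: cb-example # same label selected by the cluster
+spec:
+  authDomain: local
+  authSecret: alice-password
+---
+apiVersion: couchbase.com/v2
+kind: CouchbaseGroup
+metadata:
+  name: admins
+  labels:
+    cluster: cb-example # same label as the user
+spec:
+  roles:
+  - name: cluster_admin
+---
+apiVersion: couchbase.com/v2
+kind: CouchbaseRoleBinding
+metadata:
+  name: admins-binding
+spec:
+  subjects:
+  - kind: CouchbaseUser
+    name: alice # must match the CouchbaseUser name above
+  roleRef:
+    kind: CouchbaseGroup
+    name: admins # must match the CouchbaseGroup name above
+----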
diff --git a/modules/ROOT/pages/helm-couchbase-config.adoc b/modules/ROOT/pages/helm-couchbase-config.adoc
index 1c12fde..d2409c7 100644
--- a/modules/ROOT/pages/helm-couchbase-config.adoc
+++ b/modules/ROOT/pages/helm-couchbase-config.adoc
@@ -2,7 +2,7 @@
include::partial$constants.adoc[]
[abstract]
-The official Couchbase Helm Chart for the Autonomous Operator comes with a default configuration that can be customized to fit your deployment needs.
+The official Couchbase Helm Chart for the Kubernetes Operator comes with a default configuration that can be customized to fit your deployment needs.
This page describes the parameters of the official Couchbase Helm Chart.
In particular, this page describes the contents of the chart's https://github.com/couchbase-partners/helm-charts/blob/master/charts/couchbase-operator/values.yaml[values.yaml^], which contains the chart's default values.
@@ -62,7 +62,7 @@ The Couchbase Chart is capable of installing the Operator, Admission Controller,
[#install]
=== `couchbaseOperator`
-This field specifies whether or not the Couchbase Autonomous Operator will be installed.
+This field specifies whether or not the Couchbase Kubernetes Operator will be installed.
[cols="1",options="header"]
|===
@@ -513,7 +513,7 @@ The search domains to use when looking up hostnames
_Value rules:_ The `coredns.searches` value is an optional list of strings.
-== Autonomous Operator
+== Kubernetes Operator
The Helm chart deploys the Operator as a Kubernetes https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[Deployment].
diff --git a/modules/ROOT/pages/helm-managing-guide.adoc b/modules/ROOT/pages/helm-managing-guide.adoc
index 296e24c..c5155ae 100644
--- a/modules/ROOT/pages/helm-managing-guide.adoc
+++ b/modules/ROOT/pages/helm-managing-guide.adoc
@@ -7,7 +7,7 @@ You can manage the release by updating, upgrading, or uninstalling it.
When you install a Helm chart, the Helm client creates an instance of the chart in your Kubernetes cluster.
This instance is called a _release_, and Helm uses it to manage all of the objects and resources that the chart creates.
-When you deploy the Autonomous Operator or another component via the Couchbase Helm Chart, you will manage the deployed resources by making updates to the release in Helm.
+When you deploy the Kubernetes Operator or another component via the Couchbase Helm Chart, you will manage the deployed resources by making updates to the release in Helm.
The sections on this page describe how to manage the Helm releases that make up your deployments.
@@ -31,9 +31,9 @@ This is to ensure that Helm can continue to update and manage all resources appr
== Upgrade a Deployment
-=== Upgrading the Autonomous Operator
+=== Upgrading the Kubernetes Operator
-Upgrading the Autonomous Operator and Admission Controller to a newer version requires that you upgrade the _release_ to a newer version of its _chart_.
+Upgrading the Kubernetes Operator and Admission Controller to a newer version requires that you upgrade the _release_ to a newer version of its _chart_.
This allows Helm to ensure that all dependencies specified by the chart get updated appropriately.
[IMPORTANT]
@@ -65,7 +65,7 @@ Once the CRD upgrade is complete, then upgrade the chart version:
helm upgrade --version {helm-repo}
----
-Where `` is the version of the Couchbase Helm Chart that you want to upgrade to, and `` is the name of the release that is managing the instance of the Autonomous Operator that you wish to upgrade.
+Where `` is the version of the Couchbase Helm Chart that you want to upgrade to, and `` is the name of the release that is managing the instance of the Kubernetes Operator that you wish to upgrade.
[WARNING]
====
@@ -87,11 +87,11 @@ If upgrade fails you will need to rebuild your cluster.
==== Limitations
-* It is not possible to upgrade the 1.2.x Autonomous Operator chart to 2.x.x.
-If you have a previous installation of the Autonomous Operator chart, you will need to <>, as well as delete the CRD.
+* It is not possible to upgrade the 1.2.x Kubernetes Operator chart to 2.x.x.
+If you have a previous installation of the Kubernetes Operator chart, you will need to <>, as well as delete the CRD.
-* If you didn't originally install the Autonomous Operator using Helm, then you cannot upgrade the Autonomous Operator using Helm.
-At this time, installations of the Autonomous Operator that weren't created with Helm cannot be ported over to using Helm.
+* If you didn't originally install the Kubernetes Operator using Helm, then you cannot upgrade the Kubernetes Operator using Helm.
+At this time, installations of the Kubernetes Operator that weren't created with Helm cannot be ported over to using Helm.
=== Upgrading a Couchbase Cluster
@@ -102,7 +102,7 @@ The first method is preferred, since upgrading the entire chart will ensure that
[IMPORTANT]
====
-When upgrading a Couchbase cluster, you should first upgrade the Autonomous Operator to the latest compatible version so as to ensure that the cluster can be properly managed.
+When upgrading a Couchbase cluster, you should first upgrade the Kubernetes Operator to the latest compatible version so as to ensure that the cluster can be properly managed.
====
.To upgrade a Couchbase cluster by using a newer chart
diff --git a/modules/ROOT/pages/helm-setup-guide.adoc b/modules/ROOT/pages/helm-setup-guide.adoc
index f22071d..146bcc9 100644
--- a/modules/ROOT/pages/helm-setup-guide.adoc
+++ b/modules/ROOT/pages/helm-setup-guide.adoc
@@ -2,17 +2,17 @@
include::partial$constants.adoc[]
[abstract]
-Use the official Couchbase Helm Chart to deploy multiple components, including the Autonomous Operator, Admission Controller, Couchbase clusters, and Sync Gateway.
+Use the official Couchbase Helm Chart to deploy multiple components, including the Kubernetes Operator, Admission Controller, Couchbase clusters, and Sync Gateway.
https://helm.sh/[Helm^] is a tool that streamlines the installation and management of applications on Kubernetes platforms.
-The official Couchbase Helm Chart can help you easily set up the Couchbase Autonomous Operator and deploy Couchbase clusters.
+The official Couchbase Helm Chart can help you easily set up the Couchbase Kubernetes Operator and deploy Couchbase clusters.
-This page describes how to use the Couchbase Helm Chart to create various deployments of the Autonomous Operator, Admission Controller, Couchbase clusters, and Sync Gateway.
+This page describes how to use the Couchbase Helm Chart to create various deployments of the Kubernetes Operator, Admission Controller, Couchbase clusters, and Sync Gateway.
The Couchbase Helm Chart is primarily intended to make it easy to deploy with the defaults to get a working system in an empty cluster.
For more complex scenarios, make sure to refer to the operator documentation as well, particularly the xref:concept-operator.adoc[operator architecture] and xref:reference-reference-architecture.adoc[reference architecture].
-A particular use case that is complex is upgrading so make sure to cover all the xref:howto-operator-upgrade.adoc[Autonomous Operator upgrade] and xref:concept-upgrade.adoc[Couchbase Server upgrade] sections.
+Upgrading is a particularly complex use case, so make sure to review all of the xref:howto-operator-upgrade.adoc[Kubernetes Operator upgrade] and xref:concept-upgrade.adoc[Couchbase Server upgrade] sections.
The recommendation, for more complex scenarios, is to manage the Operator directly rather than relying on Helm, as this provides more direct control and simplifies the upgrade process.
@@ -46,7 +46,7 @@ helm repo update
== Install the Couchbase Helm Chart
Use the following commands to install the default Couchbase Helm Chart.
-The default chart deploys the Autonomous Operator, the Admission Controller, and a Couchbase cluster.
+The default chart deploys the Kubernetes Operator, the Admission Controller, and a Couchbase cluster.
[{tabs}]
@@ -104,13 +104,13 @@ helm install -f openshift_values.yaml {helm-repo}
--
====
-Installing the default chart provides a quick way to try out using the Autonomous Operator for managing Couchbase Server on Kubernetes platforms.
+Installing the default chart provides a quick way to try out using the Kubernetes Operator for managing Couchbase Server on Kubernetes platforms.
However, for more involved development and production use-cases, you will need to customize the installation to better suit your needs.
[#custom-installation]
== Customize the Installation
-The Couchbase Helm Chart can be installed as-is for previewing Autonomous Operator functionality.
+The Couchbase Helm Chart can be installed as-is for previewing Kubernetes Operator functionality.
However, customizing the installation with your own configuration will be necessary for production environments.
Customizing the chart installation allows you to do two things:
@@ -118,7 +118,7 @@ Customizing the chart installation allows you to do two things:
. Specify which components will be deployed
. Configure the deployed components
-The Couchbase Helm Chart is capable of installing and configuring the Autonomous Operator, Admission Controller, Couchbase cluster, and Sync Gateway.
+The Couchbase Helm Chart is capable of installing and configuring the Kubernetes Operator, Admission Controller, Couchbase cluster, and Sync Gateway.
Enabling and configuring each component is accomplished by https://helm.sh/docs/chart_template_guide/values_files/[overriding^] the default values in the Couchbase Helm Chart's xref:helm-couchbase-config.adoc[`values.yaml` file].
There are two methods for specifying overrides during chart installation: `--values` and `--set`.
@@ -206,8 +206,8 @@ install:
syncGateway: false
----
-For example, if you wanted to have a Helm release that exclusively managed the Autonomous Operator and Admission Controller, then you would override the value for `couchbaseCluster` with a value of `false`, leaving only `couchbaseOperator: true` and `admissionController: true`, and all others `false`.
-Likewise, if you already had the Autonomous Operator and Admission Controller deployed in your environment, and you just wanted to deploy a Couchbase cluster, then you would override the values for `couchbaseOperator` and `admissionController` with a value of `false`, leaving only `couchbaseCluster: true`, and all others `false`.
+For example, if you wanted to have a Helm release that exclusively managed the Kubernetes Operator and Admission Controller, then you would override the value for `couchbaseCluster` with a value of `false`, leaving only `couchbaseOperator: true` and `admissionController: true`, and all others `false`.
+Likewise, if you already had the Kubernetes Operator and Admission Controller deployed in your environment, and you just wanted to deploy a Couchbase cluster, then you would override the values for `couchbaseOperator` and `admissionController` with a value of `false`, leaving only `couchbaseCluster: true`, and all others `false`.
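+
+For example, a values override for the first scenario (a release that manages only the Operator and Admission Controller) might look like:
+
+[source,yaml]
+----
+install:
+  couchbaseOperator: true
+  admissionController: true
+  couchbaseCluster: false
+  syncGateway: false
+----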
Even though the Couchbase Helm Chart has full configuration parameters for each component, if a component is disabled in the `install` section, then that component's configuration parameters are ignored.
@@ -256,14 +256,14 @@ To change the type of service that is used to expose Sync Gateway, you can speci
helm install mobile --set install.syncGateway=true --set syncGateway.config.use_tls_server=false --set syncGateway.exposeServiceType=LoadBalancer {helm-repo}
----
-For more information about using Sync Gateway with the Autonomous Operator, you can refer to the xref:tutorial-sync-gateway.adoc[Sync Gateway Tutorial].
+For more information about using Sync Gateway with the Kubernetes Operator, you can refer to the xref:tutorial-sync-gateway.adoc[Sync Gateway Tutorial].
[#deploy-production]
== Production Considerations
=== TLS Encryption
-Production deployments should enable TLS to encrypt traffic between the Autonomous Operator and the Couchbase cluster.
+Production deployments should enable TLS to encrypt traffic between the Kubernetes Operator and the Couchbase cluster.
TLS certificates can be auto-generated, or provided by the user.
[#tls-certificate-gen]
@@ -276,17 +276,17 @@ Install the chart with `tls` enabled:
helm install my-release --set tls.generate=true {helm-repo}
----
-The Autonomous Operator will create the certificates and then configure them as Kubernetes Secrets for the cluster.
+The Kubernetes Operator will create the certificates and then configure them as Kubernetes Secrets for the cluster.
[NOTE]
====
-There is an issue (https://issues.couchbase.com/browse/K8S-1900[K8S-1900^]) that may cause a certificate error when using the Helm chart to upgrade the Autonomous Operator:
+There is an issue (https://issues.couchbase.com/browse/K8S-1900[K8S-1900^]) that may cause a certificate error when using the Helm chart to upgrade the Kubernetes Operator:
----
certificate cannot be verified for zone
----
-This issue is caused by the certificate not having the necessary subject alternative names (SANs) required by the new version of the Autonomous Operator.
+This issue is caused by the certificate not having the necessary subject alternative names (SANs) required by the new version of the Kubernetes Operator.
To resolve this issue, start by regenerating the Secrets from the new chart version:
@@ -304,7 +304,7 @@ Now update the Secrets in Kubernetes with the new ones:
kubectl apply -f secrets.yaml
----
-The Autonomous Operator should now pick up the new certificates and proceed through the upgrade process.
+The Kubernetes Operator should now pick up the new certificates and proceed through the upgrade process.
====
[#tls-certificate-byo]
@@ -332,7 +332,7 @@ helm install my-release -f tls_values.yaml {helm-repo}
=== Deploying Multiple Chart Instances (Releases)
The example installation commands on this page assume the default namespace is used (these commands don't specify the `-n` option).
-This is important to note because the Couchbase Helm Chart deploys both the Autonomous Operator and the Admission Controller by default, _and these components should not be deployed more than once in the same namespace_.
+This is important to note because the Couchbase Helm Chart deploys both the Kubernetes Operator and the Admission Controller by default, _and these components should not be deployed more than once in the same namespace_.
_The Admission Controller should only be deployed once per Kubernetes cluster_ as indicated in <> and in the xref:concept-operator.adoc#dynamic-admission-controller[operator architecture].
To prevent deployment of the Admission Controller by the Couchbase Helm Chart, you can set the `install.admissionController=false` parameter either in the values file or on the command line:
@@ -342,7 +342,7 @@ To prevent deployment of the Admission Controller by the Couchbase Helm Chart, y
helm install my-release --set install.admissionController=false {helm-repo}
----
-If you install the default Couchbase Helm Chart multiple times in the same namespace, then you'll end up with multiple instances of the Autonomous Operator and the Admission Controller, which will cause errors in your deployments.
+If you install the default Couchbase Helm Chart multiple times in the same namespace, then you'll end up with multiple instances of the Kubernetes Operator and the Admission Controller, which will cause errors in your deployments.
In addition, the example installation commands on this page also specify `my-release` as the name for the https://helm.sh/docs/glossary/#release[chart release^].
If you plan to use Helm to install multiple instances (releases) of the Couchbase Helm Chart, you should consider giving each release a unique name to help you more easily identify the resources that are associated with each release.
@@ -375,7 +375,7 @@ NAME CHART VERSION APP VERSION DES
https://hub.helm.sh/charts/couchbase/couchbase-... 2.1.0 2.1.0 A Helm chart to deploy the Couchbase Autonomous...
----
-Here, the `CHART VERSION` is *2.1.0*, and the `APP VERSION` (the Autonomous Operator version) is *2.1.0*.
+Here, the `CHART VERSION` is *2.1.0*, and the `APP VERSION` (the Kubernetes Operator version) is *2.1.0*.
To install a specific version of the Couchbase Helm Chart, include the `--version` argument during installation:
diff --git a/modules/ROOT/pages/howto-backup.adoc b/modules/ROOT/pages/howto-backup.adoc
index ed9f6a6..46cf132 100644
--- a/modules/ROOT/pages/howto-backup.adoc
+++ b/modules/ROOT/pages/howto-backup.adoc
@@ -3,14 +3,14 @@
include::partial$constants.adoc[]
[abstract]
-You can configure the Autonomous Operator to take periodic, automated backups of your Couchbase cluster with the existing functionality provided by `cbbackupmgr`, as well as being able to trigger automated immediate backups.
+You can configure the Kubernetes Operator to take periodic, automated backups of your Couchbase cluster using the existing functionality provided by `cbbackupmgr`, and you can also trigger immediate, automated backups on demand.
== Overview
This page details how to backup a Couchbase cluster and restore data in the face of disaster.
-A conceptual overview of using the Autonomous Operator to backup and restore Couchbase clusters can be found in xref:concept-backup.adoc[].
+A conceptual overview of using the Kubernetes Operator to backup and restore Couchbase clusters can be found in xref:concept-backup.adoc[].
-The Autonomous Operator supports two of the backup strategies available in `cbbackupmgr`: _Full Only_ and _Full/Incremental_.
+The Kubernetes Operator supports two of the backup strategies available in `cbbackupmgr`: _Full Only_ and _Full/Incremental_.
Complete descriptions and explanations of these strategies can be found in the xref:server:backup-restore:cbbackupmgr-strategies.adoc[`cbbackupmgr` documentation].
The examples on this page assume a backup schedule based on the _Full/Incremental_ strategy for both creating backups and performing restores.
@@ -25,7 +25,7 @@ For further information about setting file system groups see the xref:concept-pe
== Enable Automated Backup
-In order for the Autonomous Operator to manage the automated backup of a cluster, the feature must be enabled in the `CouchbaseCluster` resource.
+In order for the Kubernetes Operator to manage the automated backup of a cluster, the feature must be enabled in the `CouchbaseCluster` resource.
[source,yaml,subs="attributes,verbatim"]
----
@@ -40,7 +40,7 @@ spec:
<.> The only required field to enable automated backup is xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-backup-managed[`couchbaseclusters.spec.backup.managed`].
-<.> If the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-backup-image[`couchbaseclusters.spec.backup.image`] field is left unspecified, then it will be automatically populated with the most recent container image that was available when the installed version of the Autonomous Operator was released.
+<.> If the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-backup-image[`couchbaseclusters.spec.backup.image`] field is left unspecified, then it will be automatically populated with the most recent container image that was available when the installed version of the Kubernetes Operator was released.
The default image for open source Kubernetes comes from https://hub.docker.com/r/couchbase/operator-backup[Docker Hub^], and the default image for OpenShift comes from the https://access.redhat.com/containers/#/vendor/couchbase[Red Hat Container Catalog^].
+
When running on Red Hat OpenShift, you will want to modify this to use the Red Hat Container Catalog image.
@@ -96,15 +96,15 @@ spec:
<1> Periodic backups require `spec.strategy` to be either `full_only` or `full_incremental`
-<2> On detection of the `CouchbaseBackup` resource, the Autonomous Operator creates the correct cron jobs for the `spec.full.schedule` and the `spec.incremental.schedule`.
+<2> On detection of the `CouchbaseBackup` resource, the Kubernetes Operator creates the correct cron jobs for the `spec.full.schedule` and the `spec.incremental.schedule`.
In this example a full backup would be performed at 3:00AM on a Sunday and then an incremental backup on every other day of the week at 3:00AM.
-<3> The Autonomous Operator will also create a PersistentVolumeClaim (PVC) to store the backups and logs with the same name that is specified in `metadata.name`.
+<3> The Kubernetes Operator will also create a PersistentVolumeClaim (PVC) to store the backups and logs with the same name that is specified in `metadata.name`.
So if a PVC called "my-backup" does not yet exist in this case, one will be created.
This would also happen if for some reason the PVC was deleted.
-An immediate backup can also be triggered immediately using `CouchbaseBackup` resources with the `immediate_full` or `immediate_incremental` strategies. When the Autonomous Operator detects `CouchbaseBackup` resource with either of these strategies it will attempt to trigger a backup Job immediately. The following is a simple configuration with the minimum required fields set to take an immediate backup.
+An immediate backup can also be triggered using `CouchbaseBackup` resources with the `immediate_full` or `immediate_incremental` strategies. When the Kubernetes Operator detects a `CouchbaseBackup` resource with either of these strategies, it will attempt to trigger a backup Job immediately. The following is a simple configuration with the minimum required fields set to take an immediate backup.
[source,yaml]
----
@@ -229,7 +229,7 @@ spec:
A `CouchbaseBackupRestore` resource behaves differently from a `CouchbaseBackup` resource in that it spawns just a singular, one-time job which attempts to restore the requested backup or range of backups.
In the example above, the `CouchbaseBackupRestore` resource configuration is restoring the first backup in the repository `"cb-example-2020-02-12T19_00_03"`.
-The first backup in any repository will be a full backup since the Autonomous Operator performs a full backup of the cluster after the creation of each backup repository.
+The first backup in any repository will be a full backup since the Kubernetes Operator performs a full backup of the cluster after the creation of each backup repository.
If you don't know the name of the backup repository that you want to restore from, you can find the name without having to explore the contents of a Persistent Volume Claim by simply referring to the xref:resource/couchbasebackup.adoc#couchbasebackups-status[`couchbasebackups.status`] object of the existing `CouchbaseBackup` resource.
@@ -249,8 +249,8 @@ spec:
str: latest
----
-In this example above, the Autonomous Operator would restore a range of backups from the latest backup repository.
-The omission of the `spec.repo` field means that the Autonomous Operator will look for the most recent backup repository.
+In the example above, the Kubernetes Operator would restore a range of backups from the latest backup repository.
+The omission of the `spec.repo` field means that the Kubernetes Operator will look for the most recent backup repository.
[IMPORTANT]
====
@@ -574,7 +574,7 @@ Attempts to edit things like the name or strategy will fail.
[[online-backup-volume-resizing]]
=== Online Backup Volume Resizing
-A Backup PVC that is referenced by an existing xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resource can be resized _manually_ by the user, or _automatically_ by the Autonomous Operator.
+A Backup PVC that is referenced by an existing xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resource can be resized _manually_ by the user, or _automatically_ by the Kubernetes Operator.
[IMPORTANT]
====
@@ -591,12 +591,12 @@ To perform a manual resize, simply edit xref:resource/couchbasebackup.adoc#couch
The resize will then be performed with the next scheduled backup job.
NOTE: The underlying StorageClass must be configured to allow volume expansion in order to modify the size of the Backup PVC (as stated <>).
-Changes to the volume size may go through,but the Autonomous Operator will error until the change is reverted.
+Changes to the volume size may go through, but the Kubernetes Operator will error until the change is reverted.
[[automated-backup-volume-resizing]]
==== Automated Backup Volume Resizing
-A xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resource can be modified to allow the Autonomous Operator to automatically resize the Backup PVC once a specific percentage of space is left.
+A xref:resource/couchbasebackup.adoc[`CouchbaseBackup`] resource can be modified to allow the Kubernetes Operator to automatically resize the Backup PVC once a specific percentage of space is left.
[source,yaml]
----
@@ -632,7 +632,7 @@ In this case, if the volume is currently 80 GiB when the threshold is reached, t
When this field is not defined, no bounds are imposed.
NOTE: The underlying StorageClass must be configured to allow volume expansion in order to modify the size of the Backup PVC (as stated <>).
-Changes to the volume size may go through, but the Autonomous Operator will error until the change is reverted.
+Changes to the volume size may go through, but the Kubernetes Operator will error until the change is reverted.
=== Deleting a Backup Configuration
@@ -727,7 +727,7 @@ Like with other Couchbase custom resources, this means specifying a label for RB
<.> Tolerations are applied to pods, and allow (but do not require) the pods to be scheduled onto nodes with matching taints.
With taints and tolerations, you can grant backup pods exclusive access to specific nodes.
-In this example, if we wish to run all backup pods on a dedicated node and isolate them from the rest of the Autonomous Operator pods, we can do this by tainting a node with the key-value of `app:cbbackup` and defining a matching toleration.
+In this example, if we wish to run all backup pods on a dedicated node and isolate them from the rest of the Kubernetes Operator pods, we can do this by tainting a node with the key-value of `app:cbbackup` and defining a matching toleration.
Further reference on all of these fields can be found in the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-backup[`couchbaseclusters.spec.backup`] resource configuration.
For more information, see xref:concept-scheduling.adoc[Couchbase Scheduling and Isolation].
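+
+A minimal sketch of the taint-and-toleration pairing described above (assuming a node tainted as shown, and that `couchbaseclusters.spec.backup` accepts a standard Kubernetes `tolerations` list):
+
+[source,yaml]
+----
+# First taint the dedicated node, e.g.:
+#   kubectl taint nodes backup-node app=cbbackup:NoSchedule
+spec:
+  backup:
+    managed: true
+    tolerations:
+    - key: app
+      operator: Equal
+      value: cbbackup
+      effect: NoSchedule
+----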
diff --git a/modules/ROOT/pages/howto-client-sdks.adoc b/modules/ROOT/pages/howto-client-sdks.adoc
index b01fdc4..aad74fc 100644
--- a/modules/ROOT/pages/howto-client-sdks.adoc
+++ b/modules/ROOT/pages/howto-client-sdks.adoc
@@ -23,7 +23,7 @@ Various Couchbase clients behave differently from one another in the way they pe
This inconsistent behavior is avoided by specifying a network selection flag in the connection string that is used by the client to connect to the Couchbase cluster.
This is known as _explicit network selection_.
-The connection string examples throughout the Autonomous Operator documentation use explicit network selection to help avoid undesirable client behavior.
+The connection string examples throughout the Kubernetes Operator documentation use explicit network selection to help avoid undesirable client behavior.
Connection strings for internally-networked clients use `network=default`, while externally-networked clients use `network=external`.
One important caveat is that explicit network selection is not supported by older Couchbase clients.
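+
+For example, an internally-networked client might use a connection string like the following (the cluster name and namespace are hypothetical):
+
+[source,console]
+----
+couchbase://cb-example.default.svc?network=default
+----
+
+An externally-networked client would instead append `?network=external` to the externally advertised address of the cluster.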
diff --git a/modules/ROOT/pages/howto-couchbase-autoscaling.adoc b/modules/ROOT/pages/howto-couchbase-autoscaling.adoc
index 900d7b6..5d2f6c7 100644
--- a/modules/ROOT/pages/howto-couchbase-autoscaling.adoc
+++ b/modules/ROOT/pages/howto-couchbase-autoscaling.adoc
@@ -7,10 +7,10 @@ Configure Couchbase clusters to automatically scale based on observed usage metr
== Overview
-The Autonomous Operator supports xref:concept-mds.adoc[Multi-Dimensional Scaling] through independently-configurable server classes, which are xref:howto-couchbase-scale.adoc[manually scalable] by default.
-However, the Autonomous Operator optionally supports the automatic scaling of Couchbase clusters through an integration with the https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^].
+The Kubernetes Operator supports xref:concept-mds.adoc[Multi-Dimensional Scaling] through independently-configurable server classes, which are xref:howto-couchbase-scale.adoc[manually scalable] by default.
+However, the Kubernetes Operator optionally supports the automatic scaling of Couchbase clusters through an integration with the https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^].
-The sections on this page describe how to enable and configure auto-scaling for Couchbase clusters managed by the Autonomous Operator.
+The sections on this page describe how to enable and configure auto-scaling for Couchbase clusters managed by the Kubernetes Operator.
For a conceptual description of this feature, please refer to xref:concept-couchbase-autoscaling.adoc[].
== Preparing for Auto-scaling
@@ -32,7 +32,7 @@ Refer to xref:concept-couchbase-autoscaling.adoc#about-exposed-metrics[About Exp
Enabling auto-scaling for a particular Couchbase cluster starts with modifying the relevant xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource.
The required configuration parameters for enabling log forwarding are described in the example below.
-(The Autonomous Operator will set the default values for any fields that are not specified by the user.)
+(The Kubernetes Operator will set the default values for any fields that are not specified by the user.)
.Basic xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] Auto-Scaling Parameters
[source,yaml,subs="attributes,verbatim"]
@@ -60,19 +60,19 @@ spec:
autoscaleStabilizationPeriod: 600s # <.>
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `true` triggers the Autonomous Operator to create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for the relevant server class.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `true` triggers the Kubernetes Operator to create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for the relevant server class.
In this example, a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource will be created for the `index` server class.
Refer to xref:concept-couchbase-autoscaling.adoc#about-the-couchbase-autoscaler[About the Couchbase Autoscaler] for a conceptual overview of the role the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource plays in auto-scaling.
<.> In this example, a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource will also be created for the `query` server class.
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-autoscalestabilizationperiod[`couchbaseclusters.spec.autoscaleStabilizationPeriod`]: This field defines the xref:concept-couchbase-autoscaling.adoc#couchbase-stabilization-period[_Couchbase Stabilization Period_], which is an internal safety mechanism provided by the Autonomous Operator that is meant to help prevent over-scaling caused by metrics instability during rebalance.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-autoscalestabilizationperiod[`couchbaseclusters.spec.autoscaleStabilizationPeriod`]: This field defines the xref:concept-couchbase-autoscaling.adoc#couchbase-stabilization-period[_Couchbase Stabilization Period_], which is an internal safety mechanism provided by the Kubernetes Operator that is meant to help prevent over-scaling caused by metrics instability during rebalance.
The value specified in this field determines how long {HorizontalPodAutoscaler} resources will remain in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#implicit-maintenance-mode-deactivation[_maintenance mode_^] after the cluster finishes rebalancing.
+
In this example, the stabilization period has been set to `600s`, which means that the Horizontal Pod Autoscaler will not restart monitoring until 10 minutes after the previous rebalance has completed.
Refer to xref:concept-couchbase-autoscaling-best-practices.adoc[] for additional guidance on setting this value in production environments.
-After deploying the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification, the Autonomous Operator will create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
+After deploying the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification, the Kubernetes Operator will create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
IMPORTANT: Enabling auto-scaling for a particular server class configuration *does not* immediately subject the cluster to being auto-scaled.
The xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource simply acts as an endpoint for the {HorizontalPodAutoscaler} resource to access the pods that are selected for auto-scaling.
@@ -95,24 +95,24 @@ query.cb-example 2 query
<.> `NAME`: Each xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource is named using the format `__.__`.
The name is important as it must be referenced when <> in order to link the two resources together.
-<.> `SIZE`: This is the current number of Couchbase nodes that the Autonomous Operator is maintaining for the `index` server class.
-The Autonomous Operator keeps the size of a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource in sync with the size of its associated server class configuration.
+<.> `SIZE`: This is the current number of Couchbase nodes that the Kubernetes Operator is maintaining for the `index` server class.
+The Kubernetes Operator keeps the size of a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource in sync with the size of its associated server class configuration.
[IMPORTANT]
====
-xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources are fully managed by the Autonomous Operator and should not be manually created, modified, or deleted by the user.
-If one is manually deleted, the Autonomous Operator will re-create it.
+xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources are fully managed by the Kubernetes Operator and should not be manually created, modified, or deleted by the user.
+If one is manually deleted, the Kubernetes Operator will re-create it.
However, it is possible to edit the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] (refer to <> below).
-A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource only gets deleted by the Autonomous Operator when <> for the associated server class, or if the associated xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource is deleted altogether.
+A xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource only gets deleted by the Kubernetes Operator when <> for the associated server class, or if the associated xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource is deleted altogether.
====
[[creating-a-horizontalpodautoscaler-resource]]
== Creating a `HorizontalPodAutoscaler` Resource
-The Autonomous Operator relies on the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^] to provide auto-scaling capabilities.
+The Kubernetes Operator relies on the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler (HPA)^] to provide auto-scaling capabilities.
The Horizontal Pod Autoscaler is configured via a {HorizontalPodAutoscaler} resource, which is the primary interface by which auto-scaling is configured.
-Unlike the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Autonomous Operator, the {HorizontalPodAutoscaler} resource is created and managed by the user.
+Unlike the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Kubernetes Operator, the {HorizontalPodAutoscaler} resource is created and managed by the user.
The following configuration shows an example that scales the server class from <>.
@@ -155,7 +155,7 @@ spec:
* `kind`: This field must be set to `CouchbaseAutoscaler`.
* `name`: This field must reference the unique `name` of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource.
-As discussed in the previous section, xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resources are automatically created by the Autonomous Operator using the name format `__.__`.
+As discussed in the previous section, xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resources are automatically created by the Kubernetes Operator using the name format `__.__`.
--
+
Refer to xref:concept-couchbase-autoscaling.adoc#referencing-the-couchbase-autoscaler[Referencing the Couchbase Autoscaler] in the concept documentation for more detailed information about these fields.
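As a minimal sketch of the `scaleTargetRef` fields described above (the resource name, API versions, and replica bounds are hypothetical; the `CouchbaseAutoscaler` name follows the `__<server-class>__.__<cluster-name>__` format from the previous section):

[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: query-hpa
spec:
  scaleTargetRef:
    apiVersion: couchbase.com/v2
    kind: CouchbaseAutoscaler  # must be exactly this kind
    name: query.cb-example     # <server-class>.<cluster-name>
  minReplicas: 2
  maxReplicas: 6
----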
@@ -251,7 +251,7 @@ Clusters that have automatic down-scaling disabled can be manually scaled down b
$ kubectl scale --replicas=2 query.cb-example
----
-The above command edits the xref:concept-couchbase-autoscaling.adoc#scale-subresource[_scale subresource_] and results in the Autonomous Operator scaling the server class named `query` to a size of `2`.
+The above command edits the xref:concept-couchbase-autoscaling.adoc#scale-subresource[_scale subresource_] and results in the Kubernetes Operator scaling the server class named `query` to a size of `2`.
[[disabling-auto-scaling]]
== Disabling Auto-scaling
@@ -280,24 +280,24 @@ spec:
- query
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `false` triggers the Autonomous Operator to delete the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource that had previously been created for the relevant server class.
-In this example, the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource associated with the `index` server class will be deleted by the Autonomous Operator upon submitting the configuration.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`]: Setting this field to `false` triggers the Kubernetes Operator to delete the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource that had previously been created for the relevant server class.
+In this example, the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource associated with the `index` server class will be deleted by the Kubernetes Operator upon submitting the configuration.
-Upon deleting the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, the Autonomous Operator will no longer reconcile the current size of the server class with the recommendations of the Horizontal Pod Autoscaler, and instead the value specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-size[`couchbaseclusters.spec.servers.size`] will become the new source of truth.
+Upon deleting the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, the Kubernetes Operator will no longer reconcile the current size of the server class with the recommendations of the Horizontal Pod Autoscaler, and instead the value specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-size[`couchbaseclusters.spec.servers.size`] will become the new source of truth.
For example, if the above configuration were to be submitted, it would result in the `index` and `query` server classes each being scaled to `size: 2` from whatever size they had previously been auto-scaled to.
-It's important to note, however, that the {HorizontalPodAutoscaler} resource is not managed by the Autonomous Operator, and therefore does not get deleted along with the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource.
+It's important to note, however, that the {HorizontalPodAutoscaler} resource is not managed by the Kubernetes Operator, and therefore does not get deleted along with the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource.
It will continue to exist in the current namespace until it is manually deleted by the user.
Since the {HorizontalPodAutoscaler} resource can continue to be used if auto-scaling is subsequently re-enabled, it is important to <> of the {HorizontalPodAutoscaler} resource to ensure that it is persisted as expected.
If the desire is to only temporarily disable auto-scaling, the {HorizontalPodAutoscaler} resource can be left to persist until auto-scaling is eventually re-enabled.
-This only works if the names of both the server class and the Couchbase cluster remain the same, because when xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] is set back to `true`, the Autonomous Operator will create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource that is already xref:concept-couchbase-autoscaling.adoc#referencing-the-couchbase-autoscaler[referenced] by the existing {HorizontalPodAutoscaler} resource.
+This only works if the names of both the server class and the Couchbase cluster remain the same, because when xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] is set back to `true`, the Kubernetes Operator will create a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource that is already xref:concept-couchbase-autoscaling.adoc#referencing-the-couchbase-autoscaler[referenced] by the existing {HorizontalPodAutoscaler} resource.
In this case, the cluster will immediately become subject to the recommendations of the Horizontal Pod Autoscaler.
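One hedged way to capture the current state of the {HorizontalPodAutoscaler} resource before disabling auto-scaling (the resource name is hypothetical):

[source,console]
----
$ kubectl get hpa query-hpa -o yaml > query-hpa-backup.yaml
----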
[NOTE]
====
Deleting just the {HorizontalPodAutoscaler} resource will also have the effect of "disabling" auto-scaling.
-In this scenario, the Autonomous Operator continues to maintain the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, but it will remain at the same size that was last recommended by the Horizontal Pod Autoscaler before it was deleted.
+In this scenario, the Kubernetes Operator continues to maintain the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] resource, but it will remain at the same size that was last recommended by the Horizontal Pod Autoscaler before it was deleted.
====
== Related Links
diff --git a/modules/ROOT/pages/howto-couchbase-create.adoc b/modules/ROOT/pages/howto-couchbase-create.adoc
index 2d9b4d8..ce8ecd1 100644
--- a/modules/ROOT/pages/howto-couchbase-create.adoc
+++ b/modules/ROOT/pages/howto-couchbase-create.adoc
@@ -5,11 +5,11 @@ include::partial$constants.adoc[]
== Prerequisites
-Before you attempt to deploy a Couchbase Server cluster with the Couchbase Autonomous Operator, ensure that you have done the following:
+Before you attempt to deploy a Couchbase Server cluster with the Couchbase Kubernetes Operator, ensure that you have done the following:
* You have reviewed the xref:prerequisite-and-setup.adoc[prerequisites]
-* You have downloaded the https://www.couchbase.com/downloads[Autonomous Operator package^]
-* You have xref:install-kubernetes.adoc[deployed the admission controller and the Autonomous Operator], and both are up and running
+* You have downloaded the https://www.couchbase.com/downloads[Kubernetes Operator package^]
+* You have xref:install-kubernetes.adoc[deployed the admission controller and the Kubernetes Operator], and both are up and running
+
The package contains YAML configuration files that will help you set up a Couchbase cluster.
+
diff --git a/modules/ROOT/pages/howto-couchbase-log-forwarding.adoc b/modules/ROOT/pages/howto-couchbase-log-forwarding.adoc
index 6c1e769..3ffaa44 100644
--- a/modules/ROOT/pages/howto-couchbase-log-forwarding.adoc
+++ b/modules/ROOT/pages/howto-couchbase-log-forwarding.adoc
@@ -10,12 +10,12 @@ Couchbase Server produces a xref:server:manage:manage-logging/manage-logging.ado
By default, these logs cannot be collected from the Couchbase Server container's standard console output.
This limits the ability to integrate Couchbase logging with 3rd-party monitoring or collection technologies deployed within a Kubernetes cluster.
-However, the Autonomous Operator can optionally enable xref:concept-couchbase-logging.adoc#log-forwarding[_log forwarding_] by deploying a third party log processor in a sidecar container on each Couchbase pod, which then reads the log files and forwards them to standard console output.
+However, the Kubernetes Operator can optionally enable xref:concept-couchbase-logging.adoc#log-forwarding[_log forwarding_] by deploying a third-party log processor in a sidecar container on each Couchbase pod, which then reads the log files and forwards them to standard console output.
For this purpose, Couchbase supplies a default https://hub.docker.com/r/couchbase/fluent-bit[log processor image^] based on https://fluentbit.io/[Fluent Bit^].
The sections on this page describe how to enable and configure log forwarding using the Couchbase-supplied log processor image.
-NOTE: The Couchbase-supplied log processor container image is only supported on Kubernetes platforms in conjunction with the Couchbase Autonomous Operator.
+NOTE: The Couchbase-supplied log processor container image is only supported on Kubernetes platforms in conjunction with the Couchbase Kubernetes Operator.
NOTE: Log forwarding requires that logs be written to a persistent volume (i.e. the Couchbase deployment's `default` or `logs` volumes are backed by xref:best-practices.adoc#storage[persistent storage]).
Fully-ephemeral clusters are not supported by this feature.
@@ -27,7 +27,7 @@ NOTE: Log forwarding requires that xref:resource/couchbasecluster.adoc#couchbase
Log forwarding is fundamentally provided by two components:
-. the _log processor image_ that is used by the Autonomous Operator to deploy the `logging` sidecar container onto each Couchbase Server pod, and
+. the _log processor image_ that is used by the Kubernetes Operator to deploy the `logging` sidecar container onto each Couchbase Server pod, and
. the _log forwarding configuration_, stored in a Kubernetes Secret, that gets consumed by the `logging` sidecar container and which controls its behavior.
@@ -35,7 +35,7 @@ Enabling log forwarding for a particular Couchbase cluster involves enabling the
The required configuration parameters for enabling log forwarding are described in the example below.
Specified values represent the defaults for their respective fields unless otherwise noted in a callout.
-(The Autonomous Operator will set the default values for any fields that are not specified by the user.)
+(The Kubernetes Operator will set the default values for any fields that are not specified by the user.)
.Basic xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] Parameters That Enable Log Forwarding
[source,yaml,subs="attributes,verbatim"]
@@ -66,9 +66,9 @@ spec:
----
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-enabled[`couchbaseclusters.spec.logging.server.enabled`]: This is technically the only field that must be changed in order to enable log forwarding.
-Setting this field to `true` (defaults to `false`) instructs the Autonomous Operator to deploy the `logging` sidecar container on each pod.
+Setting this field to `true` (defaults to `false`) instructs the Kubernetes Operator to deploy the `logging` sidecar container on each pod.
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-sidecar-image[`couchbaseclusters.spec.logging.server.sidecar.image`]: This field specifies the container image that the Autonomous Operator will use for deploying the `logging` sidecar container.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-sidecar-image[`couchbaseclusters.spec.logging.server.sidecar.image`]: This field specifies the container image that the Kubernetes Operator will use for deploying the `logging` sidecar container.
This field defaults to `couchbase/{logging-version}`, which pulls the Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[log processor image^].
You may need to modify this field if your Kubernetes nodes can't pull from the Docker public registry.
@@ -86,7 +86,7 @@ $ kubectl logs cb-example-0000 logging
== Configuring Log Forwarding
As described in <>, there are two primary components that provide log forwarding: the _log processor image_ and the _log forwarding configuration_.
-The Autonomous Operator uses Couchbase-provided defaults for both of these components, which can be used _as-is_ (refer to <>).
+The Kubernetes Operator uses Couchbase-provided defaults for both of these components, which can be used _as-is_ (refer to <>).
However, these components can be further customized to better meet the needs of your environment (refer to <>).
=== Enable memory buffer limits
@@ -135,17 +135,17 @@ spec:
memory: 500Mi
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`]: When this field is set to `true` (the default), the Autonomous Operator ensures that the `logging` sidecar container always uses the default log forwarding configuration.
-The Autonomous Operator accomplishes this by creating a Secret that contains the default log forwarding configuration, which is then consumed by the `logging` sidecar container.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`]: When this field is set to `true` (the default), the Kubernetes Operator ensures that the `logging` sidecar container always uses the default log forwarding configuration.
+The Kubernetes Operator accomplishes this by creating a Secret that contains the default log forwarding configuration, which is then consumed by the `logging` sidecar container.
The name of the Secret will be whatever name is specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`].
-If a Secret already exists with the same name, the Autonomous Operator overwrites it, ensuring that the default log forwarding configuration is maintained.
+If a Secret already exists with the same name, the Kubernetes Operator overwrites it, ensuring that the default log forwarding configuration is maintained.
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`]: This field defines the name of the Secret that contains the log forwarding configuration that will be consumed by the `logging` sidecar container.
-This field defaults to `fluent-bit-config` and will be the name of the Secret that gets automatically created by the Autonomous Operator when xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] is set to `true`.
+This field defaults to `fluent-bit-config` and will be the name of the Secret that gets automatically created by the Kubernetes Operator when xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] is set to `true`.
+
IMPORTANT: If running multiple clusters in the same Kubernetes namespace, make sure to use a different name for the Secret of each cluster rather than attempting to share the same Kubernetes Secret.
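A hedged sketch of giving each cluster in a shared namespace its own configuration Secret (the cluster-specific Secret names are hypothetical):

[source,yaml]
----
# cluster A
spec:
  logging:
    server:
      enabled: true
      configurationName: fluent-bit-config-a
---
# cluster B
spec:
  logging:
    server:
      enabled: true
      configurationName: fluent-bit-config-b
----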
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-sidecar-image[`couchbaseclusters.spec.logging.server.sidecar.image`]: This field specifies the container image that the Autonomous Operator will use for deploying the `logging` sidecar container.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-sidecar-image[`couchbaseclusters.spec.logging.server.sidecar.image`]: This field specifies the container image that the Kubernetes Operator will use for deploying the `logging` sidecar container.
This field defaults to `couchbase/{logging-version}`, which pulls the Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[log processor image^].
You may need to modify this field if your Kubernetes nodes can't pull from the Docker public registry.
+
@@ -192,10 +192,10 @@ spec:
memory: 500Mi
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`]: When this field is set to `false` (defaults to `true`), the Autonomous Operator allows the Secret specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`] to contain a non-default log forwarding configuration.
-This field _must_ be set to `false` in order to use a custom log forwarding configuration, otherwise the Autonomous Operator will always reconcile and update the Secret with the _default_ log forwarding configuration.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`]: When this field is set to `false` (defaults to `true`), the Kubernetes Operator allows the Secret specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`] to contain a non-default log forwarding configuration.
+This field _must_ be set to `false` in order to use a custom log forwarding configuration, otherwise the Kubernetes Operator will always reconcile and update the Secret with the _default_ log forwarding configuration.
+
-NOTE: If xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] was ever set to `true` when you enabled log forwarding, then the Autonomous Operator will have already created the default Secret.
+NOTE: If xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] was ever set to `true` when you enabled log forwarding, then the Kubernetes Operator will have already created the default Secret.
In this situation, if you were to then set this field to `false`, the `logging` sidecar container would continue to consume the default Secret.
You could then choose to add your custom log forwarding configuration to this same Secret.
However, if you were to change xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`] to point to a different Secret, it won't get picked up until the next time a pod starts.
@@ -221,10 +221,10 @@ This field only needs to be modified if a custom image is specified in xref:reso
Since a log forwarding configuration can contain sensitive information, it is stored in a Kubernetes Secret.
The Secret is used to https://kubernetes.io/docs/concepts/configuration/secret/#restrictions[populate the volume mounted into the sidecar^] so that the actual _configuration files_ get picked up by the log processor.
-When <>, the Autonomous Operator automatically creates a Secret named `fluent-bit-config` that contains the default log forwarding configuration.
+When <>, the Kubernetes Operator automatically creates a Secret named `fluent-bit-config` that contains the default log forwarding configuration.
This configuration, which can be viewed on https://github.com/couchbase/couchbase-fluent-bit/blob/main/conf/fluent-bit.conf[GitHub^], is used by the Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[log processor image^] in order to provide features like automatic _parsing_ of a number of supported logs, automatic _filtering_ for adding useful information to the logs (e.g. pod host names), as well as optional log _redaction_ (another type of filtering).
-When <>, the Autonomous Operator does _not_ automatically create a log forwarding configuration Secret.
+When <>, the Kubernetes Operator does _not_ automatically create a log forwarding configuration Secret.
Instead, the administrator is expected to create a custom Secret themselves, and specify that Secret in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`].
The rest of this section covers some relatively simple ways in which you can customize the default log forwarding configuration, such as limiting the number of forwarded logs, and enabling log redaction.
@@ -267,7 +267,7 @@ stringData:
match couchbase.log.audit # <.>
----
-<.> `metadata.name`: The Autonomous Operator expects the default name `fluent-bit-config`.
+<.> `metadata.name`: The Kubernetes Operator expects the default name `fluent-bit-config`.
If you use a different name, then you will need to specify that name in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`].
<.> `stringData.fluent-bit.conf`: Log forwarding configuration files are defined here in the `stringData` field of the Secret (the `data` field is also https://kubernetes.io/docs/concepts/configuration/secret/[potentially an option^]).
@@ -315,7 +315,7 @@ Therefore, specifying the https://docs.fluentbit.io/manual/concepts/key-concepts
You can copy the <> Secret above and create it in your Kubernetes cluster for use with existing and future Couchbase cluster deployments.
However, please review <>, as there are a few things that you need to be aware of when creating a custom log forwarding configuration Secret.
-In particular, you should make sure that any Couchbase cluster deployment that references the Secret in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`] _also_ has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] set to `false`, otherwise the Autonomous Operator will overwrite the custom Secret with the _default_ log forwarding configuration.
+In particular, you should make sure that any Couchbase cluster deployment that references the Secret in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-configurationname[`couchbaseclusters.spec.logging.server.configurationName`] _also_ has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-server-manageconfiguration[`couchbaseclusters.spec.logging.server.manageConfiguration`] set to `false`, otherwise the Kubernetes Operator will overwrite the custom Secret with the _default_ log forwarding configuration.
[[updating-existing-custom-log-forwarding-configuration]]
=== Updating an Existing Custom Log Forwarding Configuration
@@ -397,7 +397,7 @@ LUA scripting has an overhead and it is recommended to use at least one extra Fl
<.> `redaction.salt`: You can optionally define a custom salt for the hashing of tagged data.
The salt is specified in the configuration Secret as a separate file/value (in the example, the salt value is `salty`).
-If a salt is not specified, the Autonomous Operator will default to using the cluster name as the salt.
+If a salt is not specified, the Kubernetes Operator will default to using the cluster name as the salt.
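Putting the pieces together, a custom salt can be supplied alongside the Fluent Bit configuration in the Secret, along the lines of this abbreviated sketch (the configuration body is elided; the `redaction.salt` key name follows the callout above):

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: fluent-bit-config  # default name expected by the Operator
stringData:
  fluent-bit.conf: |
    # ... Fluent Bit SERVICE/INPUT/FILTER/OUTPUT sections ...
  redaction.salt: salty    # custom salt; omit to fall back to the cluster name
----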
After <> the log forwarding configuration with the <> above, you'll notice output similar to the following, showing the redacted strings as hashes:
diff --git a/modules/ROOT/pages/howto-guide-couchbase-user-rbac.adoc b/modules/ROOT/pages/howto-guide-couchbase-user-rbac.adoc
index 385bdf5..035deb5 100644
--- a/modules/ROOT/pages/howto-guide-couchbase-user-rbac.adoc
+++ b/modules/ROOT/pages/howto-guide-couchbase-user-rbac.adoc
@@ -1,5 +1,5 @@
= How-to Guide: Couchbase User RBAC
-:description: A how-to guide on configuring Couchbase user authentication and authorization using the Autonomous Operator.
+:description: A how-to guide on configuring Couchbase user authentication and authorization using the Kubernetes Operator.
:page-toclevels: 2
[abstract]
@@ -7,17 +7,17 @@
== Overview
-This guide will describe how to create authenticated users and bind them to specific roles to provide multiple levels of authorization using the Autonomous Operator.
+This guide will describe how to create authenticated users and bind them to specific roles to provide multiple levels of authorization using the Kubernetes Operator.
Users can be authenticated either by Couchbase's built-in authentication system or by an external authentication system such as OpenLDAP.
-The Autonomous Operator refers to Couchbase Authentication as the `local` domain, and LDAP Authentication as the `external` domain.
+The Kubernetes Operator refers to Couchbase Authentication as the `local` domain, and LDAP Authentication as the `external` domain.
This guide will focus on using the `local` domain for authentication.
== Prerequisites
* If you are new to role-based access control in Couchbase, refer to the xref:server:learn:security/roles.adoc[Roles Page]
-* If you are new to Couchbase Autonomous Operator (CAO), refer to the xref:overview.adoc[Autonomous Operator Introduction]
+* If you are new to Couchbase Kubernetes Operator (CAO), refer to the xref:overview.adoc[Kubernetes Operator Introduction]
* Couchbase Scopes and Collections was added in Version 7.0.
Refer to the xref:concept-scopes-and-collections.adoc[Couchbase Scopes and Collections] page to learn more about these.
diff --git a/modules/ROOT/pages/howto-guide-data-topology-sync.adoc b/modules/ROOT/pages/howto-guide-data-topology-sync.adoc
index af90313..3bbd07a 100644
--- a/modules/ROOT/pages/howto-guide-data-topology-sync.adoc
+++ b/modules/ROOT/pages/howto-guide-data-topology-sync.adoc
@@ -1,16 +1,16 @@
= How-to Guide: Data Topology Synchronization
-:description: A how-to guide on data topology synchronization with Couchbase Autonomous Operator.
+:description: A how-to guide on data topology synchronization with Couchbase Kubernetes Operator.
[abstract]
{description}
== Overview
-In the following guide, we'll show you how to discover the configuration of a Couchbase cluster in the form of Kubernetes resources, and how the Autonomous Operator manages those resources.
+In the following guide, we'll show you how to discover the configuration of a Couchbase cluster in the form of Kubernetes resources, and how the Kubernetes Operator manages those resources.
== Prerequisites
-* If you are new to Couchbase Autonomous Operator (CAO), refer to the xref:overview.adoc[Autonomous Operator Introduction]
+* If you are new to Couchbase Kubernetes Operator (CAO), refer to the xref:overview.adoc[Kubernetes Operator Introduction]
* Couchbase Scopes and Collections was added in Version 7.0.
Refer to the xref:concept-scopes-and-collections.adoc[Couchbase Scopes and Collections] page to learn more about these.
diff --git a/modules/ROOT/pages/howto-guide-save-restore.adoc b/modules/ROOT/pages/howto-guide-save-restore.adoc
index a789ad9..6a5b5aa 100644
--- a/modules/ROOT/pages/howto-guide-save-restore.adoc
+++ b/modules/ROOT/pages/howto-guide-save-restore.adoc
@@ -1,5 +1,5 @@
= How-to Guide: Data Topology Save and Restore
-:description: A how-to guide on using the "Write Once and Create Anywhere" capability of Autonomous Operator.
+:description: A how-to guide on using the "Write Once and Create Anywhere" capability of Kubernetes Operator.
:page-toclevels: 2
[abstract]
@@ -7,11 +7,11 @@
== Overview
-This guide will show you how to save the configuration of one Couchbase cluster, and how to restore it on a different Couchbase cluster using the "Write Once and Create Anywhere" capability of Couchbase Autonomous Operator (CAO).
+This guide will show you how to save the configuration of one Couchbase cluster, and how to restore it on a different Couchbase cluster using the "Write Once and Create Anywhere" capability of Couchbase Kubernetes Operator (CAO).
== Prerequisites
-* If you are new to Couchbase Autonomous Operator (CAO), refer to the xref:overview.adoc[Autonomous Operator Introduction]
+* If you are new to Couchbase Kubernetes Operator (CAO), refer to the xref:overview.adoc[Kubernetes Operator Introduction]
* Couchbase Scopes and Collections was added in Version 7.0.
Refer to the xref:concept-scopes-and-collections.adoc[Couchbase Scopes and Collections] page to learn more about these
diff --git a/modules/ROOT/pages/howto-guide-xdcr-scopes-collections.adoc b/modules/ROOT/pages/howto-guide-xdcr-scopes-collections.adoc
index 040c56a..c3002d8 100644
--- a/modules/ROOT/pages/howto-guide-xdcr-scopes-collections.adoc
+++ b/modules/ROOT/pages/howto-guide-xdcr-scopes-collections.adoc
@@ -1,5 +1,5 @@
= How-to Guide: XDCR with Scopes and Collections
-:description: A how-to guide on configuring cross data center replication (XDCR) using the Autonomous Operator.
+:description: A how-to guide on configuring cross data center replication (XDCR) using the Kubernetes Operator.
:page-toclevels: 2
[abstract]
@@ -16,12 +16,12 @@ This guide will take you through a few examples on how to configure XDCR.
* If you are new to role-based access control in Couchbase, refer to the xref:server:learn:security/roles.adoc[Roles Page]
-* If you are new to Couchbase Autonomous Operator (CAO), refer to the xref:overview.adoc[Autonomous Operator Introduction]
+* If you are new to Couchbase Kubernetes Operator (CAO), refer to the xref:overview.adoc[Kubernetes Operator Introduction]
* Couchbase Scopes and Collections was added in Version 7.0.
Refer to the xref:concept-scopes-and-collections.adoc[Couchbase Scopes and Collections] page to learn more about these
-* To install the Couchbase Autonomous Operator please refer to xref:install-kubernetes.adoc[Install Operator on Kubernetes] or xref:install-openshift.adoc[Install Operator on OpenShift]
+* To install the Couchbase Kubernetes Operator, refer to xref:install-kubernetes.adoc[Install Operator on Kubernetes] or xref:install-openshift.adoc[Install Operator on OpenShift]
== Configure XDCR
diff --git a/modules/ROOT/pages/howto-manage-couchbase-logging.adoc b/modules/ROOT/pages/howto-manage-couchbase-logging.adoc
index 917f23b..a8d3326 100644
--- a/modules/ROOT/pages/howto-manage-couchbase-logging.adoc
+++ b/modules/ROOT/pages/howto-manage-couchbase-logging.adoc
@@ -2,12 +2,12 @@
include::partial$constants.adoc[]
[abstract]
-The Autonomous Operator can be configured to manage certain aspects of Couchbase Server logging, and comes with tools for collecting Couchbase Server logs.
+The Kubernetes Operator can be configured to manage certain aspects of Couchbase Server logging, and comes with tools for collecting Couchbase Server logs.
== Overview
The Couchbase Server application records important events, and saves the details to a xref:server:manage:manage-logging/manage-logging.adoc#log-file-listing[variety] of log files.
-These logs are distinct from the logs that are generated by the Autonomous Operator xref:concept-operator-logging.adoc[itself].
+These logs are distinct from the logs that are generated by the Kubernetes Operator xref:concept-operator-logging.adoc[itself].
Logging is performed continuously within each Couchbase Server container in a Couchbase deployment.
When using xref:concept-persistent-volumes.adoc[persistent volumes] -- as is xref:best-practices.adoc#storage[recommended] for all production deployments -- log files are written to either the `default` or `logs` volume.
@@ -41,7 +41,7 @@ _Without active intervention, rotated audit logs will eventually consume all ava
=== About Managed Audit Logging
_Managed_ audit logging involves the administrator directly enabling audit logging in Couchbase Server, and requires that the administrator actively manage the resultant audit log files.
-Managed audit logging can be implemented after the Couchbase cluster has been successfully deployed by the Autonomous Operator.
+Managed audit logging can be implemented after the Couchbase cluster has been successfully deployed by the Kubernetes Operator.
Once the cluster is deployed, the administrator can enable and configure audit logging directly through the Couchbase UI, CLI, or REST API.
Refer to xref:server:manage:manage-security/manage-auditing.adoc[] in the Couchbase Server documentation for more information.
@@ -54,12 +54,12 @@ It is expected that the administrator will implement an automated system for exp
NOTE: Automated audit logging requires that logs be written to a persistent volume (i.e. the Couchbase deployment's `default` or `logs` volumes are backed by xref:best-practices.adoc#storage[persistent storage]).
Fully-ephemeral clusters are not supported by this feature.
-_Automated_ audit logging involves having the Autonomous Operator handle the audit log configuration and optionally manage the resultant audit log files.
-An audit logging configuration can be specified in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification, allowing the Autonomous Operator to set up audit logging in Couchbase Server, and optionally manage the resultant audit log files.
+_Automated_ audit logging involves having the Kubernetes Operator handle the audit log configuration and optionally manage the resultant audit log files.
+An audit logging configuration can be specified in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification, allowing the Kubernetes Operator to set up audit logging in Couchbase Server, and optionally manage the resultant audit log files.
The required configuration parameters for enabling audit logging are described in the example below.
Specified values represent the defaults for their respective fields unless otherwise noted in a callout.
-(The Autonomous Operator will set the default values for any fields that are not specified by the user.)
+(The Kubernetes Operator will set the default values for any fields that are not specified by the user.)
[source,yaml,subs="attributes,verbatim"]
----
@@ -98,14 +98,14 @@ This is technically the only field that is required to configure garbage collect
in Couchbase Server 7.2.4+. Note, however, that garbage collection can only be enabled if xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-audit-enabled[`couchbaseclusters.spec.logging.audit.enabled`] is also set to `true`.
After enabling automated audit logging, you should take care only to use the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource specification for making further modifications to the audit logging configuration.
-Manual changes that are made to the configuration via the Couchbase UI, CLI, or REST API (such as changing the audit log directory) are not prevented by the Autonomous Operator, and can cause audit logging failures.
+Manual changes that are made to the configuration via the Couchbase UI, CLI, or REST API (such as changing the audit log directory) are not prevented by the Kubernetes Operator, and can cause audit logging failures.
-NOTE: Changing the location of the audit log is not supported, as it would break the ability for the Autonomous Operator to xref:concept-couchbase-logging.adoc#log-forwarding[forward] audit logs.
+NOTE: Changing the location of the audit log is not supported, as it would break the ability for the Kubernetes Operator to xref:concept-couchbase-logging.adoc#log-forwarding[forward] audit logs.
==== Sidecar Garbage Collection
kbd:[deprecated]
-Native audit log cleanup was introduced in Couchbase Server 7.2.4, for earlier versions the Couchbase Autonomous Operator can deploy a sidecar container with the Couchbase Server to cleanup rotated logs.
+Native audit log cleanup was introduced in Couchbase Server 7.2.4. For earlier versions, the Couchbase Kubernetes Operator can deploy a sidecar container alongside Couchbase Server to clean up rotated logs.
[source,yaml,subs="attributes,verbatim"]
----
@@ -157,7 +157,7 @@ It is highly recommended that you use, or migrate to, one of the other modes.
== Collecting Logs
-The Autonomous Operator package is distributed with a support tool -- xref:tools/cao.adoc[`cao`] -- which can be used to collect logs from Couchbase Server deployments.
+The Kubernetes Operator package is distributed with a support tool -- xref:tools/cao.adoc[`cao`] -- which can be used to collect logs from Couchbase Server deployments.
The xref:tools/cao.adoc[`cao`] tool performs _explicit logging_, which means it captures a snapshot of the current logs at the time the tool is run.
Explicit logging can either be performed for all nodes in the cluster, or for one or more individual nodes.
The results are saved as zip files: each zip file contains the log-data generated for an individual node.
@@ -172,10 +172,10 @@ To avoid these limitations, you can choose to configure xref:concept-couchbase-l
[[collect-logs-with-cao]]
=== Collecting Logs with `cao`
-When run without any flags or options, the xref:tools/cao.adoc[`cao`] tool collects a filtered list of the Kubernetes resources associated with the Autonomous Operator in a given namespace.
+When run without any flags or options, the xref:tools/cao.adoc[`cao`] tool collects a filtered list of the Kubernetes resources associated with the Kubernetes Operator in a given namespace.
However, to also collect logs from Couchbase Server deployments, the xref:tools/cao.adoc#cao-collect-logs[`--collectinfo`] flag is required.
-When xref:tools/cao.adoc[`cao`] is run unscoped with the xref:tools/cao.adoc#cao-collect-logs[`--collectinfo`] flag, it will look for logs from all Couchbase Server deployments that are managed by the Autonomous Operator.
+When xref:tools/cao.adoc[`cao`] is run unscoped with the xref:tools/cao.adoc#cao-collect-logs[`--collectinfo`] flag, it will look for logs from all Couchbase Server deployments that are managed by the Kubernetes Operator.
However, you can scope the command to a particular cluster in order to look for just the logs from that cluster.
Run the following command to begin the log collection process for the Couchbase Server deployment named `cb-example`:
@@ -234,7 +234,7 @@ The xref:tools/cao.adoc[`cao`] tool collects logs from all log volumes in a Couc
Detached PVCs can occur more commonly when running xref:best-practices.adoc#ephemeral-clusters[ephemeral clusters].
When a detached PVC is encountered, xref:tools/cao.adoc[`cao`] will automatically create a temporary Couchbase Server pod, mount the log volume to it, and then run xref:server:cli:cbcollect-info-tool.adoc[`cbcollect_info`] to collect the logs.
-Once the logs have been downloaded, the Autonomous Operator will delete the temporary pod (but will _not_ delete the PVC).
+Once the logs have been downloaded, the Kubernetes Operator will delete the temporary pod (but will _not_ delete the PVC).
[TIP]
====
@@ -261,16 +261,16 @@ In most situations, the xref:tools/cao.adoc[`cao`] tool will allow logs to be co
Even in cases where logs are found to exist on detached PersistentVolumeClaims (PVCs), xref:tools/cao.adoc[`cao`] will <> by creating temporary pods that mount the PVCs.
However, a case may arise where xref:tools/cao.adoc[`cao`] fails to collect logs from a detached PVC.
-For example, if a stateful service -- such as Data, Index, or Analytics -- were to crash, and then continue to crash each time the Autonomous Operator recovered the pod, xref:tools/cao.adoc[`cao`] would unlikely be able to collect the Couchbase Server logs from the pod without manual intervention.
+For example, if a stateful service -- such as Data, Index, or Analytics -- were to crash, and then continue to crash each time the Kubernetes Operator recovered the pod, xref:tools/cao.adoc[`cao`] would be unlikely to collect the Couchbase Server logs from the pod without manual intervention.
This is because in this hypothetical situation, the pod does not remain alive long enough for xref:tools/cao.adoc[`cao`] to perform normal log collection; xref:tools/cao.adoc[`cao`] also doesn't see the PVC as detached, and therefore doesn't automatically create a temporary pod to collect the logs as it normally would for a detached PVC.
The general process for manual log collection is as follows:
-. Pause the Autonomous Operator's management of the Couchbase cluster by setting xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-paused[`couchbaseclusters.spec.paused`] to `true`.
+. Pause the Kubernetes Operator's management of the Couchbase cluster by setting xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-paused[`couchbaseclusters.spec.paused`] to `true`.
+
Pausing management of the Couchbase cluster serves two purposes.
-The first purpose is that it will prevent the Autonomous Operator from attempting to recover the malfunctioning pod after Kubernetes has killed it.
-The second purpose is that it allows you to create a temporary replacement pod and mount the the detached PVCs to it without the Autonomous Operator interfering.
+The first purpose is that it will prevent the Kubernetes Operator from attempting to recover the malfunctioning pod after Kubernetes has killed it.
+The second purpose is that it allows you to create a temporary replacement pod and mount the detached PVCs to it without the Kubernetes Operator interfering.
. Create a temporary pod resource with the persistent volumes mounted.
The basic template will look like the following:
@@ -379,4 +379,4 @@ $ kubectl cp default/cb-example-0005:/tmp/cbcollectinfo-default-cb-example-0005-
$ kubectl delete pod cb-example-0005
----
-. Resume the Autonomous Operator's management of the Couchbase cluster either by removing xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-paused[`couchbaseclusters.spec.paused`] or setting it to `false`.
+. Resume the Kubernetes Operator's management of the Couchbase cluster either by removing xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-paused[`couchbaseclusters.spec.paused`] or setting it to `false`.
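As a sketch, the pause step at the start of this procedure and the resume step at the end amount to toggling a single field in the cluster spec (the cluster name `cb-example` is assumed):

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  paused: true  # set before manual collection; remove or set to false to resume
----

Applying this change with `kubectl apply` (or `kubectl patch`) pauses reconciliation; reverting it hands control of the cluster back to the Operator.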
diff --git a/modules/ROOT/pages/howto-manage-operator-logging.adoc b/modules/ROOT/pages/howto-manage-operator-logging.adoc
index 74cb7ac..0a40c4d 100644
--- a/modules/ROOT/pages/howto-manage-operator-logging.adoc
+++ b/modules/ROOT/pages/howto-manage-operator-logging.adoc
@@ -1,35 +1,35 @@
-= Autonomous Operator Troubleshooting
+= Kubernetes Operator Troubleshooting
:page-aliases: logs-troubleshooting.adoc, howto-troubleshooting.adoc
include::partial$constants.adoc[]
[abstract]
-If you run into issues with the Autonomous Operator, you can troubleshoot by examining the logs and events that it generates.
+If you run into issues with the Kubernetes Operator, you can troubleshoot by examining the logs and events that it generates.
-The Autonomous Operator generates logs that can be used for auditing and troubleshooting purposes.
-This page describes logging that is specific to the Autonomous Operator itself.
+The Kubernetes Operator generates logs that can be used for auditing and troubleshooting purposes.
+This page describes logging that is specific to the Kubernetes Operator itself.
For information about Couchbase cluster logging, refer to xref:howto-manage-couchbase-logging.adoc[].
== Overview
-The Autonomous Operator generates xref:concept-operator-logging.adoc[logs] that include information about itself and the various other Kubernetes components that make up the xref:concept-operator.adoc[Operator deployment].
+The Kubernetes Operator generates xref:concept-operator-logging.adoc[logs] that include information about itself and the various other Kubernetes components that make up the xref:concept-operator.adoc[Operator deployment].
These logs are distinct from the logs that are generated by the xref:concept-couchbase-logging.adoc[Couchbase Server application].
-This page provides information about how to collect and scrutinize logging information that is produced by the Autonomous Operator.
-When troubleshooting the Autonomous Operator, it is important to first rule out Kubernetes itself as the root cause of the problem.
+This page provides information about how to collect and scrutinize logging information that is produced by the Kubernetes Operator.
+When troubleshooting the Kubernetes Operator, it is important to first rule out Kubernetes itself as the root cause of the problem.
The Kubernetes https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/[Troubleshooting Guide^] contains a great deal of helpful information about debugging applications within a Kubernetes cluster.
-Familiarity with the xref:reference-operator-configuration.adoc[Operator's configuration settings] can be helpful when troubleshooting the Autonomous Operator.
+Familiarity with the xref:reference-operator-configuration.adoc[Operator's configuration settings] can be helpful when troubleshooting the Kubernetes Operator.
-== Collecting Autonomous Operator Logs
+== Collecting Kubernetes Operator Logs
-Using `kubectl` or `oc`, you can choose to print the Autonomous Operator logs to to standard console output.
+Using `kubectl` or `oc`, you can choose to print the Kubernetes Operator logs to standard console output.
[{tabs}]
====
Kubernetes::
+
--
-Start by getting the name of the Autonomous Operator pod.
+Start by getting the name of the Kubernetes Operator pod.
[source,console]
----
@@ -55,20 +55,20 @@ time="2018-01-23T22:56:51Z" level=info msg="I'm the leader, attempt to start the
time="2018-01-23T22:56:51Z" level=info msg="Creating the couchbase-operator controller" module=main
----
-Alternatively, you can specify the Autonomous Operator deployment to get the logs.
+Alternatively, you can specify the Kubernetes Operator deployment to get the logs.
[source,console]
----
$ kubectl logs deployment/couchbase-operator
----
-Since there is only one instance of the Autonomous Operator in the deployment, the the underlying command will automatically select the correct pod and print the logs.
+Since there is only one instance of the Kubernetes Operator in the deployment, the underlying command will automatically select the correct pod and print the logs.
--
OpenShift::
+
--
-Start by getting the name of the Autonomous Operator pod.
+Start by getting the name of the Kubernetes Operator pod.
[source,console]
----
@@ -94,25 +94,25 @@ time="2018-01-23T22:56:51Z" level=info msg="I'm the leader, attempt to start the
time="2018-01-23T22:56:51Z" level=info msg="Creating the couchbase-operator controller" module=main
----
-Alternatively, you can specify the Autonomous Operator deployment to get the logs.
+Alternatively, you can specify the Kubernetes Operator deployment to get the logs.
[source,console]
----
$ oc logs deployment/couchbase-operator
----
-Since there is only one instance of the Autonomous Operator in the deployment, the the underlying command will automatically select the correct pod and print the logs.
+Since there is only one instance of the Kubernetes Operator in the deployment, the underlying command will automatically select the correct pod and print the logs.
--
====
-If you're troubleshooting the Autonomous Operator, watch for the following messages which indicate that the Operator is unable to reconcile a Couchbase cluster into a desired state:
+If you're troubleshooting the Kubernetes Operator, watch for the following messages which indicate that the Operator is unable to reconcile a Couchbase cluster into a desired state:
* Logs with `level=error`
* Operator is unable to get cluster state after N retries
-== Profiling the Autonomous Operator
+== Profiling the Kubernetes Operator
-For more advanced troubleshooting, the Autonomous Operator supports the Go language https://golang.org/pkg/net/http/pprof/[pprof] feature and serves profiling data on its default listen address `localhost:8080`.
+For more advanced troubleshooting, the Kubernetes Operator supports the Go language https://golang.org/pkg/net/http/pprof/[pprof] feature and serves profiling data on its default listen address `localhost:8080`.
You can access this endpoint by running a remote shell or forwarding the port to your local system.
[{tabs}]
@@ -180,7 +180,7 @@ $ go tool pprof localhost:8080/debug/pprof/heap
Kubernetes Events provide insights into what is happening inside a Kubernetes cluster. They record significant occurrences and changes in the state of resources, such as the creation, deletion, or failure of pods, nodes, services, and other Kubernetes objects.
-They can be used to monitor changes that have occurred in the cluster, and can be helpful when troubleshooting issues with the Autonomous Operator. However, they expire after a certain period of time, typically one hour. You can use the https://github.com/couchbase/couchbase-k8s-event-collector[Kubernetes Event Collector] tool to collect and store events for longer periods of time.
+They can be used to monitor changes that have occurred in the cluster, and can be helpful when troubleshooting issues with the Kubernetes Operator. However, they expire after a certain period of time, typically one hour. You can use the https://github.com/couchbase/couchbase-k8s-event-collector[Kubernetes Event Collector] tool to collect and store events for longer periods of time.
The Kubernetes Event Collector watches for Kubernetes events within a namespace and stores them in a buffer which can be stashed. It can be deployed and configured using Helm.
diff --git a/modules/ROOT/pages/howto-operator-upgrade.adoc b/modules/ROOT/pages/howto-operator-upgrade.adoc
index de6d687..35c3d5b 100644
--- a/modules/ROOT/pages/howto-operator-upgrade.adoc
+++ b/modules/ROOT/pages/howto-operator-upgrade.adoc
@@ -2,7 +2,7 @@
:page-aliases: upgrading-the-operator
:tabs:
-Upgrading the Couchbase Autonomous Operator is a five-step process:
+Upgrading the Couchbase Kubernetes Operator is a five-step process:
- <>
- <>
diff --git a/modules/ROOT/pages/howto-persistent-volumes.adoc b/modules/ROOT/pages/howto-persistent-volumes.adoc
index 0bcff8d..92b4bad 100644
--- a/modules/ROOT/pages/howto-persistent-volumes.adoc
+++ b/modules/ROOT/pages/howto-persistent-volumes.adoc
@@ -99,7 +99,7 @@ You can also modify the storage class and the Operator will detect and upgrade t
Normally, the Couchbase cluster must undergo a rolling upgrade whenever the volume size is modified.
However, persistent volumes that are already in use by Couchbase clusters can optionally be _expanded in-place_ without needing to perform an upgrade on the underlying storage subsystem.
-The Autonomous Operator achieves this by working in conjunction with https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[Kubernetes Persistent Volume Expansion^] to claim additional storage for running pods without any downtime.
+The Kubernetes Operator achieves this by working in conjunction with https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[Kubernetes Persistent Volume Expansion^] to claim additional storage for running pods without any downtime.
[source,yaml]
----
@@ -120,12 +120,12 @@ spec:
storage: 2Gi # <.>
----
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-enableonlinevolumeexpansion[`couchbaseclusters.spec.enableOnlineVolumeExpansion`]: This field must be set to `true` for the Autonomous Operator to allow online volume expansion.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-enableonlinevolumeexpansion[`couchbaseclusters.spec.enableOnlineVolumeExpansion`]: This field must be set to `true` for the Kubernetes Operator to allow online volume expansion.
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-onlinevolumeexpansiontimeoutinmins[`couchbaseclusters.spec.onlineVolumeExpansionTimeoutInMins`]: This field configures a retry timeout, in minutes, for the expansion. The value can be between 0 and 30 minutes and defaults to 10; specify it without a unit.
-<.> Modifying the storage size will trigger the Autonomous Operator to detect that existing persistent volume claims do not match the intended size (as it normally would).
-However, if the new value is larger than the previous value, the Autonomous Operator will attempt an online expansion of the volume.
+<.> Modifying the storage size will trigger the Kubernetes Operator to detect that existing persistent volume claims do not match the intended size (as it normally would).
+However, if the new value is larger than the previous value, the Kubernetes Operator will attempt an online expansion of the volume.
You can verify the status of the volume expansion task by checking events on the `CouchbaseCluster` resource:
@@ -160,7 +160,7 @@ It's important to note that setting xref:resource/couchbasecluster.adoc#couchbas
Please review the following notes before attempting online volume expansion:
* The underlying `StorageClass` must be capable of performing https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[volume expansions^] (`allowVolumeExpansion=true`).
-** The Autonomous Operator has no way of detecting if the underlying storage class supports volume expansion.
+** The Kubernetes Operator has no way of detecting if the underlying storage class supports volume expansion.
Therefore, it is important that you confirm that your volume type supports volume expansion before enabling online volume expansion in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource.
+
In general, block storage volume types such as `GCE-PD`, `AWS-EBS`, `Azure Disk`, `Cinder`, and `Ceph RBD` typically require a full file system expansion, whereas network-attached file systems like `Glusterfs` and `Azure File` can always be expanded online.
@@ -168,12 +168,12 @@ In general, block storage volume types such as `GCE-PD`, `AWS-EBS`, A`zure Disk`
* Volume size can only be _increased_.
** Kubernetes does not currently support online reductions in volume size.
** New storage sizes can only be specified in xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-volumeclaimtemplates[`volumeClaimTemplates.spec.resources.requests.storage`], and must be a larger value than the current size (values smaller than the current size will be blocked by the dynamic admission controller).
-Note that persistent volumes are fully managed by the Autonomous Operator, therefore any manual changes to the PersistentVolumeClaim size made outside of the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] spec will be reverted.
+Note that persistent volumes are fully managed by the Kubernetes Operator, therefore any manual changes to the PersistentVolumeClaim size made outside of the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] spec will be reverted.
** To reduce volume size, set xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-enableonlinevolumeexpansion[`couchbaseclusters.spec.enableOnlineVolumeExpansion`] back to `false` and proceed with <> (which requires a rolling upgrade).
-* If online volume expansion fails for some reason, the Autonomous Operator will fall back to a traditional rolling upgrade as a means to expand volumes.
+* If online volume expansion fails for some reason, the Kubernetes Operator will fall back to a traditional rolling upgrade as a means to expand volumes.
-* The Autonomous Operator cannot detect if https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim[resizing in-use PersistentVolumeClaims (`ExpandInUsePersistentVolumes`)^] is enabled on the current Kubernetes cluster.
+* The Kubernetes Operator cannot detect if https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim[resizing in-use PersistentVolumeClaims (`ExpandInUsePersistentVolumes`)^] is enabled on the current Kubernetes cluster.
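For reference, the fields discussed above fit together in a `CouchbaseCluster` fragment along these lines (a minimal sketch; the cluster name, storage class name, and size are illustrative, and the underlying `StorageClass` must have `allowVolumeExpansion=true`):

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  # Allow the Operator to expand volumes without a rolling upgrade
  enableOnlineVolumeExpansion: true
  volumeClaimTemplates:
  - metadata:
      name: couchbase
    spec:
      storageClassName: expandable-sc # hypothetical expandable StorageClass
      resources:
        requests:
          storage: 20Gi # may only ever be increased, never reduced
----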
diff --git a/modules/ROOT/pages/howto-prometheus.adoc b/modules/ROOT/pages/howto-prometheus.adoc
index cd6de5d..e311ecb 100644
--- a/modules/ROOT/pages/howto-prometheus.adoc
+++ b/modules/ROOT/pages/howto-prometheus.adoc
@@ -2,7 +2,7 @@
include::partial$constants.adoc[]
[abstract]
-You can set up the Autonomous Operator to use the Couchbase Server's native support for metrics collection, for Couchbase Server versions newer than Version 7.0 +
+You can set up the Kubernetes Operator to use Couchbase Server's native support for metrics collection, available in Couchbase Server version 7.0 and later. +
Couchbase native support for metrics collection exposes a Prometheus-compatible endpoint on all pods without the need for third-party tools.
IMPORTANT: Couchbase native support is available in Couchbase Server version 7.0 and higher, and is the recommended way to collect metrics with Prometheus.
@@ -153,15 +153,15 @@ Use of Prometheus exporter is deprecated and will be removed in a future release
It is highly recommended that you use the native Couchbase Server Metrics endpoint.
====
-The Autonomous Operator provides Prometheus integration for collecting and exposing Couchbase Server metrics via the https://github.com/couchbase/couchbase-exporter[Couchbase Prometheus Exporter^].
+The Kubernetes Operator provides Prometheus integration for collecting and exposing Couchbase Server metrics via the https://github.com/couchbase/couchbase-exporter[Couchbase Prometheus Exporter^].
The Couchbase Exporter is a https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#how-pods-manage-multiple-containers["sidecar" container^] that is injected into each Couchbase Server pod.
Prometheus metrics collection is enabled in the `CouchbaseCluster` resource.
The configuration allows you to specify a Couchbase-provided container image that contains the Prometheus Exporter.
-The Autonomous Operator injects the image as a https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#how-pods-manage-multiple-containers["sidecar" container^] in each Couchbase Server pod.
+The Kubernetes Operator injects the image as a https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#how-pods-manage-multiple-containers["sidecar" container^] in each Couchbase Server pod.
-NOTE: The Couchbase-supplied Prometheus Exporter container image is only supported on Kubernetes platforms in conjunction with the Couchbase Autonomous Operator.
+NOTE: The Couchbase-supplied Prometheus Exporter container image is only supported on Kubernetes platforms in conjunction with the Couchbase Kubernetes Operator.
== Couchbase Exporter Configuration
@@ -182,7 +182,7 @@ spec:
<.> Setting xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-monitoring-prometheus-enabled[`couchbaseclusters.spec.monitoring.prometheus.enabled`] to `true` enables injection of the sidecar into Couchbase Server pods.
-<.> If the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-monitoring-prometheus-image[`couchbaseclusters.spec.monitoring.prometheus.image`] field is left unspecified, then the dynamic admission controller will automatically populate it with the most recent container image that was available when the installed version of the Autonomous Operator was released.
+<.> If the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-monitoring-prometheus-image[`couchbaseclusters.spec.monitoring.prometheus.image`] field is left unspecified, then the dynamic admission controller will automatically populate it with the most recent container image that was available when the installed version of the Kubernetes Operator was released.
The default image for open source Kubernetes comes from https://hub.docker.com/r/couchbase/exporter[Docker Hub^], and the default image for OpenShift comes from https://access.redhat.com/containers/#/vendor/couchbase[Red Hat Container Catalog^].
+
If pulling directly from the Red Hat Container Catalog, then the path will be something similar to `registry.connect.redhat.com/couchbase/exporter:{prometheus-version}` (you can refer to the catalog for the most recent images).
@@ -237,12 +237,12 @@ For instructions on how to create a custom metrics configuration and build it in
Once configured, active metrics can be collected from each Couchbase Server pod on port 9091.
-The Autonomous Operator does not create or manage resources for third-party software.
+The Kubernetes Operator does not create or manage resources for third-party software.
Prometheus scrape targets must be manually created by the user.
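Since the Operator does not create scrape targets, a standalone Prometheus deployment needs its own configuration. A minimal sketch, assuming pod-based service discovery and the exporter port 9091 noted above (the job name is illustrative):

[source,yaml]
----
scrape_configs:
- job_name: couchbase-exporter # illustrative job name
  kubernetes_sd_configs:
  - role: pod # discover all pods; restrict with namespaces.names if desired
  relabel_configs:
  # Keep only containers that expose the exporter port
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    regex: "9091"
    action: keep
----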
=== Couchbase Cluster Auto-scaling
-The Autonomous Operator supports xref:concept-couchbase-autoscaling.adoc[auto-scaling Couchbase clusters].
+The Kubernetes Operator supports xref:concept-couchbase-autoscaling.adoc[auto-scaling Couchbase clusters].
In order to properly take advantage of this feature, users must xref:concept-couchbase-autoscaling.adoc#couchbase-metrics[expose Couchbase metrics through the Kubernetes custom metrics API].
Discovery of available metrics can be performed through Prometheus https://github.com/couchbase/couchbase-exporter#queries[queries^].
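Once a metrics adapter is in place, you can confirm which metrics have been exposed through the custom metrics API before wiring them into an autoscaler (this sketch assumes `jq` is installed; omit the pipe to see the raw JSON):

[source,console]
----
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq '.resources[].name'
----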
diff --git a/modules/ROOT/pages/howto-tls-passphrase.adoc b/modules/ROOT/pages/howto-tls-passphrase.adoc
index 467e734..37d0747 100644
--- a/modules/ROOT/pages/howto-tls-passphrase.adoc
+++ b/modules/ROOT/pages/howto-tls-passphrase.adoc
@@ -32,7 +32,7 @@ Refer to xref:howto-tls.adoc#creating-secrets[TLS Secret Create] documentation t
== Passphrase registration
-The Autonomous Operator is capable of registering a local script or a rest endpoint to generate the secret passphrase used by a private key.
+The Kubernetes Operator is capable of registering a local script or a REST endpoint to generate the secret passphrase used by a private key.
Passphrase TLS can be enabled from a non-TLS cluster or from a cluster with plain TLS keys.
When enabling passphrase TLS on a cluster that is already provisioned, the Couchbase cluster will enter a rolling upgrade of the server pods.
diff --git a/modules/ROOT/pages/howto-xdcr.adoc b/modules/ROOT/pages/howto-xdcr.adoc
index 5de4f0b..8dd5d59 100644
--- a/modules/ROOT/pages/howto-xdcr.adoc
+++ b/modules/ROOT/pages/howto-xdcr.adoc
@@ -325,7 +325,7 @@ The hostname is calculated as per the xref:howto-client-sdks.adoc#ip-based-addre
== Scopes and collections support
With Couchbase Server version 7 and greater, scope and collections support is now present for XDCR.
-The Couchbase Autonomous Operator fully supports the various options available to the Couchbase Server version it is running with, full details can be found in the xref:server:manage:manage-xdcr/replicate-using-scopes-and-collections.html[official documentation].
+The Couchbase Kubernetes Operator fully supports the various options available to the Couchbase Server version it is running with; full details can be found in the xref:server:manage:manage-xdcr/replicate-using-scopes-and-collections.html[official documentation].
[NOTE]
====
diff --git a/modules/ROOT/pages/install-kubernetes.adoc b/modules/ROOT/pages/install-kubernetes.adoc
index 0797d51..707e605 100644
--- a/modules/ROOT/pages/install-kubernetes.adoc
+++ b/modules/ROOT/pages/install-kubernetes.adoc
@@ -2,11 +2,11 @@
:page-aliases: install-admission-controller, list-and-describe
[abstract]
-This guide walks through the recommended procedure for installing the Couchbase Autonomous Operator on an open source Kubernetes cluster that has _RBAC enabled_.
+This guide walks through the recommended procedure for installing the Couchbase Kubernetes Operator on an open source Kubernetes cluster that has _RBAC enabled_.
[IMPORTANT]
====
-If you are looking to upgrade an existing installation of the Operator, see xref:howto-operator-upgrade.adoc[Upgrading the Autonomous Operator].
+If you are looking to upgrade an existing installation of the Operator, see xref:howto-operator-upgrade.adoc[Upgrading the Kubernetes Operator].
====
== Helm Installation
@@ -28,6 +28,21 @@ Make sure to `cd` into this directory before you run the commands in this guide.
All commands in this guide are run as a system administrator account; they require the creation of cluster-scoped resources or the granting of roles to service accounts (privilege escalation).
+== Certified Kubernetes Platforms
+
+There are numerous certified Kubernetes offerings that are conformant to the https://www.cncf.io/certification/software-conformance/[CNCF Certified Kubernetes program^].
+This program ensures that every vendor’s version of Kubernetes supports the required APIs, as do open source community versions.
+
+Couchbase runs its own certification suites on the most widely used vendors to ensure the Couchbase Kubernetes Operator is fully compatible and supported.
+
+The following certified Kubernetes hosted vendors are supported by the Kubernetes Operator:
+
+* Amazon Elastic Kubernetes Service (EKS)
+* Google Kubernetes Engine (GKE)
+* Microsoft Azure Kubernetes Service (AKS)
+* Red Hat OpenShift Container Platform
+* Rancher Kubernetes Engine (RKE)
+
== Install the CRD
The first step in installing the Operator is to install the custom resource definitions (CRD) that describe the Couchbase resource types.
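Assuming the standard package layout, the CRDs are typically applied with a single command from the package directory (the file name reflects the package as shipped; adjust the path if yours differs):

[source,console]
----
$ kubectl apply -f crd.yaml
----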
diff --git a/modules/ROOT/pages/install-openshift.adoc b/modules/ROOT/pages/install-openshift.adoc
index 49a198b..5f2e7da 100644
--- a/modules/ROOT/pages/install-openshift.adoc
+++ b/modules/ROOT/pages/install-openshift.adoc
@@ -1,11 +1,11 @@
= Install the Operator on OpenShift
[abstract]
-This guide walks through the recommended procedure for installing the Couchbase Autonomous Operator on a Red Hat OpenShift project.
+This guide walks through the recommended procedure for installing the Couchbase Kubernetes Operator on a Red Hat OpenShift project.
[IMPORTANT]
====
-If you are looking to upgrade an existing installation of the Operator, see xref:howto-operator-upgrade.adoc[Upgrading the Autonomous Operator].
+If you are looking to upgrade an existing installation of the Operator, see xref:howto-operator-upgrade.adoc[Upgrading the Kubernetes Operator].
====
== Prerequisites
@@ -45,7 +45,7 @@ Refer to the xref:concept-operator.adoc[operator architecture document] for addi
[IMPORTANT]
====
-If you use the Openshift Marketplace UI to deploy the Couchbase Autonomous Operator, the dynamic admission controller (DAC) will not be deployed.
+If you use the OpenShift Marketplace UI to deploy the Couchbase Kubernetes Operator, the dynamic admission controller (DAC) will not be deployed.
It is recommended that you use the `cao create admission` command to deploy the DAC after installing the Operator.
====
diff --git a/modules/ROOT/pages/overview.adoc b/modules/ROOT/pages/overview.adoc
index 402997b..a10f594 100644
--- a/modules/ROOT/pages/overview.adoc
+++ b/modules/ROOT/pages/overview.adoc
@@ -1,8 +1,8 @@
= Introduction
-The Couchbase Autonomous Operator provides native integration of Couchbase Server with open source Kubernetes and Red Hat OpenShift.
-It enables you to automate the management of common Couchbase tasks such as the configuration, creation, scaling, and recovery of Couchbase clusters.
-By reducing the complexity of running a Couchbase cluster, it lets you focus on the desired configuration and not worry about the details of manual deployment and life-cycle management.
+The Couchbase Cloud-Native Database is the native integration of Couchbase Server with cloud-native technologies, facilitated by the Couchbase Kubernetes Operator.
+This integration empowers organizations to build and run scalable stateful applications in modern, dynamic environments such as public, private, and hybrid clouds.
+Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
== What Does it Support?
diff --git a/modules/ROOT/pages/prerequisite-and-setup.adoc b/modules/ROOT/pages/prerequisite-and-setup.adoc
index ae1330e..522d7f6 100644
--- a/modules/ROOT/pages/prerequisite-and-setup.adoc
+++ b/modules/ROOT/pages/prerequisite-and-setup.adoc
@@ -3,9 +3,9 @@
include::partial$constants.adoc[]
[abstract]
-The Autonomous Operator supports several popular Kubernetes environments and cloud-native utilities.
+The Kubernetes Operator supports several popular Kubernetes environments and cloud-native utilities.
-To install the Couchbase Autonomous Operator, all you need is a Kubernetes or OpenShift cluster running one of the <>.
+To install the Couchbase Kubernetes Operator, all you need is a Kubernetes or OpenShift cluster running one of the <>.
NOTE: For all supported software versions listed on this page, maintenance/patch releases (x.x**.X**) inherit the same support level, unless noted otherwise.
@@ -146,7 +146,7 @@ This release supports the following managed Kubernetes services and utilities:
== Persistent Volume Compatibility
Persistent volumes are mandatory for production deployments.
-Review the Autonomous Operator xref:best-practices.adoc#persistent-volumes-best-practices[best practices] for more information about cluster supportability requirements.
+Review the Kubernetes Operator xref:best-practices.adoc#persistent-volumes-best-practices[best practices] for more information about cluster supportability requirements.
== Hardware Requirements
@@ -170,7 +170,7 @@ You can read more about pod scheduling in the xref:best-practices.adoc#pod-sched
=== Architecture requirements
-The Autonomous Operator supports both ARM and AMD64 Kubernetes clusters.
+The Kubernetes Operator supports both ARM and AMD64 Kubernetes clusters.
The architecture of each node must be uniform across the cluster, as mixed-architecture nodes are not supported.
NOTE: The official Couchbase Docker repository contains multi-arch images, which do not require explicit references to architecture tags when being pulled and deployed.
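The architecture of each worker node can be verified before deployment, for example:

[source,console]
----
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
----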
diff --git a/modules/ROOT/pages/prerequisite-cloud.adoc b/modules/ROOT/pages/prerequisite-cloud.adoc
index f79f553..da881c6 100644
--- a/modules/ROOT/pages/prerequisite-cloud.adoc
+++ b/modules/ROOT/pages/prerequisite-cloud.adoc
@@ -135,7 +135,7 @@ The https://cloud.google.com/sdk/[Google Cloud SDK^] can allow you to authentica
=== Authorization
-By default, Google GKE does not provide the necessary privileges required to deploy the Autonomous Operator.
+By default, Google GKE does not provide the necessary privileges required to deploy the Kubernetes Operator.
The required privileges can be granted to a specific user with the following command:
[source,console]
diff --git a/modules/ROOT/pages/reference-admission-cli.adoc b/modules/ROOT/pages/reference-admission-cli.adoc
index 9789ea9..183f14b 100644
--- a/modules/ROOT/pages/reference-admission-cli.adoc
+++ b/modules/ROOT/pages/reference-admission-cli.adoc
@@ -1,7 +1,7 @@
= Dynamic Admission Controller Deployment Settings
[abstract]
-Command line options for the Autonomous Operator Dynamic Admission Controller.
+Command line options for the Kubernetes Operator Dynamic Admission Controller.
== Dynamic Admission Controller Deployment
diff --git a/modules/ROOT/pages/reference-operator-configuration.adoc b/modules/ROOT/pages/reference-operator-configuration.adoc
index 472efcd..62b30b0 100644
--- a/modules/ROOT/pages/reference-operator-configuration.adoc
+++ b/modules/ROOT/pages/reference-operator-configuration.adoc
@@ -3,7 +3,7 @@
== Operator Deployment
-The Couchbase Autonomous Operator configuration is defined below.
+The Couchbase Kubernetes Operator configuration is defined below.
This is intended as a reference only; prefer the xref:tools/cao.adoc[`cao` utility] or xref:helm-setup-guide.adoc[Helm], as these handle configuration for you and provide an abstraction layer that is less prone to modification.
IMPORTANT: Most of the fields in the Operator configuration should never be changed and it is recommended that you use the configuration as is. However, there are some exceptions noted below.
@@ -140,9 +140,9 @@ Controls at what level a log message generates a stack trace for debugging purpo
== Environment Variables
-When running in different contexts such as Red Hat Marketplace in https://docs.openshift.com/container-platform/4.6/operators/operator_sdk/osdk-generating-csvs.html#olm-enabling-operator-for-restricted-network_osdk-generating-csvs[disconnected mode^], additional environment variables may be added to adjust behavior of the Autonomous Operator.
+When running in different contexts such as Red Hat Marketplace in https://docs.openshift.com/container-platform/4.6/operators/operator_sdk/osdk-generating-csvs.html#olm-enabling-operator-for-restricted-network_osdk-generating-csvs[disconnected mode^], additional environment variables may be added to adjust behavior of the Kubernetes Operator.
-The following example shows supported image variables that can be used to override the default images provided to the Autonomous Operator.
+The following example shows supported image variables that can be used to override the default images provided to the Kubernetes Operator.
[source,yaml]
----
spec:
diff --git a/modules/ROOT/pages/reference-operator-logging.adoc b/modules/ROOT/pages/reference-operator-logging.adoc
index a0c386c..3fe9f43 100644
--- a/modules/ROOT/pages/reference-operator-logging.adoc
+++ b/modules/ROOT/pages/reference-operator-logging.adoc
@@ -1,11 +1,11 @@
-= Autonomous Operator Log Attributes
+= Kubernetes Operator Log Attributes
[abstract]
-Autonomous Operator logs contain some fixed attributes that can be reliably used in your logging infrastructure.
+Kubernetes Operator logs contain some fixed attributes that can be reliably used in your logging infrastructure.
== Introduction
-The Autonomous Operator emits logs on the pod console.
+The Kubernetes Operator emits logs on the pod console.
Logs are structured as JSON for simple machine parsing (though they are schemaless).
Logs look similar to the following:
@@ -18,7 +18,7 @@ Logs look similar to the following:
{"level":"info","ts":1580377226.7565176,"logger":"cluster","msg":"Creating XDCR remote cluster","cluster":"default/cb-example","remote":"remote"}
----
-The Autonomous Operator uses the https://pkg.go.dev/go.uber.org/zap[zap^] library to generate logging information.
+The Kubernetes Operator uses the https://pkg.go.dev/go.uber.org/zap[zap^] library to generate logging information.
Each entry is organized as a JSON object with key/value pairs of information.
Some attributes are fixed and can be reliably used in your logging infrastructure and are documented in the following sections.
@@ -28,7 +28,7 @@ Some attributes are fixed and can be reliably used in your logging infrastructur
`level`::
This is the log level at which the message was emitted.
Valid values are `info`, `error`, `debug`, and `-N` (where N is an integer value based on debug level).
-Levels that are emitted depend on the level that is specified in the Autonomous Operator xref:reference-operator-configuration.adoc[deployment settings].
+Levels that are emitted depend on the level that is specified in the Kubernetes Operator xref:reference-operator-configuration.adoc[deployment settings].
`ts`::
This is the message time stamp.
diff --git a/modules/ROOT/pages/reference-operator-rbac.adoc b/modules/ROOT/pages/reference-operator-rbac.adoc
index 4e3c151..a89bc08 100644
--- a/modules/ROOT/pages/reference-operator-rbac.adoc
+++ b/modules/ROOT/pages/reference-operator-rbac.adoc
@@ -11,7 +11,7 @@ A role essentially maps a name to a set of permissions that a user or service is
The key distinction is that cluster-scoped roles can be specified once and used in any namespace, whereas namespace-scoped roles must be defined in each namespace in which they are used.
-The Couchbase Autonomous Operator is currently scoped to a namespace and needs access to various resources within that namespace in order to function correctly. The resources required by the Operator are:
+The Couchbase Kubernetes Operator is currently scoped to a namespace and needs access to various resources within that namespace in order to function correctly. The resources required by the Operator are:
couchbase.com/couchbaseclusters::
couchbase.com/couchbaseclusters/finalizers::
diff --git a/modules/ROOT/pages/reference-prometheus-metrics.adoc b/modules/ROOT/pages/reference-prometheus-metrics.adoc
index 518366e..b762ed5 100644
--- a/modules/ROOT/pages/reference-prometheus-metrics.adoc
+++ b/modules/ROOT/pages/reference-prometheus-metrics.adoc
@@ -1,7 +1,7 @@
= Prometheus Metrics Reference
[abstract]
-This page captures the metrics supplied to Prometheus by the Couchbase Autonomous Operator and links reference pages of a number of additional metrics that are exported by third party libraries.
+This page captures the metrics supplied to Prometheus by the Couchbase Kubernetes Operator and links to reference pages for a number of additional metrics that are exported by third-party libraries.
== Operator Metrics
diff --git a/modules/ROOT/pages/release-notes.adoc b/modules/ROOT/pages/release-notes.adoc
index aff2537..fe6f106 100644
--- a/modules/ROOT/pages/release-notes.adoc
+++ b/modules/ROOT/pages/release-notes.adoc
@@ -1,7 +1,7 @@
-= Release Notes for Couchbase Autonomous Operator {operator-version-minor}
+= Release Notes for Couchbase Kubernetes Operator {operator-version-minor}
include::partial$constants.adoc[]
-Autonomous Operator {operator-version-minor} introduces a preview of our new Cluster Migration functionality well as a number of other improvements and minor fixes.
+Kubernetes Operator {operator-version-minor} introduces a preview of our new Cluster Migration functionality, as well as a number of other improvements and minor fixes.
Take a look at the xref:whats-new.adoc[What's New] page for a list of new features and improvements that are available in this release.
@@ -12,9 +12,9 @@ For installation instructions, refer to:
* xref:install-kubernetes.adoc[]
* xref:install-openshift.adoc[]
-== Upgrading to Autonomous Operator {operator-version-minor}
+== Upgrading to Kubernetes Operator {operator-version-minor}
-The necessary steps needed to upgrade to this release depend on which version of the Autonomous Operator you are upgrading from.
+The steps needed to upgrade to this release depend on which version of the Kubernetes Operator you are upgrading from.
=== Upgrading from 1.x, 2.0, or 2.1
@@ -31,7 +31,7 @@ For further information read the xref:concept-upgrade.adoc[Couchbase Upgrade] co
[#release-v280]
== Release 2.8.0
-Couchbase Autonomous Operator 2.8.0 was released in March 2025.
+Couchbase Kubernetes Operator 2.8.0 was released in March 2025.
[#changes-in-behavior-v280]
=== Changes in Behaviour
diff --git a/modules/ROOT/pages/tutorial-autoscale-data.adoc b/modules/ROOT/pages/tutorial-autoscale-data.adoc
index 97aa690..d5e4030 100644
--- a/modules/ROOT/pages/tutorial-autoscale-data.adoc
+++ b/modules/ROOT/pages/tutorial-autoscale-data.adoc
@@ -2,13 +2,13 @@
include::partial$constants.adoc[]
[abstract]
-Learn how to configure auto-scaling for Data Service nodes using the Autonomous Operator.
+Learn how to configure auto-scaling for Data Service nodes using the Kubernetes Operator.
include::partial$tutorial.adoc[]
== Introduction
-In this tutorial you'll learn how to use the Autonomous Operator to automatically scale the Couchbase Data Service in order to maintain a target memory utilization threshold for an xref:server:learn:buckets-memory-and-storage/buckets.adoc[Ephemeral bucket].
+In this tutorial you'll learn how to use the Kubernetes Operator to automatically scale the Couchbase Data Service in order to maintain a target memory utilization threshold for an xref:server:learn:buckets-memory-and-storage/buckets.adoc[Ephemeral bucket].
You'll also learn more about how the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler^] (HPA) initiates a request to scale the Data Service in order to maintain desired thresholds.
[[before-you-begin]]
@@ -19,7 +19,7 @@ Before you begin this tutorial, you'll need to set up a few things first:
* You'll need a Kubernetes cluster with at least seven available worker nodes.
** Worker nodes should have 4 vCPU and 16 GiB memory in order to exhibit the expected auto-scaling behavior that you'll be initiating later on in this tutorial.
-* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Autonomous Operator, the Couchbase cluster, etc.)
+* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Kubernetes Operator, the Couchbase cluster, etc.)
** Once you have Helm installed, you'll need to add the Couchbase chart repository:
+
@@ -115,8 +115,8 @@ $ helm upgrade --install -f autoscale_values.yaml scale couchbase/couchbase-oper
[NOTE]
====
-The Couchbase chart deploys the Autonomous Operator by default.
-If you already have the Autonomous Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
+The Couchbase chart deploys the Kubernetes Operator by default.
+If you already have the Kubernetes Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
[source,console]
----
@@ -146,7 +146,7 @@ Events:
Normal EventAutoscalerCreated 22m Autoscaler for config `default` added
----
-The Autonomous Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
+The Kubernetes Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
The Operator also keeps the size of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource in sync with the size of its associated server class configuration.
Run the following command to verify that the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource exists and matches the size of its associated server configuration:
@@ -162,10 +162,10 @@ default.scale-couchbase-cluster 2 default <.> <.>
In the console output, you'll see:
-<.> `NAME`: The Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
-Considering that we enabled auto-scaling for the `default` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Autonomous Operator will be `default.scale-couchbase-cluster`.
+<.> `NAME`: The Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
+Considering that we enabled auto-scaling for the `default` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Kubernetes Operator will be `default.scale-couchbase-cluster`.
-<.> `SIZE`: This is the current number of Couchbase nodes that the Autonomous Operator is maintaining for the `default` server class.
+<.> `SIZE`: This is the current number of Couchbase nodes that the Kubernetes Operator is maintaining for the `default` server class.
Considering that we set `servers.default.size` to `2` in our cluster configuration, and because the cluster doesn't yet have the ability to automatically scale, we can expect that the `SIZE` listed here will be `2`.
Once we create an HPA for the `default` server class, and the number of `default` nodes begins to scale, the `SIZE` will update to reflect the number of nodes currently being maintained.
@@ -243,10 +243,10 @@ spec:
EOF
----
-<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Autonomous Operator when you enable auto-scaling for a particular server class.
+<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Kubernetes Operator when you enable auto-scaling for a particular server class.
<.> `scaleTargetRef.name`: This field needs to reference the `name` of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource.
-Since the Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need to specify is `default.scale-couchbase-cluster`.
+Since the Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need to specify is `default.scale-couchbase-cluster`.
+
--
[TIP]
@@ -404,7 +404,7 @@ Delete the HPA:
$ kubectl delete hpa data-hpa
----
-Uninstall both the Autonomous Operator and Couchbase cluster by deleting the Helm release:
+Uninstall both the Kubernetes Operator and Couchbase cluster by deleting the Helm release:
[source,console]
----
diff --git a/modules/ROOT/pages/tutorial-autoscale-index.adoc b/modules/ROOT/pages/tutorial-autoscale-index.adoc
index 8727705..1850caf 100644
--- a/modules/ROOT/pages/tutorial-autoscale-index.adoc
+++ b/modules/ROOT/pages/tutorial-autoscale-index.adoc
@@ -2,13 +2,13 @@
include::partial$constants.adoc[]
[abstract]
-Learn how to configure auto-scaling for Index Service nodes using the Autonomous Operator.
+Learn how to configure auto-scaling for Index Service nodes using the Kubernetes Operator.
include::partial$tutorial.adoc[]
== Introduction
-In this tutorial you'll learn how to use the Autonomous Operator to automatically scale the Couchbase Index Service in order to maintain a target memory utilization threshold for indexes.
+In this tutorial you'll learn how to use the Kubernetes Operator to automatically scale the Couchbase Index Service in order to maintain a target memory utilization threshold for indexes.
You'll also learn more about how the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler^] (HPA) initiates a request to scale the Index Service in order to maintain desired thresholds.
[[before-you-begin]]
@@ -19,7 +19,7 @@ Before you begin this tutorial, you'll need to set up a few things first:
* You'll need a Kubernetes cluster with at least eight available worker nodes.
** Worker nodes should have 4 vCPU and 16 GiB memory in order to exhibit the expected auto-scaling behavior that you'll be initiating later on in this tutorial.
-* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Autonomous Operator, the Couchbase cluster, etc.)
+* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Kubernetes Operator, the Couchbase cluster, etc.)
** Once you have Helm installed, you'll need to add the Couchbase chart repository:
+
@@ -121,7 +121,7 @@ EOF
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-cluster-indexservicememoryquota[`couchbaseclusters.spec.cluster.indexServiceMemoryQuota`]: For demonstration purposes, we're setting the default minimum memory quota for the Index Service (`256Mi`) so that we can more quickly and easily induce auto-scaling.
+
-NOTE: This cluster configuration uses the default index storage mode set by the Autonomous Operator, which is `memory_optimized`.
+NOTE: This cluster configuration uses the default index storage mode set by the Kubernetes Operator, which is `memory_optimized`.
This allows us to demonstrate the benefits of auto-scaling in situations when persisting to disk isn't an option.
@@ -141,8 +141,8 @@ $ helm upgrade --install -f autoscale_values.yaml scale couchbase/couchbase-oper
[NOTE]
====
-The Couchbase chart deploys the Autonomous Operator by default.
-If you already have the Autonomous Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
+The Couchbase chart deploys the Kubernetes Operator by default.
+If you already have the Kubernetes Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
[source,console]
----
@@ -172,7 +172,7 @@ Events:
Normal EventAutoscalerCreated 22m Autoscaler for config `index` added
----
-The Autonomous Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
+The Kubernetes Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
The Operator also keeps the size of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource in sync with the size of its associated server class configuration.
Run the following command to verify that the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource exists and matches the size of its associated server configuration:
@@ -188,10 +188,10 @@ index.scale-couchbase-cluster 1 index <.> <.>
In the console output, you'll see:
-<.> `NAME`: The Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
-Considering that we enabled auto-scaling for the `index` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Autonomous Operator will be `index.scale-couchbase-cluster`.
+<.> `NAME`: The Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
+Considering that we enabled auto-scaling for the `index` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Kubernetes Operator will be `index.scale-couchbase-cluster`.
-<.> `SIZE`: This is the current number of Couchbase nodes that the Autonomous Operator is maintaining for the `index` server class.
+<.> `SIZE`: This is the current number of Couchbase nodes that the Kubernetes Operator is maintaining for the `index` server class.
Considering that we set `servers.index.size` to `1` in our cluster configuration, and because the cluster doesn't yet have the ability to automatically scale, we can expect that the `SIZE` listed here will be `1`.
Once we create an HPA for the `index` server class, and the number of `index` nodes begins to scale, the `SIZE` will update to reflect the number of nodes currently being maintained.
@@ -268,10 +268,10 @@ spec:
EOF
----
-<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Autonomous Operator when you enable auto-scaling for a particular server class.
+<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Kubernetes Operator when you enable auto-scaling for a particular server class.
<.> `scaleTargetRef.name`: This field needs to reference the `name` of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource.
-Since the Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need to specify is `index.scale-couchbase-cluster`.
+Since the Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need to specify is `index.scale-couchbase-cluster`.
+
--
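Putting the two `scaleTargetRef` callouts together, a minimal HPA manifest for this tutorial might be sketched as follows (the `apiVersion` of the Couchbase custom resource and the replica bounds are assumptions for illustration, not taken from this page):

[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: index-hpa
spec:
  scaleTargetRef:
    apiVersion: couchbase.com/v2          # assumed API group/version of the custom resource
    kind: CouchbaseAutoscaler             # the kind the Operator creates automatically
    name: index.scale-couchbase-cluster   # <server-class>.<cluster-name>
  minReplicas: 1                          # assumed bounds for illustration
  maxReplicas: 6
----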
[TIP]
@@ -462,7 +462,7 @@ Uninstall the monitoring stack by deleting the Helm release:
$ helm delete monitor
----
-Uninstall both the Autonomous Operator and Couchbase cluster by deleting the Helm release:
+Uninstall both the Kubernetes Operator and Couchbase cluster by deleting the Helm release:
[source,console]
----
diff --git a/modules/ROOT/pages/tutorial-autoscale-query.adoc b/modules/ROOT/pages/tutorial-autoscale-query.adoc
index 5453e6c..e34b197 100644
--- a/modules/ROOT/pages/tutorial-autoscale-query.adoc
+++ b/modules/ROOT/pages/tutorial-autoscale-query.adoc
@@ -2,13 +2,13 @@
:page-aliases: tutorial-autoscale.adoc
[abstract]
-Learn how to configure auto-scaling for Query Service nodes using the Autonomous Operator.
+Learn how to configure auto-scaling for Query Service nodes using the Kubernetes Operator.
include::partial$tutorial.adoc[]
== Introduction
-In this tutorial you'll learn how to use the Autonomous Operator to automatically scale the Couchbase Query Service in order to maintain a target CPU utilization threshold.
+In this tutorial you'll learn how to use the Kubernetes Operator to automatically scale the Couchbase Query Service in order to maintain a target CPU utilization threshold.
You'll also learn more about how the Kubernetes https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler^] (HPA) initiates a request to scale the Query Service in order to maintain desired performance thresholds.
@@ -20,7 +20,7 @@ Before you begin this tutorial, you'll need to set up a few things first:
* You'll need a Kubernetes cluster with at least 10 available worker nodes.
** Worker nodes should have 4 vCPU and 16 GiB memory in order to exhibit the expected auto-scaling behavior that you'll be initiating later on in this tutorial.
-* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1^] or higher for installing the necessary dependencies (e.g. the Autonomous Operator, the Couchbase cluster, etc.)
+* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1^] or higher for installing the necessary dependencies (e.g. the Kubernetes Operator, the Couchbase cluster, etc.)
** Once you have Helm installed, you'll need to add the Couchbase chart repository:
+
@@ -164,8 +164,8 @@ $ helm install -f autoscale_values.yaml scale couchbase/couchbase-operator
[NOTE]
====
-The Couchbase chart deploys the Autonomous Operator by default.
-If you already have the Autonomous Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
+The Couchbase chart deploys the Kubernetes Operator by default.
+If you already have the Kubernetes Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
[source,console]
----
@@ -195,7 +195,7 @@ Events:
Normal EventAutoscalerCreated 22m Autoscaler for config `query` added
----
-The Autonomous Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
+The Kubernetes Operator automatically creates a xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource for each server class configuration that has xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-autoscaleenabled[`couchbaseclusters.spec.servers.autoscaleEnabled`] set to `true`.
The Operator also keeps the size of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource in sync with the size of its associated server class configuration.
Run the following command to verify that the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource exists and matches the size of its associated server configuration:
@@ -211,10 +211,10 @@ query.scale-couchbase-cluster 2 query <.> <.>
In the console output, you'll see:
-<.> `NAME`: The Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
-Considering that we enabled auto-scaling for the `query` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Autonomous Operator will be `query.scale-couchbase-cluster`.
+<.> `NAME`: The Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`.
+Considering that we enabled auto-scaling for the `query` server class configuration, and the name of our cluster is `scale-couchbase-cluster`, we can determine that the name of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource created by the Kubernetes Operator will be `query.scale-couchbase-cluster`.
-<.> `SIZE`: This is the current number of Couchbase nodes that the Autonomous Operator is maintaining for the `query` server class.
+<.> `SIZE`: This is the current number of Couchbase nodes that the Kubernetes Operator is maintaining for the `query` server class.
Considering that we set `servers.query.size` to `2` in our cluster configuration, and because the cluster doesn't yet have the ability to automatically scale, we can expect that the `SIZE` listed here will be `2`.
Once we create an HPA for the `query` server class, and the number of `query` nodes begins to scale, the `SIZE` will update to reflect the number of nodes currently being maintained.
@@ -287,10 +287,10 @@ spec:
EOF
----
-<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Autonomous Operator when you enable auto-scaling for a particular server class.
+<.> `scaleTargetRef.kind`: This field must be set to xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`], which is the `kind` of custom resource that gets automatically created by the Kubernetes Operator when you enable auto-scaling for a particular server class.
<.> `scaleTargetRef.name`: This field needs to reference the `name` of the xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resource.
-Since the Autonomous Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need specify is `query.scale-couchbase-cluster`.
+Since the Kubernetes Operator creates xref:resource/couchbaseautoscaler.adoc[`CouchbaseAutoscaler`] custom resources with the name format `__.__`, the name we'll need to specify is `query.scale-couchbase-cluster`.
+
--
[TIP]
@@ -610,7 +610,7 @@ Delete the HPA:
$ kubectl delete hpa query-cpu-hpa
----
-Uninstall both the Autonomous Operator and Couchbase cluster by deleting the Helm release:
+Uninstall both the Kubernetes Operator and Couchbase cluster by deleting the Helm release:
[source,console]
----
diff --git a/modules/ROOT/pages/tutorial-cert-manager.adoc b/modules/ROOT/pages/tutorial-cert-manager.adoc
index 7ef360a..c15c28d 100644
--- a/modules/ROOT/pages/tutorial-cert-manager.adoc
+++ b/modules/ROOT/pages/tutorial-cert-manager.adoc
@@ -8,7 +8,7 @@ include::partial$tutorial.adoc[]
== What is cert-manager?
-https://cert-manager.io/[cert-manager^] is a Kubernetes controller -- much like the Autonomous Operator -- that defines Kubernetes resource types that represent X.509 primitives.
+https://cert-manager.io/[cert-manager^] is a Kubernetes controller -- much like the Kubernetes Operator -- that defines Kubernetes resource types that represent X.509 primitives.
The key resources we will examine in further depth are:
Issuer::
@@ -27,7 +27,7 @@ Certificates are initially issued by cert-manager, then at a defined time before
Certificates are stored in a Kubernetes `Secret` resource, broadly similar to the standard `kubernetes.io/tls` secret type.
The one major difference is that, along with `tls.crt` and `tls.key` secret data, there is also a `ca.crt` copied in from the `Issuer`.
-(You may notice that this has heavily influenced the design of the Autonomous Operator's TLS interface.)
+(You may notice that this has heavily influenced the design of the Kubernetes Operator's TLS interface.)
[[before-we-begin]]
== Before We Begin
@@ -37,11 +37,11 @@ Before continuing with this tutorial, please ensure the following:
* You have installed cert-manager.
Follow the official installation guides at https://cert-manager.io/docs/installation/[`cert-manager.io`^].
-* You have installed Autonomous Operator 2.2 or higher.
-This tutorial assumes that the installed resources are present, and also leverages the xref:tools/cao.adoc[`cao`] command line tool that comes with the Autonomous Operator binary package.
+* You have installed the Kubernetes Operator, version 2.2 or higher.
+This tutorial assumes that the installed resources are present, and also leverages the xref:tools/cao.adoc[`cao`] command line tool that comes with the Kubernetes Operator binary package.
+
-NOTE: You can actually integrate/perform the steps in this tutorial as part of the process of installing the Autonomous Operator.
-However, for the sake of making the tutorial more straightforward, it is assumed that you've already performed a basic xref:install-kubernetes.adoc[installation] of the Autonomous Operator.
+NOTE: You can actually integrate/perform the steps in this tutorial as part of the process of installing the Kubernetes Operator.
+However, for the sake of making the tutorial more straightforward, it is assumed that you've already performed a basic xref:install-kubernetes.adoc[installation] of the Kubernetes Operator.
Another thing to note is that all of the commands in this tutorial are run from the same, default, namespace.
cert-manager runs cluster scoped, and can see Issuers and Certificates in any namespace, so you can use any namespace you desire.
@@ -120,13 +120,13 @@ $ kubectl apply -f ca-issuer.yaml
== Using cert-manager with the DAC
-The DAC xref:concept-operator.adoc#dynamic-admission-controller[component] of the Autonomous Operator distribution has built-in certificate rotation detection and handling.
+The DAC xref:concept-operator.adoc#dynamic-admission-controller[component] of the Kubernetes Operator distribution has built-in certificate rotation detection and handling.
This makes it a perfect candidate for automation.
[[uninstall-the-existing-dac]]
=== Uninstall the Existing DAC
-As mentioned in the <> section, this tutorial assumes that you've already performed a basic xref:install-kubernetes.adoc[installation] of the Autonomous Operator.
+As mentioned in the <> section, this tutorial assumes that you've already performed a basic xref:install-kubernetes.adoc[installation] of the Kubernetes Operator.
As part of the installation, you would have also installed the DAC.
Before we can continue with the tutorial, we need to _uninstall_ the DAC.
(Don't worry, we'll be re-installing it soon.)
@@ -210,7 +210,7 @@ couchbase-operator-admission True couchbase-operator-admission 4s
=== Update the DAC Configuration Settings
-When you install the Autonomous Operator, the xref:tools/cao.adoc[`cao`] tool automatically creates a default TLS configuration for you.
+When you install the Kubernetes Operator, the xref:tools/cao.adoc[`cao`] tool automatically creates a default TLS configuration for you.
However, we don't have to worry about this default configuration because we removed it entirely when we uninstalled the DAC in the section <>.
Therefore, in this next step, we will generate a brand new DAC configuration, modify it to make use of our managed certificates, and then submit it to Kubernetes to redeploy the DAC.
@@ -289,7 +289,7 @@ If this is the case, check that the correct CA is installed in the webhooks, and
== Using cert-manager with a Couchbase Cluster
-The Autonomous Operator has been capable of rotating Couchbase Server certificates since version 2.0.
+The Kubernetes Operator has been capable of rotating Couchbase Server certificates since version 2.0.
With the 2.2 release, we introduced the ability to use new certificate formats, in particular the standard form used by cert-manager.
This allows the database's TLS configuration to be specified in code, along with security policies.
The key benefits of using this are 1.) oversight -- the ability to peer review configuration; and 2.) auditing -- providing simple, policy driven security constraints.
@@ -368,7 +368,7 @@ couchbase-operator-admission True couchbase-operator-admission 32m
=== Create the Couchbase Cluster
Consuming certificates issued by cert-manager is fairly straightforward.
-Using the `couchbase-cluster.yaml` template from the Autonomous Operator binary package, make the following edits to the `CouchbaseCluster` custom resource:
+Using the `couchbase-cluster.yaml` template from the Kubernetes Operator binary package, make the following edits to the `CouchbaseCluster` custom resource:
[source,console]
----
@@ -409,9 +409,9 @@ spec:
- query
----
-<.> The `secretSource` TLS provider tells the Autonomous Operator that the secrets will be in `kubernetes.io/tls` format.
+<.> The `secretSource` TLS provider tells the Kubernetes Operator that the secrets will be in `kubernetes.io/tls` format.
This also requires a `ca.crt` key, which cert-manager provides automatically.
-<.> The `serverSecretName` tells the Autonomous Operator to use the cert-manager issued secret defined in the previous section <>.
+<.> The `serverSecretName` tells the Kubernetes Operator to use the cert-manager issued secret defined in the previous section <>.
Run the following command to create the `Certificate` resource:
@@ -437,4 +437,4 @@ cb-example 7.2.3 3 Available 8c531b1cc11680c286d04616cc7a8185 3m3
== Summary
-By embracing the `kubernetes.io/tls` format, the Autonomous Operator is able to draw upon great 3rd-party tools, like cert-manager, that offer simplicity, security, and regulation within your Kubernetes environment.
+By embracing the `kubernetes.io/tls` format, the Kubernetes Operator is able to draw upon great third-party tools, like cert-manager, that offer simplicity, security, and regulation within your Kubernetes environment.
diff --git a/modules/ROOT/pages/tutorial-couchbase-log-forwarding.adoc b/modules/ROOT/pages/tutorial-couchbase-log-forwarding.adoc
index 128dbc5..f8b7e40 100644
--- a/modules/ROOT/pages/tutorial-couchbase-log-forwarding.adoc
+++ b/modules/ROOT/pages/tutorial-couchbase-log-forwarding.adoc
@@ -3,7 +3,7 @@ include::partial$constants.adoc[]
:example-prerequisites: This example assumes you've deployed the CouchbaseCluster resource described at the <>. It also assumes that you are familiar with how to <>.
[abstract]
-Learn how to configure the Autonomous Operator to forward Couchbase logs using Fluent Bit.
+Learn how to configure the Kubernetes Operator to forward Couchbase logs using Fluent Bit.
include::partial$tutorial.adoc[]
@@ -11,9 +11,9 @@ include::partial$tutorial.adoc[]
Having a containerized application's logs available on standard console output is desirable in Kubernetes environments, since it allows for https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs[simple debugging^], as well as standards-based integration with centralized log management systems running in a Kubernetes cluster.
Unfortunately, the Couchbase Server container doesn't natively write its logs to standard console output.
-Instead, the xref:concept-couchbase-logging.adoc#default-logging[default behavior] of the Couchbase Server container (in deployments managed by the Autonomous Operator) is to write its xref:server:manage:manage-logging/manage-logging.adoc#log-file-listing[various log files] to the `default` or `logs` persistent volumes.
+Instead, the xref:concept-couchbase-logging.adoc#default-logging[default behavior] of the Couchbase Server container (in deployments managed by the Kubernetes Operator) is to write its xref:server:manage:manage-logging/manage-logging.adoc#log-file-listing[various log files] to the `default` or `logs` persistent volumes.
-However, as of version 2.2, the Autonomous Operator can optionally deploy and manage a third party log processor on each Couchbase pod which enables Couchbase Server logs to be xref:concept-couchbase-logging.adoc#log-forwarding[forwarded] to the log processor's standard console output as well as other destinations.
+However, as of version 2.2, the Kubernetes Operator can optionally deploy and manage a third-party log processor on each Couchbase pod, which enables Couchbase Server logs to be xref:concept-couchbase-logging.adoc#log-forwarding[forwarded] to the log processor's standard console output as well as other destinations.
This guide will walk you through an example of how to configure log forwarding for a Couchbase deployment using the Couchbase-supplied log processor image based on https://fluentbit.io/[Fluent Bit^].
Examples are provided for forwarding logs to Loki and Elasticsearch, as well as how to target Azure blob storage and Amazon S3 storage for Couchbase Server xref:howto-manage-couchbase-logging.adoc#configuring-audit-logging[audit logs].
@@ -22,8 +22,8 @@ An example for configuring log redaction is also shown to demonstrate how the lo
[[before-you-begin]]
== Before You Begin
-This tutorial assumes that you have already xref:install-kubernetes.adoc[installed] the Autonomous Operator.
-The Autonomous Operator needs to be running in the same namespace where you deploy the Couchbase cluster in the <> section below.
+This tutorial assumes that you have already xref:install-kubernetes.adoc[installed] the Kubernetes Operator.
+The Kubernetes Operator needs to be running in the same namespace where you deploy the Couchbase cluster in the <> section below.
[[configure-the-couchbase-cluster]]
== Configure the Couchbase Cluster
@@ -100,7 +100,7 @@ spec:
This field normally defaults to `false`.
+
This is technically the only field that needs to be modified in order to enable log forwarding.
-The Autonomous Operator will default to pulling the Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[log processor image^] from the Docker public registry.
+The Kubernetes Operator will default to pulling the Couchbase-supplied https://hub.docker.com/r/couchbase/fluent-bit[log processor image^] from the Docker public registry.
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-logging-audit-enabled[`couchbaseclusters.spec.logging.audit.enabled`]: Setting this field to `true` enables xref:howto-manage-couchbase-logging.adoc#configuring-audit-logging[audit logging] on the Couchbase cluster.
This field normally defaults to `false`.
@@ -125,7 +125,7 @@ Run the following command to deploy it into Kubernetes:
$ kubectl apply -f couchbase-cluster-log-forwarding.yaml
----
-Note that the Autonomous Operator must already be deployed and running in the current namespace in order for the above command to succeed (refer to the <>).
+Note that the Kubernetes Operator must already be deployed and running in the current namespace in order for the above command to succeed (refer to the <>).
Next, verify that the cluster has been deployed successfully.
@@ -173,7 +173,7 @@ Now that we've successfully implemented the default configuration for processing
The _log forwarding configuration_ determines how the `logging` sidecar container processes and forwards Couchbase logs.
Since this configuration can contain sensitive information, it is stored in a Kubernetes Secret.
-When we created the Couchbase cluster in the <>, the Autonomous Operator automatically created a _default_ log forwarding configuration Secret with the name `fluent-bit-config`.
+When we created the Couchbase cluster in the <>, the Kubernetes Operator automatically created a _default_ log forwarding configuration Secret with the name `fluent-bit-config`.
We'll be modifying this Secret in order to implement our own custom configuration.
=== Allow Custom Configurations
@@ -269,7 +269,7 @@ $ kubectl get secret "fluent-bit-config" -o go-template='{{range $k,$v := .data}
=== Next Steps
Now that you've successfully implemented some basic log forwarding customizations, we recommend that you try out some of the other examples in this tutorial.
-These examples should help you get an even better idea of the capabilities of both Fluent Bit and the Autonomous Operator when it comes to processing and forwarding Couchbase logs.
+These examples should help you get an even better idea of the capabilities of both Fluent Bit and the Kubernetes Operator when it comes to processing and forwarding Couchbase logs.
== Example: Loki Stack
diff --git a/modules/ROOT/pages/tutorial-kubernetes-network-policy.adoc b/modules/ROOT/pages/tutorial-kubernetes-network-policy.adoc
index e571eb9..389848f 100644
--- a/modules/ROOT/pages/tutorial-kubernetes-network-policy.adoc
+++ b/modules/ROOT/pages/tutorial-kubernetes-network-policy.adoc
@@ -1,7 +1,7 @@
= Kubernetes Network Policies Using Deny-All Default
[abstract]
-The Autonomous Operator and Couchbase Server can be used with Kubernetes network policies although this is not officially supported currently.
+The Kubernetes Operator and Couchbase Server can be used with Kubernetes network policies, although this is not yet officially supported.
include::partial$tutorial.adoc[]
@@ -11,7 +11,7 @@ Refer to the xref:concept-kubernetes-networking.adoc#network-policies[concepts p
[IMPORTANT]
====
-Network policies should work with Couchbase Autonomous Operator deployments but are currently unsupported officially.
+Network policies should work with Couchbase Kubernetes Operator deployments, but they are not yet officially supported.
This information is provided to document a functional set up for those that need it prior to official support.
The assumption is standard Kubernetes configuration is used rather than any specific to a particular network plugin.
====
@@ -68,7 +68,7 @@ For the purposes of the example we therefore deploy the DAC to the default names
$ helm upgrade --install couchbase-dac couchbase/couchbase-operator --set install.couchbaseCluster=false,install.couchbaseOperator=false --namespace default --wait
----
-== Couchbase Autonomous Operator
+== Couchbase Kubernetes Operator
We will need to supply the Kubernetes API server details along with the DAC endpoint used by the operator to network policy rules.
We also need to supply the Kubernetes namespace to use, here we simply use `test` as the name.
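As a baseline for the deny-all approach this page describes, a standard Kubernetes `NetworkPolicy` that blocks all ingress and egress for every pod in the `test` namespace looks like the following (this is stock Kubernetes syntax, shown here as a sketch rather than a configuration taken from this page):

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: test
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress          # with no allow rules listed, all traffic is denied
----

Allow rules for the Kubernetes API server, the DAC endpoint, and intra-cluster Couchbase traffic are then layered on top of this default.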
diff --git a/modules/ROOT/pages/tutorial-prometheus.adoc b/modules/ROOT/pages/tutorial-prometheus.adoc
index fd82adf..5ed52e3 100644
--- a/modules/ROOT/pages/tutorial-prometheus.adoc
+++ b/modules/ROOT/pages/tutorial-prometheus.adoc
@@ -1,11 +1,19 @@
= Quick Start with Prometheus Monitoring
[abstract]
-Enable and setup Prometheus Monitoring for the Couchbase Autonomous Operator.
+Enable and set up Prometheus monitoring for the Couchbase Kubernetes Operator.
include::partial$tutorial.adoc[]
-This guide walks through recommended procedures for enabling and configuring Prometheus monitoring of the Couchbase Autonomous Operator.
+== Overview
+
+https://prometheus.io/[Prometheus^] is a leading open-source monitoring solution.
+As a https://www.cncf.io/announcements/2018/08/09/prometheus-graduates/[graduated project^] of the Cloud Native Computing Foundation, it has become the de facto standard for metrics collection and alert generation on cloud platforms.
+
+The Couchbase Kubernetes Operator can optionally manage a sidecar container that can provide detailed metrics to Prometheus about the health of a Couchbase cluster.
+
+Prometheus provides significant visibility into the health and operation of a Couchbase cluster.
+Metrics can be used to develop alerts for system issues, identify maintenance windows and other valuable business intelligence insights, as well as provide the basis for xref:operator::concept-couchbase-autoscaling.adoc[auto-scaling Couchbase clusters].
== Prerequisites
@@ -16,7 +24,7 @@ Clone the https://github.com/coreos/kube-prometheus[`kube-prometheus`^] reposito
$ git clone https://github.com/coreos/kube-prometheus
----
-Make sure you have a Kubernetes cluster running the Autonomous Operator with xref:howto-prometheus.adoc[monitoring enabled] and follow the Prerequisites section in the https://github.com/prometheus-operator/kube-prometheus[`kube-prometheus`^] documentation.
+Make sure you have a Kubernetes cluster running the Kubernetes Operator with xref:howto-prometheus.adoc[monitoring enabled] and follow the Prerequisites section in the https://github.com/prometheus-operator/kube-prometheus[`kube-prometheus`^] documentation.
== Manifests Setup
@@ -24,7 +32,7 @@ Currently, the `kube-prometheus` project includes a folder called `manifests` th
run the Prometheus Operator. The Prometheus Operator creates our Prometheus deployment which scrapes endpoints continuously for Prometheus metrics.
We will be creating these manifests in a xref:tutorial-prometheus.adoc#Create-the-manifests[later step].
-The Autonomous Operator, with monitoring enabled, exposes the Couchbase Prometheus metrics using Couchbase Server native support for metrics collection. Couchbase native support is available for Couchbase Server versions 7.0 or higher
+The Kubernetes Operator, with monitoring enabled, exposes Couchbase Prometheus metrics using Couchbase Server's native support for metrics collection. Native metrics support is available in Couchbase Server 7.0 and higher.
Our task is then to get Prometheus to discover and scrape these endpoints in order to monitor the overall cluster through the Prometheus UI and with custom Grafana dashboards.
In order for our Prometheus deployment to recognize and scrape Couchbase endpoints, we need to create a Couchbase-specific service monitor and a Couchbase metrics-specific service.
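As a reference point, a minimal service monitor might look like the sketch below. The selector labels and port name are assumptions for illustration; the tutorial creates the actual definitions in a later step, and yours must match the labels on the metrics service:

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: couchbase-metrics
spec:
  endpoints:
  - port: metrics     # must match the port name on the metrics Service (assumed here)
    interval: 30s
  selector:
    matchLabels:
      app: couchbase-metrics   # hypothetical label; align with your Service
----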
diff --git a/modules/ROOT/pages/tutorial-rbac-auth.adoc b/modules/ROOT/pages/tutorial-rbac-auth.adoc
index 2e13012..a46e2ed 100644
--- a/modules/ROOT/pages/tutorial-rbac-auth.adoc
+++ b/modules/ROOT/pages/tutorial-rbac-auth.adoc
@@ -2,16 +2,16 @@
= Couchbase User Authentication
[abstract]
-A tutorial for configuring Couchbase user authentication and authorization using the Autonomous Operator.
+A tutorial for configuring Couchbase user authentication and authorization using the Kubernetes Operator.
include::partial$tutorial.adoc[]
== Overview
-This tutorial describes how to use the Autonomous Operator to create authenticated users and bind them to specific roles to provide different levels of authorization.
+This tutorial describes how to use the Kubernetes Operator to create authenticated users and bind them to specific roles to provide different levels of authorization.
User authentication can be provided by Couchbase itself or an external LDAP service (such as OpenLDAP).
-The Autonomous Operator refers to <> as the `local` domain, and <> as the `external` domain.
+The Kubernetes Operator refers to <> as the `local` domain, and <> as the `external` domain.
[#couchbase-authentication]
== Couchbase Authentication
diff --git a/modules/ROOT/pages/tutorial-sync-gateway.adoc b/modules/ROOT/pages/tutorial-sync-gateway.adoc
index 170cd9a..3d49e81 100644
--- a/modules/ROOT/pages/tutorial-sync-gateway.adoc
+++ b/modules/ROOT/pages/tutorial-sync-gateway.adoc
@@ -10,14 +10,14 @@ Sync Gateway is a synchronization server that is responsible for secure data syn
It is an integral component of the xref:sync-gateway::index.adoc[Couchbase Mobile] platform.
This tutorial defines best practices for deploying Sync Gateway on Kubernetes.
-It covers basic connectivity between Sync Gateway and a Couchbase Cluster deployed by the Autonomous Operator.
+It covers basic connectivity between Sync Gateway and a Couchbase Cluster deployed by the Kubernetes Operator.
== Prerequisites
* A Couchbase Server cluster deployed and running.
While it is _recommended_ that Couchbase Server is deployed on Kubernetes, it is _not required_.
It is possible to deploy Sync Gateway on Kubernetes and connect it to a Couchbase Server cluster that is not on Kubernetes.
-However, this guide and all of its instructions assume that Couchbase Server is deployed on Kubernetes using the Autonomous Operator.
+However, this guide and all of its instructions assume that Couchbase Server is deployed on Kubernetes using the Kubernetes Operator.
[#configuring-sync-gateway]
== Configuring Sync Gateway
@@ -94,7 +94,7 @@ _Requires Sync Gateway 2.8.2 or higher; otherwise omit this parameter._
--
+
NOTE: The `server` property can technically accept connection endpoints as defined in the Sync Gateway xref:sync-gateway::configuration-properties.adoc#databases-this_db-server[documentation].
-However, many of these connection endpoints are not recommended (some not even supported) when connecting to Couchbase Server deployments that are managed by the Autonomous Operator.
+However, many of these connection endpoints are not recommended (some not even supported) when connecting to Couchbase Server deployments that are managed by the Kubernetes Operator.
In particular, there are xref:concept-couchbase-networking.adoc#sync-gateway-exposed-features-limitations[limitations when using exposed features] that affect which connection methods can be used with certain versions of Sync Gateway.
<.> `databases.cb-example.bucket` defines the `metadata.name` to connect to and use to store mobile data.
@@ -182,8 +182,8 @@ If not using client certificate authentication, secure the system further by usi
This limits the scope of operations that can be performed in the event that the Sync Gateway configuration is compromised.
The RBAC user corresponding to Sync Gateway can be manually configured as specified in the Sync Gateway xref:sync-gateway::get-started-prepare.adoc#configure-server[Getting Started Guide].
-Alternatively, you can take advantage of the xref:concept-user-rbac.adoc[Couchbase user RBAC management] capability introduced in Autonomous Operator 2.0.
-With this method, the Autonomous Operator takes care of creating the relevant Sync Gateway user and binds it to a specified role.
+Alternatively, you can take advantage of the xref:concept-user-rbac.adoc[Couchbase user RBAC management] capability introduced in Kubernetes Operator 2.0.
+With this method, the Kubernetes Operator takes care of creating the relevant Sync Gateway user and binds it to a specified role.
The following subsections assume that you will be using this feature.
==== Enabling RBAC Management
@@ -198,7 +198,7 @@ security:
managed: true
----
-NOTE: Enabling RBAC management by the Autonomous Operator will remove any RBAC users that were manually created on the cluster.
+NOTE: Enabling RBAC management by the Kubernetes Operator will remove any RBAC users that were manually created on the cluster.
Therefore, you will need to specify the relevant users and role bindings for those RBAC users if you want to continue using those identities after enabling RBAC management.
==== Creating a Sync Gateway RBAC User
@@ -230,7 +230,7 @@ This creates a sync gateway user with the specified password.
==== Role Binding of Sync Gateway User
Next, define the `sync-gateway` user and bind it to a group that grants it the "application access" role as defined by Couchbase Server.
-The Autonomous Operator will create the required user and group on any Couchbase cluster that selects that user.
+The Kubernetes Operator will create the required user and group on any Couchbase cluster that selects that user.
[source,yaml]
----
@@ -271,7 +271,7 @@ This is the recommended method of deployment as defined in the xref:concept-labe
<.> The `CouchbaseUser` uses the `local` authentication domain, meaning that a password will be locally installed on the Couchbase cluster for authentication.
-<.> The `CouchbaseUser` references the secret we previously created so that the Autonomous Operator can associate the `sync-gateway` user with its password.
+<.> The `CouchbaseUser` references the secret we previously created so that the Kubernetes Operator can associate the `sync-gateway` user with its password.
<.> The `CouchbaseGroup` that the `sync-gateway` user will be a member of needs the `mobile_sync_gateway` role.
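The overall shape of these resources is sketched below. Field names follow the 2.x custom resource definitions, but this is an illustrative outline rather than the tutorial's exact manifest; verify field names and label selection against your Operator version:

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseUser
metadata:
  name: sync-gateway
spec:
  authDomain: local
  # References the previously created password secret.
  authSecret: sync-gateway
---
apiVersion: couchbase.com/v2
kind: CouchbaseGroup
metadata:
  name: sync-gateway-group
spec:
  roles:
  - name: mobile_sync_gateway
    bucket: "*"
---
apiVersion: couchbase.com/v2
kind: CouchbaseRoleBinding
metadata:
  name: sync-gateway-binding
spec:
  subjects:
  - kind: CouchbaseUser
    name: sync-gateway
  roleRef:
    kind: CouchbaseGroup
    name: sync-gateway-group
----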
@@ -325,7 +325,7 @@ $ kubectl apply -f sync-gateway.yaml
[#deploying-sync-gateway]
== Deploying Sync Gateway
-Sync Gateway is a simple stateless application, and therefore does not require any application-specific controller like the Autonomous Operator.
+Sync Gateway is a simple stateless application, and therefore does not require any application-specific controller like the Kubernetes Operator.
The following is a typical `Deployment` resource configuration for Sync Gateway:
[source,yaml,subs="attributes,verbatim"]
@@ -435,13 +435,13 @@ You can now move on to xref:tutorial-sync-gateway-clients.adoc[connecting Couchb
== Templates
-Template files are provided in the Autonomous Operator binary distribution available at https://www.couchbase.com/downloads[couchbase.com/downloads^].
+Template files are provided in the Kubernetes Operator binary distribution available at https://www.couchbase.com/downloads[couchbase.com/downloads^].
These template files are provided as *evaluation examples* -- they should be modified to suit production deployments.
* `sync-gateway.yaml`: Sample Deployment Controller for Sync Gateway using RBAC for server connectivity.
This template creates a sync gateway RBAC user per procedures outlined in the previous section <>.
-* `couchbase-custer.yaml`: Sample `CouchbaseCluster` custom resource deployment to be used with the Autonomous Operator.
+* `couchbase-custer.yaml`: Sample `CouchbaseCluster` custom resource deployment to be used with the Kubernetes Operator.
+
NOTE: The `couchbase-custer.yaml` template has RBAC management set to `false` by default.
So before you deploy the Sync Gateway cluster using the `sync-gateway.yaml` template, you must enable RBAC management as specified in the previous section <>.
diff --git a/modules/ROOT/pages/tutorial-tls.adoc b/modules/ROOT/pages/tutorial-tls.adoc
index 204735c..f5a5b08 100644
--- a/modules/ROOT/pages/tutorial-tls.adoc
+++ b/modules/ROOT/pages/tutorial-tls.adoc
@@ -122,7 +122,7 @@ Enter pass phrase for tls.key:
=== Private Key Formatting (Legacy)
Due to an https://issues.couchbase.com/browse/MB-24404[issue^] with Couchbase Server's private key handling, server keys may need to be PKCS#1 formatted.
-This was addressed in Autonomous Operator 2.2 (and this tutorial) with the implementation of xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-networking-tls-secretsource[`couchbaseclusters.spec.networking.tls.secretSource`].
+This was addressed in Kubernetes Operator 2.2 (and this tutorial) with the implementation of xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-networking-tls-secretsource[`couchbaseclusters.spec.networking.tls.secretSource`].
However, if you are using legacy TLS configuration with xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-networking-tls-static[`couchbaseclusters.spec.networking.tls.static`], you will need to format the server keys in PKCS#1:
diff --git a/modules/ROOT/pages/tutorial-velero-backup.adoc b/modules/ROOT/pages/tutorial-velero-backup.adoc
index 012b601..50f470d 100644
--- a/modules/ROOT/pages/tutorial-velero-backup.adoc
+++ b/modules/ROOT/pages/tutorial-velero-backup.adoc
@@ -25,7 +25,7 @@ This tutorial was tested using https://velero.io/[Velero^] version 1.2.0, using
It is important to correctly configure https://velero.io/[Velero^] so that it has the ability to perform volume snapshots.
A list of supported providers is available in the https://velero.io/docs/v1.2.0/supported-providers/[Velero support matrix^].
-== Installing Couchbase Autonomous Operator
+== Installing Couchbase Kubernetes Operator
For this tutorial we will install the dynamic admission controller (DAC) in the default namespace:
diff --git a/modules/ROOT/pages/tutorial-volume-expansion.adoc b/modules/ROOT/pages/tutorial-volume-expansion.adoc
index 4a938f1..5fc5dd6 100644
--- a/modules/ROOT/pages/tutorial-volume-expansion.adoc
+++ b/modules/ROOT/pages/tutorial-volume-expansion.adoc
@@ -2,14 +2,14 @@
include::partial$constants.adoc[]
[abstract]
-Learn how to use the Autonomous Operator to perform online persistent volume expansion for Couchbase Server deployments in Kubernetes.
+Learn how to use the Kubernetes Operator to perform online persistent volume expansion for Couchbase Server deployments in Kubernetes.
include::partial$tutorial.adoc[]
== Introduction
-In this tutorial you'll learn how to use the Autonomous Operator to expand persistent volumes that are already in use by Couchbase clusters without needing to perform an upgrade on the underlying storage subsystem.
-The Autonomous Operator performs storage upgrades by working in conjunction with https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[Kubernetes Persistent Volume Expansion^] to claim additional storage for running pods without any downtime.
+In this tutorial you'll learn how to use the Kubernetes Operator to expand persistent volumes that are already in use by Couchbase clusters without needing to perform an upgrade on the underlying storage subsystem.
+The Kubernetes Operator performs storage upgrades by working in conjunction with https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims[Kubernetes Persistent Volume Expansion^] to claim additional storage for running pods without any downtime.
[[before-you-begin]]
== Before You Begin
@@ -26,7 +26,7 @@ Refer to xref:howto-persistent-volumes.adoc#online-volume-expansion[Online Volum
** This tutorial references the `azurefile` storage class which is provided by the Azure Kubernetes Service (AKS).
You'll need to use the name of your particular storage class if installing in a non-AKS Kubernetes environment.
-* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Autonomous Operator, the Couchbase cluster, etc.)
+* You'll need https://helm.sh/docs/intro/install/[Helm version 3.1] or higher for installing the necessary dependencies (e.g. the Kubernetes Operator, the Couchbase cluster, etc.)
** Once you have Helm installed, you'll need to add the Couchbase chart repository:
+
@@ -95,7 +95,7 @@ EOF
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-enableonlinevolumeexpansion[`couchbaseclusters.spec.enableOnlineVolumeExpansion`]: Setting this field to `true` enables online expansion of persistent volumes.
-<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-volumemounts[`couchbaseclusters.spec.servers.volumeMounts`]: With this configuration we're telling the Autonomous Operator to provision persistent volume claims for the `data` mount path according to the `data-expanding` claim template.
+<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-volumemounts[`couchbaseclusters.spec.servers.volumeMounts`]: With this configuration we're telling the Kubernetes Operator to provision persistent volume claims for the `data` mount path according to the `data-expanding` claim template.
<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-volumeclaimtemplates[`couchbaseclusters.spec.volumeClaimTemplates`]: This configuration defines the `data-expanding` claim template.
+
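Pulling these fields together, the relevant portion of the `CouchbaseCluster` spec might look like the following sketch. The server class name, size, and storage request are illustrative placeholders; only `enableOnlineVolumeExpansion`, the claim-template name, and the `azurefile` storage class come from this tutorial:

[source,yaml]
----
spec:
  enableOnlineVolumeExpansion: true
  servers:
  - name: data              # hypothetical server class name
    size: 3
    services:
    - data
    volumeMounts:
      default: data-expanding
  volumeClaimTemplates:
  - metadata:
      name: data-expanding
    spec:
      storageClassName: azurefile   # use your own storage class outside AKS
      resources:
        requests:
          storage: 1Gi              # illustrative initial size
----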
@@ -116,8 +116,8 @@ $ helm install -f pvc_resize_values.yaml expand couchbase/couchbase-operator
[NOTE]
====
-The Couchbase chart deploys the Autonomous Operator by default.
-If you already have the Autonomous Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
+The Couchbase chart deploys the Kubernetes Operator by default.
+If you already have the Kubernetes Operator deployed in the current namespace, then you'll need to specify additional overrides during chart installation so that only the Couchbase cluster is deployed:
[source,console]
----
@@ -198,7 +198,7 @@ Normal ExpandVolumeSucceeded 6s Successfully expanded volume scale-c
Running the commands in this section will uninstall all of the resources that were created during the course of this tutorial.
-Uninstall both the Autonomous Operator and Couchbase cluster by deleting the Helm release:
+Uninstall both the Kubernetes Operator and Couchbase cluster by deleting the Helm release:
[source,console]
----
diff --git a/modules/ROOT/pages/whats-new.adoc b/modules/ROOT/pages/whats-new.adoc
index ba545d1..9431421 100644
--- a/modules/ROOT/pages/whats-new.adoc
+++ b/modules/ROOT/pages/whats-new.adoc
@@ -1,11 +1,11 @@
= What's New?
include::partial$constants.adoc[]
-Autonomous Operator {operator-version-minor} introduces a preview of our new Cluster Migration functionality well as a number of other improvements and minor fixes.
+Kubernetes Operator {operator-version-minor} introduces a preview of our new Cluster Migration functionality, as well as a number of other improvements and minor fixes.
== Cluster Migration
-Released as a preview here, with a full GA planned for Autonomous Operator 2.8.1, Cluster Migration allows you to transfer a currently-unmanaged Couchbase Server cluster over to being managed by the Operator, with zero downtime.
+Released as a preview here, with a full GA planned for Kubernetes Operator 2.8.1, Cluster Migration allows you to transfer a currently-unmanaged Couchbase Server cluster over to being managed by the Operator, with zero downtime.
See xref:concept-migration.adoc[Couchbase Cluster Migration] for more details.