diff --git a/docs/self-managed/reference-architecture/manual/manual.md b/docs/self-managed/reference-architecture/manual/manual.md index f82e348be42..794e0c31ad3 100644 --- a/docs/self-managed/reference-architecture/manual/manual.md +++ b/docs/self-managed/reference-architecture/manual/manual.md @@ -15,7 +15,7 @@ This method of deployment requires a solid understanding of infrastructure, netw ## Key features -- **Single application JAR**: Starting from Camunda 8.7, all core components (Zeebe, Tasklist, Operate, Optimize, and Identity) are bundled into a single JAR file. This simplifies deployment by reducing the number of artifacts to manage. +- **Single application JAR**: Starting from Camunda 8.8, all core components (Zeebe, Tasklist, Operate, Optimize, and Identity) are bundled into a single JAR file. This simplifies deployment by reducing the number of artifacts to manage. - **Full control**: Users are responsible for all aspects of deployment, including installation, configuration, scaling, and maintenance. This offers maximum flexibility for custom environments. Other deployment options, such as containerized deployments or managed services, might offer more convenience and automation. However, VM-based deployment gives you the flexibility to tailor the deployment to your exact needs, which can be beneficial for regulated or highly customized environments. diff --git a/versioned_docs/version-8.7/self-managed/reference-architecture/img/orchestration-cluster.jpg b/versioned_docs/version-8.7/self-managed/reference-architecture/img/orchestration-cluster.jpg index 276984d34ae..8d5e4c4774b 100644 Binary files a/versioned_docs/version-8.7/self-managed/reference-architecture/img/orchestration-cluster.jpg and b/versioned_docs/version-8.7/self-managed/reference-architecture/img/orchestration-cluster.jpg differ diff --git a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-ha.jpg b/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-ha.jpg deleted file mode 100644 index 5d06e9a1146..00000000000 Binary files a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-ha.jpg and /dev/null differ diff --git a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-single.jpg b/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-single.jpg deleted file mode 100644 index 2d54eb33b3f..00000000000 Binary files a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/img/manual-single.jpg and /dev/null differ diff --git a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/manual.md b/versioned_docs/version-8.7/self-managed/reference-architecture/manual/manual.md deleted file mode 100644 index f82e348be42..00000000000 --- a/versioned_docs/version-8.7/self-managed/reference-architecture/manual/manual.md +++ /dev/null @@ -1,123 +0,0 @@ ---- -id: manual -title: "Manual JAR deployment overview" -sidebar_label: Manual JAR -description: "Camunda 8 manual (Java) deployment reference architecture" ---- - - - -This reference architecture provides guidance on deploying Camunda 8 Self-Managed as a standalone Java application. This deployment method is ideal for users who prefer manual deployment on bare metal servers or virtual machines (VMs), offering full control over the environment and configuration. It is particularly suited for scenarios with specific infrastructure requirements or highly customized setups.
- -:::note -This method of deployment requires a solid understanding of infrastructure, networking, and application management. Consider evaluating your [deployment platform options](../reference-architecture.md) based on your familiarity and needs. If you prefer a simpler and managed solution, [Camunda 8 SaaS](https://camunda.com/platform/) can significantly reduce maintenance efforts, allowing you to focus on your core business needs. -::: - -## Key features - -- **Single application JAR**: Starting from Camunda 8.7, all core components (Zeebe, Tasklist, Operate, Optimize, and Identity) are bundled into a single JAR file. This simplifies deployment by reducing the number of artifacts to manage. -- **Full control**: Users are responsible for all aspects of deployment, including installation, configuration, scaling, and maintenance. This offers maximum flexibility for custom environments. - -Other deployment options, such as containerized deployments or managed services, might offer more convenience and automation. However, VM-based deployment gives you the flexibility to tailor the deployment to your exact needs, which can be beneficial for regulated or highly customized environments. - -For documentation on the separation between the orchestration cluster, Web Modeler, and Console, refer to the [reference architecture overview](/self-managed/reference-architecture/reference-architecture.md#orchestration-cluster-vs-web-modeler-and-console). - -## Reference implementations - -This section includes deployment reference architectures for manual setups: - -- [Amazon EC2 deployment](/self-managed/setup/deploy/amazon/aws-ec2.md) - a standard production setup with support for high availability. - -## Considerations - -- This overview page focuses on deploying the [orchestration cluster](/self-managed/reference-architecture/reference-architecture.md#orchestration-cluster), the single JAR comprising Identity, Operate, Optimize, Tasklist, and Zeebe, as well as the Connectors runtime. Web Modeler and Console deployments are not included. -- General guidance and examples focus on **Unix** users, but can be adapted by Windows users with options like [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) or the included `batch` files. -- The Optimize importer is not highly available and must only run once within the whole setup. - -## Architecture - -![Single JAR](./img/manual-single.jpg) - -The diagram above illustrates a single-machine deployment using the single JAR package. While simple and effective for lightweight setups, scaling to multiple machines requires careful planning. - -Compared to the generalized architecture depicted in the [reference architecture](/self-managed/reference-architecture/reference-architecture.md#architecture), the `Optimize importer` can be enabled as part of the single JAR. - -### High Availability (HA) - -:::caution Non-HA Optimize importer -When scaling from a single machine to multiple machines, ensure that the `Optimize importer` is enabled on only one machine and disabled on the others. Enabling it on multiple machines will cause data inconsistencies. This limitation is known and will be addressed in future updates. -::: - -![HA JAR](./img/manual-ha.jpg) - -For high availability, a minimum of three machines is recommended to ensure fault tolerance and enable leader election in case of failures. Refer to the [clustering documentation](/components/zeebe/technical-concepts/clustering.md) to learn more about the raft protocol and clustering concepts.
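To sketch what this looks like in practice, the additional machines can be started with the importer and archiver modules switched off, while the first machine keeps the defaults. The variable names below follow the pattern documented for Operate and Tasklist, but treat them as placeholder assumptions and verify them (including the equivalent Optimize importer toggle) against the configuration reference for your version.

```sh
# Illustrative sketch for the second and third machines only:
# disable the (non-HA) importer and archiver modules before starting the JAR.
# Verify these names against your version's configuration reference.
export CAMUNDA_OPERATE_IMPORTERENABLED=false
export CAMUNDA_OPERATE_ARCHIVERENABLED=false
export CAMUNDA_TASKLIST_IMPORTERENABLED=false
export CAMUNDA_TASKLIST_ARCHIVERENABLED=false
```

The first machine keeps the default values, leaving its importer and archiver modules active.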
- -### Components - -The orchestration core is packaged as a single JAR file and includes the following components: - -- **Zeebe** -- **Operate** -- **Tasklist** -- **Optimize** -- **Identity** - -The core facilitates: - -1. **gRPC communication**: For client workers. -2. **HTTP endpoints**: Used by the REST API and Web UI. - -Both types of endpoints can be routed through a load balancer to maintain availability, ensuring that the system remains accessible even if a machine becomes unavailable. While using a load balancer is optional, it is recommended for enhanced availability and security. Alternatively, you can expose static machines, ports, and IPs directly. However, direct exposure is generally discouraged due to security concerns. - -Connectors expose additional HTTP(S) endpoints for handling incoming webhooks, which can also be routed through the same HTTP load balancer. - -The orchestration components rely on **Elasticsearch** or **OpenSearch** as their data store. - -Components within the orchestration core communicate seamlessly, particularly: - -- **Zeebe brokers** exchange data over gRPC endpoints for efficient inter-broker communication. - -## Requirements - -Before implementing a reference architecture, review the requirements and guidance outlined below. We differentiate between `Infrastructure` and `Application` requirements. - -### Infrastructure - -The following are suggestions for a minimum viable setup; sizing heavily depends on your use cases and usage. It is recommended to read the documentation on [sizing your environment](/components/best-practices/architecture/sizing-your-environment.md) and run benchmarks to confirm your actual needs. - -#### Minimum Requirements Per Host - -- Modern CPU: 4 cores -- Memory: 8 GB RAM -- Storage: 32 GB SSD (**1,000** IOPS recommended; avoid burstable disk types) - -Suggested instance types from cloud providers: - -- AWS: [m7i](https://aws.amazon.com/ec2/instance-types/m7i/) series -- GCP: [n1](https://cloud.google.com/compute/docs/general-purpose-machines#n1_machines) series - -#### Networking - -- Stable and high-speed network connection -- Configured firewall rules to allow necessary traffic: - - **8080**: Web UI / REST endpoint. - - **9090**: Connector port. - - **9600**: Metrics endpoint. - - **26500**: gRPC endpoint. - - **26501**: Gateway-to-broker communication. - - **26502**: Inter-broker communication. -- Load balancer for distributing traffic (if required) - -:::info Customizing ports -These ports are not definitive and can be overridden; consult the documentation of each component to learn how, in case you want to use a different port. For example, `Connectors` and the `Web UIs` both default to port 8080, which is why we moved Connectors to a different port. -::: - -### Application - -- Java Virtual Machine, see [supported environments](/reference/supported-environments.md) for version details. - -### Database - -- Elasticsearch / OpenSearch, see [supported environments](/reference/supported-environments.md) for version details. - -We recommend using an externally managed offering, as this guide does not go into detail on how to manage and maintain your database.
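Before starting the application, a quick sanity check of both requirements can save debugging time. The host and port below are placeholders for your own database endpoint.

```sh
# Check the installed Java runtime against the supported environments page
java -version

# Check that the data store is reachable and healthy; replace the placeholder endpoint.
# _cluster/health is a standard Elasticsearch/OpenSearch API.
curl -s https://your-search-endpoint:9200/_cluster/health
```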
diff --git a/versioned_docs/version-8.7/self-managed/reference-architecture/reference-architecture.md b/versioned_docs/version-8.7/self-managed/reference-architecture/reference-architecture.md index da8b632d49c..d0be1a42855 100644 --- a/versioned_docs/version-8.7/self-managed/reference-architecture/reference-architecture.md +++ b/versioned_docs/version-8.7/self-managed/reference-architecture/reference-architecture.md @@ -67,7 +67,7 @@ Additionally, Web Modeler and Console require the following: - [Identity](/self-managed/identity/what-is-identity.md): A service for managing user authentication and authorization. -Unlike the orchestration cluster, Web Modeler and Console run a separate and dedicated Identity deployment. For production environments, using an external [identity provider](/self-managed/setup/guides/connect-to-an-oidc-provider.md) is recommended. +The Identity deployment is typically shared between the orchestration cluster and Web Modeler and Console. For production environments, using an external [identity provider](/self-managed/setup/guides/connect-to-an-oidc-provider.md) is recommended. ### Databases @@ -86,7 +86,7 @@ By decoupling databases from Camunda, you gain greater control and customization High availability (HA) ensures that a system remains operational and accessible even in the event of component failures. While all components are equipped to be run in a highly available manner, some components need extra considerations when run in HA mode. - +For Operate, Optimize, and Tasklist, which include an importer and an archiver module, note that these modules are not highly available. When scaling, ensure that these modules are disabled for each additional instance, effectively allowing only the Web UI to be scaled. While high availability is one part of the increased fault tolerance and resilience, you should also consider regional or zonal placement of your workloads. @@ -97,7 +97,7 @@ If running a single instance is preferred, make sure to implement [regular backu ## Available reference architectures :::note Documentation Update in Progress -This is a work in progress as the existing documentation is updated to provide better general guidance on the topic. The Kubernetes and Docker documentation may point to older guides. +This is a work in progress as the existing documentation is updated to provide better general guidance on the topic. The Docker and manual documentation may point to older guides. ::: Choosing the right reference architecture depends on various factors such as the organization's goals, existing infrastructure, and specific requirements. The following guides are available to help choose and guide deployments: @@ -135,9 +135,7 @@ For organizations that prefer traditional infrastructure, reference architecture - Applicable for high availability but requires more detailed planning. - Best for teams with expertise in managing physical servers or virtual machines. -For more information and guides, see the reference for [manual setups](./manual/manual.md). - - +For more information and guides, see the reference for [manual setups](/self-managed/setup/deploy/local/manual.md).
### Local development diff --git a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/aws-ec2.md b/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/aws-ec2.md deleted file mode 100644 index e84d23ee40f..00000000000 --- a/versioned_docs/version-8.7/self-managed/setup/deploy/amazon/aws-ec2.md +++ /dev/null @@ -1,344 +0,0 @@ ---- -id: aws-ec2 -title: "Amazon EC2" -description: "Learn how to install Camunda 8 on AWS EC2 instances." ---- - -This guide provides a detailed walkthrough for installing the Camunda 8 single JAR on AWS EC2 instances. It focuses on the managed services that AWS offers as part of its cloud platform. Finally, you will verify that the connection to your Self-Managed Camunda 8 environment is working. - -This guide focuses on setting up the [orchestration cluster](/self-managed/reference-architecture/reference-architecture.md#orchestration-cluster-vs-web-modeler-and-console) for Camunda 8. The Web Modeler and Console are not covered in this manual deployment approach. These components are supported on Kubernetes and should be [deployed using Kubernetes](/self-managed/setup/install.md#install-web-modeler). - -:::note Using other cloud providers -This guide is built around the available tools and services that AWS offers, but is not limited to AWS. The scripts and ideas included can be adjusted for any other cloud provider and use case. - -When using this guide with a different cloud provider, note that you will be responsible for configuring and maintaining the resulting infrastructure. Our support is limited to questions related to the guide itself, not to the specific tools and services of the chosen cloud provider. -::: - -:::danger Cost management -Following this guide will incur costs on your cloud provider account, namely for the EC2 instances and OpenSearch. More information can be found on AWS and their [pricing calculator](https://calculator.aws/#/), as the total cost varies per region. - -To get an estimate, you can refer to this [example calculation](https://calculator.aws/#/estimate?id=8ce855e2d02d182c4910ec8b4ea2dbf42ea5fd1d), which can be further optimized to suit your specific use cases. -::: - -## Architecture - -The architecture as depicted focuses on a standard deployment consisting of a three-node setup distributed over three [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) within an AWS region, as well as an OpenSearch domain spanning the same availability zones. The focus is on a highly available setup and redundancy in case a zone should fail. - - - - -_Infrastructure diagram for a three-node EC2 architecture (click on the image to open the PDF version)_ -[![AWS EC2 Architecture](./assets/aws-ec2-arch.jpg)](./assets/aws-ec2-arch.pdf) - -The setup consists of: - -- a [Virtual Private Cloud](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) (VPC), a logically isolated virtual network, containing: - - a [Private Subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html), which does not have direct access to the internet and cannot be easily reached. - - three [EC2](https://aws.amazon.com/ec2/) instances using Ubuntu, one within each availability zone, which will run Camunda 8. - - a [managed OpenSearch](https://aws.amazon.com/what-is/opensearch/) cluster stretched over the three availability zones.
- - a [Public Subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html), which allows direct access to the Internet via an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html). - - (optional) an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) (ALB), used to expose the Web UIs such as Operate, Tasklist, and Connectors, as well as the REST API, to the outside world. This is done using sticky sessions, as requests are generally distributed round-robin across all EC2 instances. - - (optional) a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) (NLB), used to expose the gRPC endpoint of the Zeebe Gateway, in case external applications require it. - - (optional) a [Bastion Host](https://en.wikipedia.org/wiki/Bastion_host) to allow access to the private EC2 instances, since they are not publicly exposed. - - Alternatively, utilize the [AWS Client VPN](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html) instead to reach the private subnet within the VPC. This requires extra work and certificates, but can be set up by following the [getting started tutorial by AWS](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html). - - a NAT Gateway that allows the private EC2 instances to reach the internet to download and update software packages. This cannot be used to access the EC2 instances. -- [Security Groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) to handle traffic flow to the VMs. -- an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) to allow traffic between the VPC and the Internet. - -Both types of subnets are distributed over three availability zones of a single AWS region, allowing for a highly available setup. - -:::note Single Deployment -Alternatively, the same setup can run with a single AWS EC2 instance, but be aware that in case of a zone failure, the whole setup would be unreachable. -::: - -## Requirements - -- An AWS account to create any resources within AWS. - - At a high level, permissions are required on the **ec2**, **iam**, **elasticloadbalancing**, **kms**, **logs**, and **es** level. - - For a more fine-grained view of the permissions, check this [example policy](https://github.com/camunda/camunda-deployment-references/blob/main/aws/ec2/example/policy.json). -- Terraform (1.7+) -- Unix-based operating system (OS) with `ssh` and `sftp` - - Windows may be used with [Cygwin](https://www.cygwin.com/) or [Windows WSL](https://learn.microsoft.com/en-us/windows/wsl/install) but has not been tested - -### Considerations - -- The Optimize importer is not highly available and must only run once within the whole setup. - -### Outcome - -The outcome is a fully working Camunda orchestration cluster running in a high availability setup using AWS EC2 and utilizing a managed OpenSearch domain. -The EC2 instances come with an extra disk dedicated to Camunda to ensure that its data is separated from the operating system. - -## 1. Create the required infrastructure - -:::note Terraform infrastructure example -We do not recommend consuming the Terraform example below as a module, as we do not guarantee compatibility. -Therefore, we recommend extending or reusing some elements of the Terraform example to ensure compatibility with your environments.
-::: - -### Download the reference architecture GitHub repository - -The provided reference architecture repository allows you to directly reuse and extend the existing Terraform example base. This sample implementation is flexible to extend to your own needs without the potential limitations of a Terraform module. - -```sh -wget https://github.com/camunda/camunda-deployment-references/archive/refs/heads/main.zip -``` - -### Update the configuration files - -1. Navigate to the new directory: - -```sh -cd camunda-deployment-references-main/aws/ec2/terraform -``` - -2. Edit the `variables.tf` file to customize the settings, such as the prefix for resource names and CIDR blocks: - -```hcl -variable "prefix" { - default = "example" -} - -variable "cidr_blocks" { - default = "10.0.1.0/24" -} -``` - -3. In `config.tf`, configure a new Terraform backend by updating `backend "local"` to [AWS 3](https://developer.hashicorp.com/terraform/language/backend/s3) (or any other non-`local` backend that fits your organization). - -:::note -`local` is meant for testing and development purposes. The state is saved locally, and does not allow to easily share it with colleagues. More information on alternatives can be found in the [Terraform documentation](https://developer.hashicorp.com/terraform/language/backend). -::: - -### Configure the Terraform AWS provider - -1. Add the [Terraform AWS provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) in the `config.tf`: - -```hcl -provider "aws" {} -``` - -This can be done via a simple script or manually: - -```sh -echo 'provider "aws" {}' >> config.tf -``` - -:::note -This is a current technical limitation, as the same files are used for testing. Terraform does not allow defining the provider twice. -::: - -1. Configure authentication to allow the [AWS Terraform provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different authentication methods. - -There are several ways to authenticate the AWS provider: - - - **Testing/development**: Use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to configure access. Terraform will automatically default to AWS CLI configuration when present. - - **CI/CD**: Set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, which can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). - - **Enterprise grade security**: Use an [AWS IAM role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#assuming-an-iam-role). - -Ensure you have set the `AWS_REGION` either as environment variable or in the Terraform AWS provider to deploy the infrastructure in your desired region. AWS resources are region bound on creation. - -:::note Secret management -We strongly recommend managing sensitive information using a secure secrets management solution like HashiCorp Vault. For details on how to inject secrets directly into Terraform via Vault, see the [Terraform Vault secrets injection guide](https://developer.hashicorp.com/terraform/tutorials/secrets/secrets-vault). -::: - -### Initialize and deploy Terraform - -1. Initialize the Terraform working directory. This step downloads the necessary provider plugins: - -```sh -terraform init -``` - -1. 
Plan the configuration files: - -```sh -terraform plan -out infrastructure.plan # describe what will be created -``` - -3. After reviewing the plan, confirm and apply the changes: - -```sh -terraform apply infrastructure.plan # apply the creation -``` - -The execution takes roughly 30 minutes. Around 25 minutes of that is solely for the creation of the managed, highly available OpenSearch cluster. - -4. After the infrastructure is created, access the outputs defined in `outputs.tf` using `terraform output`. - -For example, to retrieve the OpenSearch endpoint: - -```sh -terraform output aws_opensearch_domain -``` - -### Connect to remote machines via Bastion host (optional) - -The EC2 instances are not public and have to be reached via a Bastion host. Alternatively, utilize the [AWS VPN Client](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html) to connect securely to a private VPC. This step is not described, as the setup requires specific manual user interaction. - -```sh -export BASTION_HOST=$(terraform output -raw bastion_ip) -# retrieves the first IP from the camunda_ips array -export CAMUNDA_IP=$(terraform output -json camunda_ips | jq -r '.[0]') - -ssh -J admin@${BASTION_HOST} admin@${CAMUNDA_IP} -``` - -## 2. Deploy Camunda 8 - -### Configure and run the installation script - -1. Navigate to the script directory: - -```sh -cd camunda-deployment-references-main/aws/ec2/scripts -``` - -The script directory contains bash scripts that can be used to install and configure Camunda 8. - -2. Configure any script features using the following environment variables: - - - `CLOUDWATCH_ENABLED`: The default is `false`. If set to `true`, the script installs the CloudWatch agent on each EC2 instance and exports Camunda logs and Prometheus metrics to AWS CloudWatch. - - `SECURITY`: The default is `false`. If set to `true`, the script uses self-signed certificates to secure cluster communication, based on the procedure described in the [documentation](/self-managed/zeebe-deployment/security/secure-cluster-communication.md). This requires a manual prerequisite step, as described below in step 4. - -3. Configure any variables in the `camunda-install.sh` script to overwrite the defaults for the Camunda and Java versions: - - - `OPENJDK_VERSION`: The Temurin Java version. - - `CAMUNDA_VERSION`: The Camunda 8 version. - - `CAMUNDA_CONNECTORS_VERSION`: The Camunda 8 connectors version. - - :::note - The above variables must be set in `camunda-install.sh`. They cannot be set as environment variables. - ::: - -4. Execute the `SECURITY` script (optional): - -If `SECURITY` was enabled in step 2, execute the `generate-self-signed-cert-authority.sh` script to create a certificate authority. - -This certificate should be saved somewhere securely, as it will be required to upgrade or change configurations in an automated way. If the certificate is lost, recreate the certificate authority via the script, as well as all manually created client certificates. - -:::note Self-signed certificates for testing -Self-signed certificates are intended for development and testing purposes. Check the [documentation](/self-managed/zeebe-deployment/security/secure-cluster-communication.md) on secure cluster communication to learn more about PEM certificates. -::: - -5. Execute the `all-in-one-install.sh` script. - -This script installs all required dependencies. Additionally, it configures Camunda 8 to run in a highly available setup by using a managed OpenSearch instance.
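For example, a typical invocation of the installation scripts might look like the following sketch. The feature flags shown are the optional ones described above; enabling them is an illustrative choice, not a requirement.

```sh
cd camunda-deployment-references-main/aws/ec2/scripts

# Optional features, both default to false
export CLOUDWATCH_ENABLED=true # export logs and Prometheus metrics to CloudWatch
export SECURITY=true           # secure cluster communication with self-signed certificates

# Prerequisite when SECURITY=true: create the certificate authority first
./generate-self-signed-cert-authority.sh

# Install and configure Camunda 8 on all EC2 instances
./all-in-one-install.sh
```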
- -The script will pull all required IPs and other information from the Terraform state via Terraform outputs. - -During the first installation, you will be asked to confirm the connection to each EC2 instance by typing `yes`. - -### Connect and use Camunda 8 - -The Application Load Balancer (ALB) and the Network Load Balancer (NLB) can be accessed via the following Terraform outputs: - -- `terraform output alb_endpoint`: Access Operate (or the Connectors instance on port `9090`). The ALB is designed for handling Web UIs, such as Operate, Tasklist, Optimize, and Connectors. -- `terraform output nlb_endpoint`: Access the gRPC endpoint of the Zeebe Gateway, which the NLB is intended to manage. This is due to the difference in protocols, with the ALB focusing on HTTP and the NLB on TCP. - -The two endpoints above use the publicly assigned hostname of AWS. Add your domain via CNAME records or use [Route53](https://aws.amazon.com/route53/) to map to the load balancers, which also allows you to easily enable SSL. This will require extra work in the Terraform blueprint, as it listens on HTTP by default. - -Alternatively, if you have decided not to expose your environment, you can use the jump host to access relevant services on your local machine via port-forwarding. - -For an enterprise-grade solution, you can utilize the [AWS Client VPN](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html) instead to reach the private subnet within the VPC. The setup requires extra work and certificates, described in the [getting started tutorial by AWS](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/cvpn-getting-started.html). - -The following can be executed from within the Terraform folder to bind the remote ports to your local machine: - -```sh -export BASTION_HOST=$(terraform output -raw bastion_ip) -# retrieves the first IP from the camunda_ips array -export CAMUNDA_IP=$(terraform output -json camunda_ips | jq -r '.[0]') - -# 26500 - gRPC; 8080 - WebUI; 9090 - Connectors -ssh -L 26500:${CAMUNDA_IP}:26500 -L 8080:${CAMUNDA_IP}:8080 -L 9090:${CAMUNDA_IP}:9090 admin@${BASTION_HOST} -``` - -### Turn off bastion host (optional) - -If you used the [bastion host](#connect-to-remote-machines-via-bastion-host-optional) for access, it can be turned off when no longer needed for direct access to the EC2 instances. - -To turn off the bastion host, set the `enable_jump_host` variable to `false` in the `variables.tf` file, and reapply Terraform. - -## 3. Verify connectivity to Camunda 8 - -Using Terraform, you can obtain the HTTP endpoint of the Application Load Balancer and interact with Camunda through the [REST API](/apis-tools/camunda-api-rest/camunda-api-rest-overview.md). - -1. Navigate to the Terraform folder: - -```sh -cd camunda-deployment-references-main/aws/ec2/terraform -``` - -2. Retrieve the Application Load Balancer output: - -```sh -terraform output -raw alb_endpoint -``` - -3. Use the REST API to communicate with Camunda: - -Follow the example in the [REST API documentation](/apis-tools/camunda-api-rest/camunda-api-rest-authentication.md) to authenticate and retrieve the cluster topology. - -## Manage Camunda 8 - -### Upgrade Camunda 8 - -:::info Direct upgrade not supported -Upgrading directly from a Camunda 8.6 release to 8.7 is not supported and cannot be performed. -::: - -To update to a new patch release, the recommended approach is as follows: - -1. Remove the `jars` folder: This step ensures that outdated dependencies from previous versions are completely removed. -2.
Overwrite remaining files: Replace the existing files with those from the downloaded patch release package. -3. Restart Camunda 8. - -The update process can be automated using the `all-in-one-install.sh` script, which performs the following steps: - -- Detects an existing Camunda 8 installation. -- Deletes the `jars` folder to clear outdated dependencies. -- Overwrites the remaining files with the updated version. -- Regenerates configuration files. -- Restarts the application to apply the updates. - -### Monitoring - -Our default way of exposing metrics is in the Prometheus format; consult the general [metrics-related documentation](/self-managed/zeebe-deployment/operations/metrics.md) to learn more about how to scrape Camunda 8. - -In an AWS environment, you can leverage CloudWatch not only for log collection but also for gathering [Prometheus metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-metrics.html). It's important to note that while Camunda natively supports Grafana and Prometheus, integrating CloudWatch for metric visualization is possible but requires additional configuration. - -### Backups - -For general guidance on backups, consult the [documentation](/self-managed/operational-guides/backup-restore/backup-and-restore.md). - -With AWS as the chosen platform, you can utilize [S3](https://aws.amazon.com/s3/) for the backups of both Zeebe and Elasticsearch. - -If you are using a managed OpenSearch domain instead, you should check out the [official documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/managedomains-snapshots.html) on creating backups and snapshots in OpenSearch. - -## Troubleshooting - -For general troubleshooting guidance, consult the [documentation](/self-managed/operational-guides/troubleshooting/troubleshooting.md). - - - - diff --git a/versioned_sidebars/version-8.7-sidebars.json b/versioned_sidebars/version-8.7-sidebars.json index 2238370b719..93112eae3a2 100644 --- a/versioned_sidebars/version-8.7-sidebars.json +++ b/versioned_sidebars/version-8.7-sidebars.json @@ -1930,8 +1930,7 @@ "id": "self-managed/setup/deploy/amazon/aws-marketplace" }, "items": [] - }, - "self-managed/setup/deploy/amazon/aws-ec2" + } ], "Microsoft (Azure)": [ "self-managed/setup/deploy/azure/microsoft-aks" @@ -1973,8 +1972,7 @@ }, { "Reference architecture": [ - "self-managed/reference-architecture/reference-architecture", - "self-managed/reference-architecture/manual/manual" + "self-managed/reference-architecture/reference-architecture" ] }, {