diff --git a/404.md b/404.md new file mode 100644 index 000000000..5fe1bb5c1 --- /dev/null +++ b/404.md @@ -0,0 +1,5 @@ +--- +layout: not-found +--- + +# Page not found diff --git a/deploy-manage/api-keys.md b/deploy-manage/api-keys.md index 68bd64539..8ec5242ff 100644 --- a/deploy-manage/api-keys.md +++ b/deploy-manage/api-keys.md @@ -1,6 +1,5 @@ --- applies_to: - stack: ga deployment: eck: ga ess: ga @@ -9,14 +8,7 @@ applies_to: serverless: ga --- -# Manage API keys - -% What needs to be done: Write from scratch - -% GitHub issue: https://github.com/elastic/docs-projects/issues/349 - -% Scope notes: Elasticsearch & Kibana authentication API Keys - +# API keys API keys are security mechanisms used to authenticate and authorize access to your deployments and {{es}} resources. diff --git a/deploy-manage/api-keys/serverless-project-api-keys.md b/deploy-manage/api-keys/serverless-project-api-keys.md index 1808fecf2..010932367 100644 --- a/deploy-manage/api-keys/serverless-project-api-keys.md +++ b/deploy-manage/api-keys/serverless-project-api-keys.md @@ -16,7 +16,7 @@ You can manage your keys in **{{project-settings}} → {{manage-app}} → {{api- :::{image} ../../images/serverless-api-key-management.png :alt: API keys UI -:class: screenshot +:screenshot: ::: @@ -26,7 +26,7 @@ In **{{api-keys-app}}**, click **Create API key**: :::{image} ../../images/serverless-create-personal-api-key.png :alt: Create API key UI -:class: screenshot +:screenshot: :width: 50% ::: diff --git a/deploy-manage/autoscaling.md b/deploy-manage/autoscaling.md index 090269c45..755f38f96 100644 --- a/deploy-manage/autoscaling.md +++ b/deploy-manage/autoscaling.md @@ -1,70 +1,54 @@ --- mapped_urls: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling.html - - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling.html - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-autoscaling.html +applies_to: + deployment: + ece: ga + ess: ga + eck: ga + serverless: all --- # Autoscaling -% What needs to be done: Refine +The autoscaling feature adjusts resources based on demand. A deployment can use autoscaling to scale resources as needed, ensuring sufficient capacity to meet workload requirements. In {{ece}}, {{eck}}, and {{ech}} deployments, autoscaling follows predefined policies, while in {{serverless-short}}, it is fully managed and automatic. -% GitHub issue: https://github.com/elastic/docs-projects/issues/344 +:::{{tip}} - Serverless handles autoscaling for you +By default, {{serverless-full}} automatically scales your {{es}} resources based on your usage. You don't need to enable autoscaling. +::: -% Scope notes: Creating a new landing page and subheadings/pages for different deployment types. Merge content when appropriate +## Cluster autoscaling -% Use migrated content from existing pages that map to this page: +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. 
+:::: -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md -% Notes: 1 child -% - [ ] ./raw-migrated-files/cloud/cloud/ec-autoscaling.md -% Notes: 2 children -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md -% Notes: 2 children -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md +Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An Elasticsearch cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. Elasticsearch monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires. -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +A policy is composed of a list of roles and a list of deciders. The policy governs the nodes matching the roles. The deciders provide independent estimates of the capacity required. See [Autoscaling deciders](../deploy-manage/autoscaling/autoscaling-deciders.md) for details on available deciders. -$$$ec-autoscaling-intro$$$ +Cluster autoscaling supports: +* Scaling machine learning nodes up and down. +* Scaling data nodes up based on storage. -$$$ec-autoscaling-factors$$$ +## Trained model autoscaling -$$$ec-autoscaling-notifications$$$ +:::{admonition} Trained model auto-scaling for self-managed deployments +The available resources of self-managed deployments are static, so trained model autoscaling is not applicable. However, available resources are still segmented based on the settings described in this section. +::: -$$$ec-autoscaling-restrictions$$$ +Trained model autoscaling automatically adjusts the resources allocated to trained model deployments based on demand. This feature is available on all cloud deployments (ECE, ECK, ECH) and {{serverless-short}}. See [Trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md) for details. -$$$ec-autoscaling-enable$$$ +Trained model autoscaling supports: +* Scaling trained model deployments -$$$ec-autoscaling-update$$$ +::::{note} +Autoscaling is not supported on Debian 8. 
+:::: -$$$ece-autoscaling-intro$$$ +Find instructions on setting up and managing autoscaling, including supported environments, configuration options, and examples: -$$$ece-autoscaling-factors$$$ - -$$$ece-autoscaling-notifications$$$ - -$$$ece-autoscaling-restrictions$$$ - -$$$ece-autoscaling-enable$$$ - -$$$ece-autoscaling-update$$$ - -$$$ech-autoscaling-intro$$$ - -$$$ech-autoscaling-factors$$$ - -$$$ech-autoscaling-notifications$$$ - -$$$ech-autoscaling-restrictions$$$ - -$$$ech-autoscaling-enable$$$ - -$$$ech-autoscaling-update$$$ - -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md](/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md) -* [/raw-migrated-files/cloud/cloud/ec-autoscaling.md](/raw-migrated-files/cloud/cloud/ec-autoscaling.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md](/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/xpack-autoscaling.md) \ No newline at end of file +* [Autoscaling in {{ece}} and {{ech}}](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md) +* [Autoscaling in {{eck}}](/deploy-manage/autoscaling/autoscaling-in-eck.md) +* [Autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md) +* [Trained model autoscaling](/deploy-manage/autoscaling/trained-model-autoscaling.md) diff --git a/deploy-manage/autoscaling/autoscaling-deciders.md b/deploy-manage/autoscaling/autoscaling-deciders.md index 1289aa165..90f94fd94 100644 --- a/deploy-manage/autoscaling/autoscaling-deciders.md +++ b/deploy-manage/autoscaling/autoscaling-deciders.md @@ -8,36 +8,216 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-frozen-existence-decider.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-machine-learning-decider.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-fixed-decider.html +applies_to: + ece: + eck: + ess: --- -# Autoscaling deciders +# Autoscaling deciders [autoscaling-deciders] -% What needs to be done: Refine +[Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention. -% GitHub issue: https://github.com/elastic/docs-projects/issues/344 +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. +:::: -% Scope notes: Collapse to a single page, explain what deciders are +[Reactive storage decider](#autoscaling-reactive-storage-decider) +: Estimates required storage capacity of current data set. Available for policies governing data nodes. -% Use migrated content from existing pages that map to this page: +[Proactive storage decider](#autoscaling-proactive-storage-decider) +: Estimates required storage capacity based on current ingestion into hot nodes. Available for policies governing hot data nodes. 
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md +[Frozen shards decider](#autoscaling-frozen-shards-decider) +: Estimates required memory capacity based on the number of partially mounted shards. Available for policies governing frozen data nodes. -⚠️ **This page is a work in progress.** ⚠️ +[Frozen storage decider](#autoscaling-frozen-storage-decider) +: Estimates required storage capacity as a percentage of the total data set of partially mounted indices. Available for policies governing frozen data nodes. -The documentation team is working to combine content pulled from the following pages: +[Frozen existence decider](#autoscaling-frozen-existence-decider) +: Estimates a minimum required frozen memory and storage capacity when any index is in the frozen [ILM](../../manage-data/lifecycle/index-lifecycle-management.md) phase. -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-deciders.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-reactive-storage-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-proactive-storage-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-shards-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-storage-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-frozen-existence-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-machine-learning-decider.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/autoscaling-fixed-decider.md) \ No newline at end of file +[Machine learning decider](#autoscaling-machine-learning-decider) +: Estimates required memory capacity based on machine learning jobs. Available for policies governing machine learning nodes. + +[Fixed decider](#autoscaling-fixed-decider) +: Responds with a fixed required capacity. This decider is intended for testing only.
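Each decider independently reports the capacity it estimates its policy needs. In a self-managed cluster, or when experimenting, you can inspect these estimates through the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling). The following minimal request returns, for each policy, the current capacity, the required capacity, and a per-decider breakdown of how that requirement was calculated:

```console
GET /_autoscaling/capacity
```

Reviewing the per-decider breakdown in the response is a practical way to understand why a policy is, or is not, requesting additional capacity before tuning the settings described in the following sections.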
+ +## Reactive storage decider [autoscaling-reactive-storage-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) reactive storage decider (`reactive_storage`) calculates the storage required to contain the current data set. It signals that additional storage capacity is necessary when existing capacity has been exceeded (reactively). + +The reactive storage decider is enabled for all policies governing data nodes and has no configuration options. + +The decider relies partially on using [data tier preference](../../manage-data/lifecycle/data-tiers.md#data-tier-allocation) allocation rather than node attributes. In particular, scaling a data tier into existence (starting the first node in a tier) will result in starting a node in any data tier that is empty if not using allocation based on data tier preference. Using the [ILM migrate](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md) action to migrate between tiers is the preferred way of allocating to tiers and fully supports scaling a tier into existence. + +## Proactive storage decider [autoscaling-proactive-storage-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) proactive storage decider (`proactive_storage`) calculates the storage required to contain the current data set plus an estimated amount of expected additional data. + +The proactive storage decider is enabled for all policies governing nodes with the `data_hot` role. + +The estimation of expected additional data is based on past indexing that occurred within the `forecast_window`. Only indexing into data streams contributes to the estimate. + +### Configuration settings [autoscaling-proactive-storage-decider-settings] + +`forecast_window` +: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) The window of time to use for forecasting. Defaults to 30 minutes. + + +### {{api-examples-title}} [autoscaling-proactive-storage-decider-examples] + +This example puts an autoscaling policy named `my_autoscaling_policy`, overriding the proactive decider’s `forecast_window` to be 10 minutes. + +```console +PUT /_autoscaling/policy/my_autoscaling_policy +{ + "roles" : [ "data_hot" ], + "deciders": { + "proactive_storage": { + "forecast_window": "10m" + } + } +} +``` + +The API returns the following result: + +```console-result +{ + "acknowledged": true +} +``` + +## Frozen shards decider [autoscaling-frozen-shards-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) frozen shards decider (`frozen_shards`) calculates the memory required to search the current set of partially mounted indices in the frozen tier. Based on a required memory amount per shard, it calculates the necessary memory in the frozen tier. + +### Configuration settings [autoscaling-frozen-shards-decider-settings] + +`memory_per_shard` +: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB. 
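As an illustration only, the following sketch applies the same policy pattern used for the other deciders on this page to a policy governing frozen data nodes. The policy name `frozen_tier_policy` and the `64mb` value are arbitrary placeholders, not defaults or recommendations:

```console
PUT /_autoscaling/policy/frozen_tier_policy
{
  "roles": [ "data_frozen" ],
  "deciders": {
    "frozen_shards": {
      "memory_per_shard": "64mb"
    }
  }
}
```

As with the other policy examples on this page, a successful request returns `{"acknowledged": true}`.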
+ +## Frozen storage decider [autoscaling-frozen-storage-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) frozen storage decider (`frozen_storage`) calculates the local storage required to search the current set of partially mounted indices based on a percentage of the total data set size of such indices. It signals that additional storage capacity is necessary when existing capacity is less than the percentage multiplied by total data set size. + +The frozen storage decider is enabled for all policies governing frozen data nodes and has no configuration options. + +### Configuration settings [autoscaling-frozen-storage-decider-settings] + +`percentage` +: (Optional, number value) Percentage of local storage relative to the data set size. Defaults to 5. + +## Frozen existence decider [autoscaling-frozen-existence-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) frozen existence decider (`frozen_existence`) ensures that once the first index enters the frozen ILM phase, the frozen tier is scaled into existence. + +The frozen existence decider is enabled for all policies governing frozen data nodes and has no configuration options. + +## Machine learning decider [autoscaling-machine-learning-decider] + +The [autoscaling](../../deploy-manage/autoscaling.md) {{ml}} decider (`ml`) calculates the memory and CPU requirements to run {{ml}} jobs and trained models. + +The {{ml}} decider is enabled for policies governing `ml` nodes. + +::::{note} +For {{ml}} jobs to open when the cluster is not appropriately scaled, set `xpack.ml.max_lazy_ml_nodes` to the largest number of possible {{ml}} nodes (refer to [Advanced machine learning settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/machine-learning-settings.md#advanced-ml-settings) for more information). In {{ess}}, this is automatically set. +:::: + + +### Configuration settings [autoscaling-machine-learning-decider-settings] + +Both `num_anomaly_jobs_in_queue` and `num_analytics_jobs_in_queue` are designed to delay a scale-up event. If the cluster is too small, these settings indicate how many jobs of each type can be unassigned from a node. Both settings are only considered for jobs that can be opened given the current scale. If a job is too large for any node size or if a job can’t be assigned without user intervention (for example, a user calling `_stop` against a real-time {{anomaly-job}}), the numbers are ignored for that particular job. + +`num_anomaly_jobs_in_queue` +: (Optional, integer) Specifies the number of queued {{anomaly-jobs}} to allow. Defaults to `0`. + +`num_analytics_jobs_in_queue` +: (Optional, integer) Specifies the number of queued {{dfanalytics-jobs}} to allow. Defaults to `0`. + +`down_scale_delay` +: (Optional, [time value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#time-units)) Specifies the time to delay before scaling down. Defaults to 1 hour. If a scale down is possible for the entire time window, then a scale down is requested. If the cluster requires a scale up during the window, the window is reset. + + +### {{api-examples-title}} [autoscaling-machine-learning-decider-examples] + +This example creates an autoscaling policy named `my_autoscaling_policy` that overrides the default configuration of the {{ml}} decider. 
+ +```console +PUT /_autoscaling/policy/my_autoscaling_policy +{ + "roles" : [ "ml" ], + "deciders": { + "ml": { + "num_anomaly_jobs_in_queue": 5, + "num_analytics_jobs_in_queue": 3, + "down_scale_delay": "30m" + } + } +} +``` + +The API returns the following result: + +```console-result +{ + "acknowledged": true +} +``` + +## Fixed decider [autoscaling-fixed-decider] + +::::{warning} +This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. +:::: + + +::::{warning} +The fixed decider is intended for testing only. Do not use this decider in production. +:::: + + +The [autoscaling](../../deploy-manage/autoscaling.md) `fixed` decider responds with a fixed required capacity. It is not enabled by default but can be enabled for any policy by explicitly configuring it. + +### Configuration settings [_configuration_settings] + +`storage` +: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Required amount of node-level storage. Defaults to `-1` (disabled). + +`memory` +: (Optional, [byte value](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) Required amount of node-level memory. Defaults to `-1` (disabled). + +`processors` +: (Optional, float) Required number of processors. Defaults to disabled. + +`nodes` +: (Optional, integer) Number of nodes to use when calculating capacity. Defaults to `1`. + + +### {{api-examples-title}} [autoscaling-fixed-decider-examples] + +This example puts an autoscaling policy named `my_autoscaling_policy`, enabling and configuring the fixed decider. + +```console +PUT /_autoscaling/policy/my_autoscaling_policy +{ + "roles" : [ "data_hot" ], + "deciders": { + "fixed": { + "storage": "1tb", + "memory": "32gb", + "processors": 2.3, + "nodes": 8 + } + } +} +``` + +The API returns the following result: + +```console-result +{ + "acknowledged": true +} +``` \ No newline at end of file diff --git a/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md new file mode 100644 index 000000000..ce5621a8d --- /dev/null +++ b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md @@ -0,0 +1,660 @@ +--- +mapped_urls: + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling.html + - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling.html + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling.html + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling-example.html + - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling-example.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling-example.html + - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling-api-example.html + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling-api-example.html +applies_to: + deployment: + ece: ga + ess: ga +navigation_title: "In ECE and ECH" +--- + +# Autoscaling in {{ece}} and {{ech}} + +## Overview [ec-autoscaling-intro] +When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. 
It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. + +To learn more about configuring and managing autoscaling, check the following sections: + +* [Overview](#ec-autoscaling-intro) +* [When does autoscaling occur?](#ec-autoscaling-factors) +* [Notifications](#ec-autoscaling-notifications) +* [Restrictions and limitations](#ec-autoscaling-restrictions) +* [Enable or disable autoscaling](#ec-autoscaling-enable) +* [Update your autoscaling settings](#ec-autoscaling-update) + +You can also have a look at our [autoscaling example](#ec-autoscaling-example), as well as a sample request to [create an autoscaled deployment through the API](#ec-autoscaling-api-example). + +::::{note} +Autoscaling is enabled for the Machine Learning tier by default for new deployments. +:::: + +Currently, autoscaling behavior is as follows: + +* **Data tiers** + + * Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. + * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html). + * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. + * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. + +* **Machine learning nodes** + + * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. + * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. + * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. + * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. + * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. + +::::{note} +For any Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. 
+:::: + +## When does autoscaling occur?[ec-autoscaling-factors] + +Several factors determine when data tiers or machine learning nodes are scaled. + +For a data tier, an autoscaling event can be triggered in the following cases: + +* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. + +* When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](autoscaling-deciders.md) and [Proactive storage decider](autoscaling-deciders.md) for more detail. + +* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. + +On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md)for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](../../explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job) + +On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. + +## Notifications[ec-autoscaling-notifications] +In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance. + +In {{ece}} deployments, a warning is also issued in the ECE `service-constructor` logs with the field `labels.autoscaling_notification_type` and a value of `data-tier-at-limit` (for a fully scaled data tier) or `ml-tier-at-limit` (for a fully scaled machine learning node). The warning is indexed in the `logging-and-metrics` deployment, so you can use that event to [configure an email notification](../../explore-analyze/alerts-cases/watcher.md). + +## Restrictions and limitations[ec-autoscaling-restrictions] + +The following are known limitations and restrictions with autoscaling: + +* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. + +In {{ech}} the following additional limitations apply: + +* Trial deployments cannot be configured to autoscale beyond the normal Trial deployment size limits. The maximum size per zone is increased automatically from the Trial limit when you convert to a paid subscription. +* ELSER deployments do not scale automatically. For more information, refer to [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) and [Trained model autoscaling](../autoscaling/trained-model-autoscaling.md). 
+ +In {{ece}}, the following additional limitations apply: + +* In the event that an override is set for the instance size or disk quota multiplier for an instance by means of the [Instance Overrides API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-all-instances-settings-overrides), autoscaling will be effectively disabled. It’s recommended to avoid adjusting the instance size or disk quota multiplier for an instance that uses autoscaling, since the setting prevents autoscaling. + +## Enable or disable autoscaling[ec-autoscaling-enable] + +To enable or disable autoscaling on a deployment: + +1. Log in to the ECE [Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md) or [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). + +2. On the **Deployments** page, select your deployment. + + Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + + +3. In your deployment menu, select **Edit**. +4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. +5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. + +When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](#ec-autoscaling-update). Current sizes are shown on the deployment overview page. + +When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. + +## Update your autoscaling settings[ec-autoscaling-update] + +Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: + +1. Log in to the ECE [Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md) or [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). + +2. On the **Deployments** page, select your deployment. + + Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. In your deployment menu, select **Edit**. +4. To update a data tier: + + 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. + 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. + 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. + 4. Select **Save** to apply the changes to your deployment. + +5. To update machine learning nodes: + + 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. + 2. Select **Save** to apply the changes to your deployment. 
+ +% ECE NOTE +::::{note} - {{ece}} +On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings. +:::: + +## Autoscaling example [ec-autoscaling-example] + +To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elastic Cloud Enterprise or {{ech}} deployment. + +1. Enable autoscaling: + + * On an **existing deployment**, open the deployment **Edit** page to find the option to turn on autoscaling. + * When you create a new deployment, you can find the autoscaling option under **Advanced settings**. + + Once you confirm your changes or create a new deployment, autoscaling is activated with system default settings that you can adjust as needed (though for most use cases the default settings will likely suffice). + +2. View and adjust autoscaling settings on data tiers: + + 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings: + + | | | | + | --- | --- | --- | + | **Current size per zone** | **Maximum size per zone** | | + | 45GB storage | 1.41TB storage | | + | 1GB RAM | 32GB RAM | | + | Up to 2.5 vCPU | 5 vCPU | | + + The fault tolerance for the data tier is set to 2 availability zones. + + :::{image} ../../images/cloud-enterprise-ec-ce-autoscaling-data-summary2.png + :alt: A screenshot showing sizing information for the autoscaled data tier + ::: + + 2. Use the dropdown boxes to adjust the current and/or the maximum size of the data tier. Capacity will be added to the hot content and data tier when required, based on its past and present storage usage, until it reaches the maximum size per zone. Any scaling events are applied simultaneously across availability zones. In this example, the tier has plenty of room to scale relative to its current size, and it will not scale above the maximum size setting. There is no minimum size setting since downward scaling is currently not supported on data tiers. + +3. View and adjust autoscaling settings on a machine learning instance: + + 1. From the deployment **Edit** page you can check the minimum and maximum size of your deployment’s machine learning instances. In this example, the machine learning instance has the following settings: + + | | | | + | --- | --- | --- | + | **Minimum size per zone** | **Maximum size per zone** | | + | 1GB RAM | 64GB RAM | | + | 0.5 vCPU up to 8 vCPU | 32 vCPU | | + + The fault tolerance for the machine learning instance is set to 1 availability zone. + + :::{image} ../../images/cloud-enterprise-ec-ce-autoscaling-ml-summary2.png + :alt: A screenshot showing sizing information for the autoscaled machine learning node + ::: + + 2. Use the dropdown boxes to adjust the minimum and/or the maximum size of the data tier. Capacity will be added to or removed from the machine learning instances as needed. The need for a scaling event is determined by the expected memory and vCPU requirements for the currently configured machine learning job. Any scaling events are applied simultaneously across availability zones. Note that unlike data tiers, machine learning nodes do not have a **Current size per zone** setting. That setting is not needed since machine learning nodes support both upward and downward scaling. + +4. Over time, the volume of data and the size of any machine learning jobs in your deployment are likely to grow. 
Let’s assume that to meet storage requirements your hot data tier has scaled up to its maximum allowed size of 64GB RAM and 32 vCPU. At this point, a notification appears on the deployment overview page indicating that the tier has scaled to capacity. +5. If you expect a continued increase in either storage, memory, or vCPU requirements, you can use the **Maximum size per zone** dropdown box to adjust the maximum capacity settings for your data tiers and machine learning instances, as appropriate. And, you can always re-adjust these levels downward if the requirements change. + +As you can see, autoscaling greatly reduces the manual work involved to manage a deployment. The deployment capacity adjusts automatically as demands change, within the boundaries that you define. Check our main [Deployment autoscaling](../autoscaling.md) page for more information. + +## Autoscaling through the API [ec-autoscaling-api-example] + +This example demonstrates how to use the {{ecloud}} RESTful API to create a deployment with autoscaling enabled. + +The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](#ec-autoscaling-example). + +To learn more about the {{ece}} API, see the [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/restful-api.md) documentation. For details on the {{ech}} API, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md). + +### Requirements [ec_requirements] + +Note the following requirements when you run this API request: + +* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example. +* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties: + + * `id` + * `node_attributes` + * `node_roles` + +* The `size`, `autoscaling_min`, and `autoscaling_max` properties must be specified according to the following rules. This is because: + + * On data tiers only upward scaling is currently supported. + * On machine learning nodes both upward and downward scaling is supported. + * On all other components autoscaling is not currently supported. +* On {{ece}}, autoscaling is supported for custom deployment templates on version 2.12 and above. To learn more, refer to [Updating custom templates to support `node_roles` and autoscaling](../deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md). + +$$$ece-autoscaling-api-example-requirements-table$$$ + +| | | | | +| --- | --- | --- | --- | +| | `size` | `autoscaling_min` | `autoscaling_max` | +| data tier | ✓ | ✕ | ✓ | +| machine learning node | ✕ | ✓ | ✓ | +| coordinating and master nodes | ✓ | ✕ | ✕ | +| Kibana | ✓ | ✕ | ✕ | +| APM | ✓ | ✕ | ✕ | + +* ✓ = Include the property. +* ✕ = Do not include the property. + +* These rules match the behavior of the {{ech}} and {{ece}} user console. + +* The `elasticsearch` object must contain the property `"autoscaling_enabled": true`. + +### API request example [ec_api_request_example] + +::::{note} +Although autoscaling can scale some tiers by CPU, the primary measurement of tier size is memory. Limits on tier size are in terms of memory. 
+:::: + +Run this example API request to create a deployment with autoscaling: + +::::{tab-set} + +:::{tab-item} {{ece}} + +```sh +curl -k -X POST -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments -H 'content-type: application/json' -d ' +{ + "name": "my-first-autoscaling-deployment", + "resources": { + "elasticsearch": [ + { + "ref_id": "main-elasticsearch", + "region": "ece-region", + "plan": { + "autoscaling_enabled": true, + "cluster_topology": [ + { + "id": "hot_content", + "node_roles": [ + "master", + "ingest", + "remote_cluster_client", + "data_hot", + "transform", + "data_content" + ], + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "hot" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "data.default", + "size": { + "value": 4096, + "resource": "memory" + }, + "autoscaling_max": { + "value": 2097152, + "resource": "memory" + } + }, + { + "id": "warm", + "node_roles": [ + "data_warm", + "remote_cluster_client" + ], + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "warm" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "data.highstorage", + "size": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 2097152, + "resource": "memory" + } + }, + { + "id": "cold", + "node_roles": [ + "data_cold", + "remote_cluster_client" + ], + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "cold" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "data.highstorage", + "size": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 2097152, + "resource": "memory" + } + }, + { + "id": "coordinating", + "node_roles": [ + "ingest", + "remote_cluster_client" + ], + "zone_count": 1, + "instance_configuration_id": "coordinating", + "size": { + "value": 0, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + }, + { + "id": "master", + "node_roles": [ + "master" + ], + "zone_count": 1, + "instance_configuration_id": "master", + "size": { + "value": 0, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + }, + { + "id": "ml", + "node_roles": [ + "ml", + "remote_cluster_client" + ], + "zone_count": 1, + "instance_configuration_id": "ml", + "autoscaling_min": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 2097152, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + } + ], + "elasticsearch": { + "version": "8.13.1" + }, + "deployment_template": { + "id": "default" + } + }, + "settings": { + "dedicated_masters_threshold": 6 + } + } + ], + "kibana": [ + { + "ref_id": "main-kibana", + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "ece-region", + "plan": { + "zone_count": 1, + "cluster_topology": [ + { + "instance_configuration_id": "kibana", + "size": { + "value": 1024, + "resource": "memory" + }, + "zone_count": 1 + } + ], + "kibana": { + "version": "8.13.1" + } + } + } + ], + "apm": [ + { + "ref_id": "main-apm", + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "ece-region", + "plan": { + "cluster_topology": [ + { + "instance_configuration_id": "apm", + "size": { + "value": 512, + "resource": "memory" + }, + "zone_count": 1 + } + ], + "apm": { + "version": "8.13.1" + } + } + } + ], + "enterprise_search": [] + } +} +' +``` + +::: + +:::{tab-item} {{ech}} + +```sh +curl -XPOST \ +-H 'Content-Type: application/json' \ 
+-H "Authorization: ApiKey $EC_API_KEY" \ +"https://api.elastic-cloud.com/api/v1/deployments" \ +-d ' +{ + "name": "my-first-autoscaling-deployment", + "resources": { + "elasticsearch": [ + { + "ref_id": "main-elasticsearch", + "region": "us-east-1", + "plan": { + "autoscaling_enabled": true, + "cluster_topology": [ + { + "id": "hot_content", + "node_roles": [ + "remote_cluster_client", + "data_hot", + "transform", + "data_content", + "master", + "ingest" + ], + "zone_count": 2, + "elasticsearch": { + "node_attributes": { + "data": "hot" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "aws.data.highio.i3", + "size": { + "resource": "memory", + "value": 8192 + }, + "autoscaling_max": { + "value": 118784, + "resource": "memory" + } + }, + { + "id": "warm", + "node_roles": [ + "data_warm", + "remote_cluster_client" + ], + "zone_count": 2, + "elasticsearch": { + "node_attributes": { + "data": "warm" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "aws.data.highstorage.d3", + "size": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 118784, + "resource": "memory" + } + }, + { + "id": "cold", + "node_roles": [ + "data_cold", + "remote_cluster_client" + ], + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "cold" + }, + "enabled_built_in_plugins": [] + }, + "instance_configuration_id": "aws.data.highstorage.d3", + "size": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 59392, + "resource": "memory" + } + }, + { + "id": "coordinating", + "zone_count": 2, + "node_roles": [ + "ingest", + "remote_cluster_client" + ], + "instance_configuration_id": "aws.coordinating.m5d", + "size": { + "value": 0, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + }, + { + "id": "master", + "node_roles": [ + "master" + ], + "zone_count": 3, + "instance_configuration_id": "aws.master.r5d", + "size": { + "value": 0, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + }, + { + "id": "ml", + "node_roles": [ + "ml", + "remote_cluster_client" + ], + "zone_count": 1, + "instance_configuration_id": "aws.ml.m5d", + "autoscaling_min": { + "value": 0, + "resource": "memory" + }, + "autoscaling_max": { + "value": 61440, + "resource": "memory" + }, + "elasticsearch": { + "enabled_built_in_plugins": [] + } + } + ], + "elasticsearch": { + "version": "7.11.0" + }, + "deployment_template": { + "id": "aws-io-optimized-v2" + } + }, + "settings": { + "dedicated_masters_threshold": 6 + } + } + ], + "kibana": [ + { + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "us-east-1", + "plan": { + "cluster_topology": [ + { + "instance_configuration_id": "aws.kibana.r5d", + "zone_count": 1, + "size": { + "resource": "memory", + "value": 1024 + } + } + ], + "kibana": { + "version": "7.11.0" + } + }, + "ref_id": "main-kibana" + } + ], + "apm": [ + { + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "us-east-1", + "plan": { + "cluster_topology": [ + { + "instance_configuration_id": "aws.apm.r5d", + "zone_count": 1, + "size": { + "resource": "memory", + "value": 512 + } + } + ], + "apm": { + "version": "7.11.0" + } + }, + "ref_id": "main-apm" + } + ], + "enterprise_search": [] + } +} +' +``` + +::: + +:::: \ No newline at end of file diff --git a/deploy-manage/autoscaling/deployments-autoscaling-on-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md similarity index 79% rename from 
deploy-manage/autoscaling/deployments-autoscaling-on-eck.md rename to deploy-manage/autoscaling/autoscaling-in-eck.md index 421d109b7..cd27a422a 100644 --- a/deploy-manage/autoscaling/deployments-autoscaling-on-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -1,9 +1,17 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-autoscaling.html + - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stateless-autoscaling.html +applies_to: + deployment: + eck: ga +navigation_title: "In ECK" --- +# Autoscaling in {{eck}} -# Deployments autoscaling on ECK [k8s-autoscaling] +Configure autoscaling for Elasticsearch deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like Kibana, APM Server, and Elastic Maps Server. + +## Deployments autoscaling on ECK [k8s-autoscaling] ::::{note} Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses. @@ -13,12 +21,12 @@ Elasticsearch autoscaling requires a valid Enterprise license or Enterprise tria ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. -## Enable autoscaling [k8s-enable] +### Enable autoscaling [k8s-enable] To enable autoscaling on an Elasticsearch cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the Elasticsearch configuration. -### Define autoscaling policies [k8s-autoscaling-policies] +#### Define autoscaling policies [k8s-autoscaling-policies] Autoscaling policies can be defined in an `ElasticsearchAutoscaler` resource. Each autoscaling policy must have the following fields: @@ -90,7 +98,7 @@ In the case of storage the following restrictions apply: * Scaling up (vertically) is only supported if the available capacity in a PersistentVolume matches the capacity claimed in the PersistentVolumeClaim. Refer to the next section for more information. -### Scale Up and Scale Out [k8s-autoscaling-algorithm] +#### Scale Up and Scale Out [k8s-autoscaling-algorithm] In order to adapt the resources to the workload, the operator first attempts to scale up the resources (cpu, memory, and storage) allocated to each node in the NodeSets. The operator always ensures that the requested resources are within the limits specified in the autoscaling policy. If each individual node has reached the limits specified in the autoscaling policy, but more resources are required to handle the load, then the operator adds some nodes to the NodeSets. Nodes are added up to the `max` value specified in the `nodeCount` of the policy. @@ -126,7 +134,7 @@ spec: ``` -### Set the limits [k8s-autoscaling-resources] +#### Set the limits [k8s-autoscaling-resources] The value set for memory and CPU limits are computed by applying a ratio to the calculated resource request. The default ratio between the request and the limit for both CPU and memory is 1. This means that request and limit have the same value. 
You can change the default ratio between the request and the limit for both the CPU and memory ranges by using the `requestsToLimitsRatio` field. @@ -162,7 +170,7 @@ spec: You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/2.16/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md). -### Change the polling interval [k8s-autoscaling-polling-interval] +#### Change the polling interval [k8s-autoscaling-polling-interval] The Elasticsearch autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification: @@ -194,10 +202,10 @@ spec: ``` -## Monitoring [k8s-monitoring] +### Monitoring [k8s-monitoring] -### Autoscaling status [k8s-autoscaling-status] +#### Autoscaling status [k8s-autoscaling-status] In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the Elasticsearch cluster, or if a resource limit has been reached: @@ -234,7 +242,7 @@ kubectl get elasticsearchautoscaler autoscaling-sample \ ``` -### Expected resources [k8s-autoscaling-expected-resources] +#### Expected resources [k8s-autoscaling-expected-resources] The autoscaler status also contains a `policies` section which describes the expected resources for each NodeSet managed by an autoscaling policy. @@ -270,7 +278,7 @@ kubectl get elasticsearchautoscaler.autoscaling.k8s.elastic.co/autoscaling-sampl ``` -### Events [k8s-events] +#### Events [k8s-events] Important events are also reported through Kubernetes events, for example when the maximum autoscaling size limit is reached: @@ -281,7 +289,7 @@ Important events are also reported through Kubernetes events, for example when t ``` -## Disable autoscaling [k8s-disable] +### Disable autoscaling [k8s-disable] You can disable autoscaling at any time by deleting the `ElasticsearchAutoscaler` resource. For machine learning the following settings are not automatically reset: @@ -291,3 +299,50 @@ You can disable autoscaling at any time by deleting the `ElasticsearchAutoscaler You should adjust those settings manually to match the size of your deployment when you disable autoscaling. +## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling] + +::::{note} +This section only applies to stateless applications. Check [Elasticsearch autoscaling](#k8s-autoscaling) for more details about scaling automatically Elasticsearch. +:::: + + +The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale) can be used to automatically scale the deployments of the following resources: + +* Kibana +* APM Server +* Elastic Maps Server + +These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. 
This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a Kibana deployment according to the CPU load: + +```yaml +apiVersion: elasticsearch.k8s.elastic.co/v1 +kind: Elasticsearch +metadata: + name: elasticsearch-sample +spec: + version: 8.16.1 + nodeSets: + - name: default + count: 1 + config: + node.store.allow_mmap: false + +apiVersion: autoscaling/v2beta2 +kind: HorizontalPodAutoscaler +metadata: + name: kb +spec: + scaleTargetRef: + apiVersion: kibana.k8s.elastic.co/v1 + kind: Kibana + name: kibana-sample + minReplicas: 1 + maxReplicas: 4 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 +``` \ No newline at end of file diff --git a/deploy-manage/autoscaling/autoscaling-stateless-applications-on-eck.md b/deploy-manage/autoscaling/autoscaling-stateless-applications-on-eck.md deleted file mode 100644 index 1a618acb0..000000000 --- a/deploy-manage/autoscaling/autoscaling-stateless-applications-on-eck.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stateless-autoscaling.html ---- - -# Autoscaling stateless applications on ECK [k8s-stateless-autoscaling] - -::::{note} -This section only applies to stateless applications. Check [Elasticsearch autoscaling](deployments-autoscaling-on-eck.md) for more details about scaling automatically Elasticsearch. -:::: - - -The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale) can be used to automatically scale the deployments of the following resources: - -* Kibana -* APM Server -* Elastic Maps Server - -These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a Kibana deployment according to the CPU load: - -```yaml -apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: Elasticsearch -metadata: - name: elasticsearch-sample -spec: - version: 8.16.1 - nodeSets: - - name: default - count: 1 - config: - node.store.allow_mmap: false - -apiVersion: autoscaling/v2beta2 -kind: HorizontalPodAutoscaler -metadata: - name: kb -spec: - scaleTargetRef: - apiVersion: kibana.k8s.elastic.co/v1 - kind: Kibana - name: kibana-sample - minReplicas: 1 - maxReplicas: 4 - metrics: - - type: Resource - resource: - name: cpu - target: - type: Utilization - averageUtilization: 50 -``` - diff --git a/deploy-manage/autoscaling/ec-autoscaling-api-example.md b/deploy-manage/autoscaling/ec-autoscaling-api-example.md deleted file mode 100644 index 92e391558..000000000 --- a/deploy-manage/autoscaling/ec-autoscaling-api-example.md +++ /dev/null @@ -1,266 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling-api-example.html ---- - -# Autoscaling through the API [ec-autoscaling-api-example] - -This example demonstrates how to use the {{ecloud}} RESTful API to create a deployment with autoscaling enabled. - -The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ec-autoscaling-example.md). 
For more information about using the {{ecloud}} API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md). - - -## Requirements [ec_requirements] - -Note the following requirements when you run this API request: - -* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example. -* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties: - - * `id` - * `node_attributes` - * `node_roles` - -* The `size`, `autoscaling_min`, and `autoscaling_max` properties must be specified according to the following rules. This is because: - - * On data tiers only upward scaling is currently supported. - * On machine learning nodes both upward and downward scaling is supported. - * On all other components autoscaling is not currently supported. - - -$$$ec-autoscaling-api-example-requirements-table$$$ -+ - -| | | | | -| --- | --- | --- | --- | -| | `size` | `autoscaling_min` | `autoscaling_max` | -| data tier | ✓ | ✕ | ✓ | -| machine learning node | ✕ | ✓ | ✓ | -| coordinating and master nodes | ✓ | ✕ | ✕ | -| Kibana | ✓ | ✕ | ✕ | -| APM | ✓ | ✕ | ✕ | - -+ - -+ ✓ = Include the property. - -+ ✕ = Do not include the property. - -+ These rules match the behavior of the {{ecloud}} Console. - -+ * The `elasticsearch` object must contain the property `"autoscaling_enabled": true`. - - -## API request example [ec_api_request_example] - -Run this example API request to create a deployment with autoscaling: - - -```sh -curl -XPOST \ --H 'Content-Type: application/json' \ --H "Authorization: ApiKey $EC_API_KEY" \ -"https://api.elastic-cloud.com/api/v1/deployments" \ --d ' -{ - "name": "my-first-autoscaling-deployment", - "resources": { - "elasticsearch": [ - { - "ref_id": "main-elasticsearch", - "region": "us-east-1", - "plan": { - "autoscaling_enabled": true, - "cluster_topology": [ - { - "id": "hot_content", - "node_roles": [ - "remote_cluster_client", - "data_hot", - "transform", - "data_content", - "master", - "ingest" - ], - "zone_count": 2, - "elasticsearch": { - "node_attributes": { - "data": "hot" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "aws.data.highio.i3", - "size": { - "resource": "memory", - "value": 8192 - }, - "autoscaling_max": { - "value": 118784, - "resource": "memory" - } - }, - { - "id": "warm", - "node_roles": [ - "data_warm", - "remote_cluster_client" - ], - "zone_count": 2, - "elasticsearch": { - "node_attributes": { - "data": "warm" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "aws.data.highstorage.d3", - "size": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 118784, - "resource": "memory" - } - }, - { - "id": "cold", - "node_roles": [ - "data_cold", - "remote_cluster_client" - ], - "zone_count": 1, - "elasticsearch": { - "node_attributes": { - "data": "cold" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "aws.data.highstorage.d3", - "size": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 59392, - "resource": "memory" - } - }, - { - "id": "coordinating", - "zone_count": 2, - "node_roles": [ - "ingest", - "remote_cluster_client" - ], - "instance_configuration_id": "aws.coordinating.m5d", - "size": { - "value": 0, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - }, - { - 
"id": "master", - "node_roles": [ - "master" - ], - "zone_count": 3, - "instance_configuration_id": "aws.master.r5d", - "size": { - "value": 0, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - }, - { - "id": "ml", - "node_roles": [ - "ml", - "remote_cluster_client" - ], - "zone_count": 1, - "instance_configuration_id": "aws.ml.m5d", - "autoscaling_min": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 61440, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - } - ], - "elasticsearch": { - "version": "7.11.0" - }, - "deployment_template": { - "id": "aws-io-optimized-v2" - } - }, - "settings": { - "dedicated_masters_threshold": 6 - } - } - ], - "kibana": [ - { - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "us-east-1", - "plan": { - "cluster_topology": [ - { - "instance_configuration_id": "aws.kibana.r5d", - "zone_count": 1, - "size": { - "resource": "memory", - "value": 1024 - } - } - ], - "kibana": { - "version": "7.11.0" - } - }, - "ref_id": "main-kibana" - } - ], - "apm": [ - { - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "us-east-1", - "plan": { - "cluster_topology": [ - { - "instance_configuration_id": "aws.apm.r5d", - "zone_count": 1, - "size": { - "resource": "memory", - "value": 512 - } - } - ], - "apm": { - "version": "7.11.0" - } - }, - "ref_id": "main-apm" - } - ], - "enterprise_search": [] - } -} -' -``` - -::::{note} -Although autoscaling can scale some tiers by CPU, the primary measurement of tier size is memory. Limits on tier size are in terms of memory. -:::: - - diff --git a/deploy-manage/autoscaling/ec-autoscaling-example.md b/deploy-manage/autoscaling/ec-autoscaling-example.md deleted file mode 100644 index 7ca8f25a5..000000000 --- a/deploy-manage/autoscaling/ec-autoscaling-example.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling-example.html ---- - -# Autoscaling example [ec-autoscaling-example] - -To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample {{ech}} deployment. - -1. Enable autoscaling: - - * On an **existing deployment**, open the deployment **Edit** page to find the option to turn on autoscaling. - * When you create a new deployment, you can find the autoscaling option under **Advanced settings**. - - Once you confirm your changes or create a new deployment, autoscaling is activated with system default settings that you can adjust as needed (though for most use cases the default settings will likely suffice). - -2. View and adjust autoscaling settings on data tiers: - - 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings: - - | | | | - | --- | --- | --- | - | **Current size per zone** | **Maximum size per zone** | | - | 45GB storage | 1.41TB storage | | - | 1GB RAM | 32GB RAM | | - | Up to 2.5 vCPU | 5 vCPU | | - - The fault tolerance for the data tier is set to 2 availability zones. - - :::{image} ../../images/cloud-ec-ce-autoscaling-data-summary2.png - :alt: A screenshot showing sizing information for the autoscaled data tier - ::: - - 2. Use the dropdown boxes to adjust the current and/or the maximum size of the data tier. 
Capacity will be added to the hot content and data tier when required, based on its past and present storage usage, until it reaches the maximum size per zone. Any scaling events are applied simultaneously across availability zones. In this example, the tier has plenty of room to scale relative to its current size, and it will not scale above the maximum size setting. There is no minimum size setting since downward scaling is currently not supported on data tiers. - -3. View and adjust autoscaling settings on a machine learning instance: - - 1. From the deployment **Edit** page you can check the minimum and maximum size of your deployment’s machine learning instances. In this example, the machine learning instance has the following settings: - - | | | | - | --- | --- | --- | - | **Minimum size per zone** | **Maximum size per zone** | | - | 1GB RAM | 64GB RAM | | - | 0.5 vCPU up to 8 vCPU | 32 vCPU | | - - The fault tolerance for the machine learning instance is set to 1 availability zone. - - :::{image} ../../images/cloud-ec-ce-autoscaling-ml-summary2.png - :alt: A screenshot showing sizing information for the autoscaled machine learning node - ::: - - 2. Use the dropdown boxes to adjust the minimum and/or the maximum size of the data tier. Capacity will be added to or removed from the machine learning instances as needed. The need for a scaling event is determined by the expected memory and vCPU requirements for the currently configured machine learning job. Any scaling events are applied simultaneously across availability zones. Note that unlike data tiers, machine learning nodes do not have a **Current size per zone** setting. That setting is not needed since machine learning nodes support both upward and downward scaling. - -4. Over time, the volume of data and the size of any machine learning jobs in your deployment are likely to grow. Let’s assume that to meet storage requirements your hot data tier has scaled up to its maximum allowed size of 64GB RAM and 32 vCPU. At this point, a notification appears on the deployment overview page letting you know that the tier has scaled to capacity. You’ll also receive an alert by email. -5. If you expect a continued increase in either storage, memory, or vCPU requirements, you can use the **Maximum size per zone** dropdown box to adjust the maximum capacity settings for your data tiers and machine learning instances, as appropriate. And, you can always re-adjust these levels downward if the requirements change. - -As you can see, autoscaling greatly reduces the manual work involved to manage a deployment. The deployment capacity adjusts automatically as demands change, within the boundaries that you define. Check our main [Deployment autoscaling](../autoscaling.md) page for more information. - diff --git a/deploy-manage/autoscaling/ec-autoscaling.md b/deploy-manage/autoscaling/ec-autoscaling.md deleted file mode 100644 index ace565330..000000000 --- a/deploy-manage/autoscaling/ec-autoscaling.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud/current/ec-autoscaling.html ---- - -# Deployment autoscaling [ec-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. 
Check the following sections to learn more: - -* [Overview](../autoscaling.md#ec-autoscaling-intro) -* [When does autoscaling occur?](../autoscaling.md#ec-autoscaling-factors) -* [Notifications](../autoscaling.md#ec-autoscaling-notifications) -* [Restrictions and limitations](../autoscaling.md#ec-autoscaling-restrictions) -* [Enable or disable autoscaling](../autoscaling.md#ec-autoscaling-enable) -* [Update your autoscaling settings](../autoscaling.md#ec-autoscaling-update) - -You can also have a look at our [autoscaling example](ec-autoscaling-example.md), as well as a sample request to [create an autoscaled deployment through the API](ec-autoscaling-api-example.md). - - -## Overview [ec-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](../../manage-data/lifecycle/index-lifecycle-management.md). - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. - -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. 
- * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. - - -::::{note} -The number of availability zones for each component of your {{ech}} deployments is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ec-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](autoscaling-deciders.md) and [Proactive storage decider](autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. - - -## Notifications [ec-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance. - - -## Restrictions and limitations [ec-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. -* Trial deployments cannot be configured to autoscale beyond the normal Trial deployment size limits. The maximum size per zone is increased automatically from the Trial limit when you convert to a paid subscription. -* ELSER deployments do not scale automatically. For more information, refer to [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) and [Trained model autoscaling](../../explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md). - - -## Enable or disable autoscaling [ec-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. 
Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../autoscaling.md#ec-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ec-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](ec-autoscaling-example.md) of how the autoscaling settings work. diff --git a/deploy-manage/autoscaling/ece-autoscaling-api-example.md b/deploy-manage/autoscaling/ece-autoscaling-api-example.md deleted file mode 100644 index 881ac18ff..000000000 --- a/deploy-manage/autoscaling/ece-autoscaling-api-example.md +++ /dev/null @@ -1,264 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling-api-example.html ---- - -# Autoscaling through the API [ece-autoscaling-api-example] - -This example demonstrates how to use the Elastic Cloud Enterprise RESTful API to create a deployment with autoscaling enabled. 
- -The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ece-autoscaling-example.md). For more information about using the Elastic Cloud Enterprise API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/restful-api.md). - - -## Requirements [ece_requirements_3] - -Note the following requirements when you run this API request: - -* On Elastic Cloud Enterprise, autoscaling is supported for custom deployment templates on version 2.12 and above. To learn more, refer to [Updating custom templates to support `node_roles` and autoscaling](../deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md). -* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example. -* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties: - - * `id` - * `node_attributes` - * `node_roles` - -* The `size`, `autoscaling_min`, and `autoscaling_max` properties must be specified according to the following rules. This is because: - - * On data tiers only upward scaling is currently supported. - * On machine learning nodes both upward and downward scaling is supported. - * On all other components autoscaling is not currently supported. - - -$$$ece-autoscaling-api-example-requirements-table$$$ -+ - -| | | | | -| --- | --- | --- | --- | -| | `size` | `autoscaling_min` | `autoscaling_max` | -| data tier | ✓ | ✕ | ✓ | -| machine learning node | ✕ | ✓ | ✓ | -| coordinating and master nodes | ✓ | ✕ | ✕ | -| Kibana | ✓ | ✕ | ✕ | -| APM | ✓ | ✕ | ✕ | - -+ - -+ ✓ = Include the property. - -+ ✕ = Do not include the property. - -+ These rules match the behavior of the Elastic Cloud Enterprise user console. - -+ * The `elasticsearch` object must contain the property `"autoscaling_enabled": true`. 
- - -## API request example [ece_api_request_example] - -Run this example API request to create a deployment with autoscaling: - -```sh -curl -k -X POST -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments -H 'content-type: application/json' -d ' -{ - "name": "my-first-autoscaling-deployment", - "resources": { - "elasticsearch": [ - { - "ref_id": "main-elasticsearch", - "region": "ece-region", - "plan": { - "autoscaling_enabled": true, - "cluster_topology": [ - { - "id": "hot_content", - "node_roles": [ - "master", - "ingest", - "remote_cluster_client", - "data_hot", - "transform", - "data_content" - ], - "zone_count": 1, - "elasticsearch": { - "node_attributes": { - "data": "hot" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "data.default", - "size": { - "value": 4096, - "resource": "memory" - }, - "autoscaling_max": { - "value": 2097152, - "resource": "memory" - } - }, - { - "id": "warm", - "node_roles": [ - "data_warm", - "remote_cluster_client" - ], - "zone_count": 1, - "elasticsearch": { - "node_attributes": { - "data": "warm" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "data.highstorage", - "size": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 2097152, - "resource": "memory" - } - }, - { - "id": "cold", - "node_roles": [ - "data_cold", - "remote_cluster_client" - ], - "zone_count": 1, - "elasticsearch": { - "node_attributes": { - "data": "cold" - }, - "enabled_built_in_plugins": [] - }, - "instance_configuration_id": "data.highstorage", - "size": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 2097152, - "resource": "memory" - } - }, - { - "id": "coordinating", - "node_roles": [ - "ingest", - "remote_cluster_client" - ], - "zone_count": 1, - "instance_configuration_id": "coordinating", - "size": { - "value": 0, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - }, - { - "id": "master", - "node_roles": [ - "master" - ], - "zone_count": 1, - "instance_configuration_id": "master", - "size": { - "value": 0, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - }, - { - "id": "ml", - "node_roles": [ - "ml", - "remote_cluster_client" - ], - "zone_count": 1, - "instance_configuration_id": "ml", - "autoscaling_min": { - "value": 0, - "resource": "memory" - }, - "autoscaling_max": { - "value": 2097152, - "resource": "memory" - }, - "elasticsearch": { - "enabled_built_in_plugins": [] - } - } - ], - "elasticsearch": { - "version": "8.13.1" - }, - "deployment_template": { - "id": "default" - } - }, - "settings": { - "dedicated_masters_threshold": 6 - } - } - ], - "kibana": [ - { - "ref_id": "main-kibana", - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "ece-region", - "plan": { - "zone_count": 1, - "cluster_topology": [ - { - "instance_configuration_id": "kibana", - "size": { - "value": 1024, - "resource": "memory" - }, - "zone_count": 1 - } - ], - "kibana": { - "version": "8.13.1" - } - } - } - ], - "apm": [ - { - "ref_id": "main-apm", - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "ece-region", - "plan": { - "cluster_topology": [ - { - "instance_configuration_id": "apm", - "size": { - "value": 512, - "resource": "memory" - }, - "zone_count": 1 - } - ], - "apm": { - "version": "8.13.1" - } - } - } - ], - "enterprise_search": [] - } -} -' -``` - - -::::{note} -Although autoscaling can scale some tiers by CPU, the primary 
measurement of tier size is memory. Limits on tier size are in terms of memory. -:::: - - diff --git a/deploy-manage/autoscaling/ece-autoscaling-example.md b/deploy-manage/autoscaling/ece-autoscaling-example.md deleted file mode 100644 index b0a905359..000000000 --- a/deploy-manage/autoscaling/ece-autoscaling-example.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling-example.html ---- - -# Autoscaling example [ece-autoscaling-example] - -To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elastic Cloud Enterprise deployment. - -1. Enable autoscaling: - - * On an **existing deployment**, open the deployment **Edit** page to find the option to turn on autoscaling. - * When you create a new deployment, you can find the autoscaling option under **Advanced settings**. - - Once you confirm your changes or create a new deployment, autoscaling is activated with system default settings that you can adjust as needed (though for most use cases the default settings will likely suffice). - -2. View and adjust autoscaling settings on data tiers: - - 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings: - - | | | | - | --- | --- | --- | - | **Current size per zone** | **Maximum size per zone** | | - | 45GB storage | 1.41TB storage | | - | 1GB RAM | 32GB RAM | | - | Up to 2.5 vCPU | 5 vCPU | | - - The fault tolerance for the data tier is set to 2 availability zones. - - :::{image} ../../images/cloud-enterprise-ec-ce-autoscaling-data-summary2.png - :alt: A screenshot showing sizing information for the autoscaled data tier - ::: - - 2. Use the dropdown boxes to adjust the current and/or the maximum size of the data tier. Capacity will be added to the hot content and data tier when required, based on its past and present storage usage, until it reaches the maximum size per zone. Any scaling events are applied simultaneously across availability zones. In this example, the tier has plenty of room to scale relative to its current size, and it will not scale above the maximum size setting. There is no minimum size setting since downward scaling is currently not supported on data tiers. - -3. View and adjust autoscaling settings on a machine learning instance: - - 1. From the deployment **Edit** page you can check the minimum and maximum size of your deployment’s machine learning instances. In this example, the machine learning instance has the following settings: - - | | | | - | --- | --- | --- | - | **Minimum size per zone** | **Maximum size per zone** | | - | 1GB RAM | 64GB RAM | | - | 0.5 vCPU up to 8 vCPU | 32 vCPU | | - - The fault tolerance for the machine learning instance is set to 1 availability zone. - - :::{image} ../../images/cloud-enterprise-ec-ce-autoscaling-ml-summary2.png - :alt: A screenshot showing sizing information for the autoscaled machine learning node - ::: - - 2. Use the dropdown boxes to adjust the minimum and/or the maximum size of the data tier. Capacity will be added to or removed from the machine learning instances as needed. The need for a scaling event is determined by the expected memory and vCPU requirements for the currently configured machine learning job. Any scaling events are applied simultaneously across availability zones. 
Note that unlike data tiers, machine learning nodes do not have a **Current size per zone** setting. That setting is not needed since machine learning nodes support both upward and downward scaling. - -4. Over time, the volume of data and the size of any machine learning jobs in your deployment are likely to grow. Let’s assume that to meet storage requirements your hot data tier has scaled up to its maximum allowed size of 64GB RAM and 32 vCPU. At this point, a notification appears on the deployment overview page indicating that the tier has scaled to capacity. -5. If you expect a continued increase in either storage, memory, or vCPU requirements, you can use the **Maximum size per zone** dropdown box to adjust the maximum capacity settings for your data tiers and machine learning instances, as appropriate. And, you can always re-adjust these levels downward if the requirements change. - -As you can see, autoscaling greatly reduces the manual work involved to manage a deployment. The deployment capacity adjusts automatically as demands change, within the boundaries that you define. Check our main [Deployment autoscaling](../autoscaling.md) page for more information. - diff --git a/deploy-manage/autoscaling/ece-autoscaling.md b/deploy-manage/autoscaling/ece-autoscaling.md deleted file mode 100644 index 682e76e05..000000000 --- a/deploy-manage/autoscaling/ece-autoscaling.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-autoscaling.html ---- - -# Deployment autoscaling [ece-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. Check the following sections to learn more: - -* [Overview](../autoscaling.md#ece-autoscaling-intro) -* [When does autoscaling occur?](../autoscaling.md#ece-autoscaling-factors) -* [Notifications](../autoscaling.md#ece-autoscaling-notifications) -* [Restrictions and limitations](../autoscaling.md#ece-autoscaling-restrictions) -* [Enable or disable autoscaling](../autoscaling.md#ece-autoscaling-enable) -* [Update your autoscaling settings](../autoscaling.md#ece-autoscaling-update) - -You can also have a look at our [autoscaling example](ece-autoscaling-example.md), as well as a sample request to [create an autoscaled deployment through the API](ece-autoscaling-api-example.md). - - -## Overview [ece-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. 
When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html). - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. - -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. - * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. - - -::::{note} -For any Elastic Cloud Enterprise Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ece-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](autoscaling-deciders.md) and [Proactive storage decider](autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. 
Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. - - -## Notifications [ece-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, a notice appears on the deployment overview page prompting you to adjust your autoscaling settings in order to ensure optimal performance. - -A warning is also issued in the ECE `service-constructor` logs with the field `labels.autoscaling_notification_type` and a value of `data-tier-at-limit` (for a fully scaled data tier) or `ml-tier-at-limit` (for a fully scaled machine learning node). The warning is indexed in the `logging-and-metrics` deployment, so you can use that event to [configure an email notification](../../explore-analyze/alerts-cases/watcher/actions-email.md). - - -## Restrictions and limitations [ece-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. -* In the event that an override is set for the instance size or disk quota multiplier for an instance by means of the [Instance Overrides API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-all-instances-settings-overrides), autoscaling will be effectively disabled. It’s recommended to avoid adjusting the instance size or disk quota multiplier for an instance that uses autoscaling, since the setting prevents autoscaling. - - -## Enable or disable autoscaling [ece-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../autoscaling.md#ece-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ece-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. 
In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](ece-autoscaling-example.md) of how the autoscaling settings work. - -::::{note} -On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings. -:::: diff --git a/deploy-manage/autoscaling/ech-autoscaling-example.md b/deploy-manage/autoscaling/ech-autoscaling-example.md deleted file mode 100644 index 556d0f1a3..000000000 --- a/deploy-manage/autoscaling/ech-autoscaling-example.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling-example.html ---- - -# Autoscaling example [ech-autoscaling-example] - -To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elasticsearch Add-On for Heroku deployment. - -1. Enable autoscaling: - - * On an **existing deployment**, open the deployment **Edit** page to find the option to turn on autoscaling. - * When you create a new deployment, you can find the autoscaling option under **Advanced settings**. - - Once you confirm your changes or create a new deployment, autoscaling is activated with system default settings that you can adjust as needed (though for most use cases the default settings will likely suffice). - -2. View and adjust autoscaling settings on data tiers: - - 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings: - - | | | | - | --- | --- | --- | - | **Current size per zone** | **Maximum size per zone** | | - | 45GB storage | 1.41TB storage | | - | 1GB RAM | 32GB RAM | | - | Up to 2.5 vCPU | 5 vCPU | | - - The fault tolerance for the data tier is set to 2 availability zones. - - :::{image} ../../images/cloud-heroku-ec-ce-autoscaling-data-summary2.png - :alt: A screenshot showing sizing information for the autoscaled data tier - ::: - - 2. Use the dropdown boxes to adjust the current and/or the maximum size of the data tier. Capacity will be added to the hot content and data tier when required, based on its past and present storage usage, until it reaches the maximum size per zone. Any scaling events are applied simultaneously across availability zones. 
In this example, the tier has plenty of room to scale relative to its current size, and it will not scale above the maximum size setting. There is no minimum size setting since downward scaling is currently not supported on data tiers. - -3. View and adjust autoscaling settings on a machine learning instance: - - 1. From the deployment **Edit** page you can check the minimum and maximum size of your deployment’s machine learning instances. In this example, the machine learning instance has the following settings: - - | | | | - | --- | --- | --- | - | **Minimum size per zone** | **Maximum size per zone** | | - | 1GB RAM | 64GB RAM | | - | 0.5 vCPU up to 8 vCPU | 32 vCPU | | - - The fault tolerance for the machine learning instance is set to 1 availability zone. - - :::{image} ../../images/cloud-heroku-ec-ce-autoscaling-ml-summary2.png - :alt: A screenshot showing sizing information for the autoscaled machine learning node - ::: - - 2. Use the dropdown boxes to adjust the minimum and/or the maximum size of the data tier. Capacity will be added to or removed from the machine learning instances as needed. The need for a scaling event is determined by the expected memory and vCPU requirements for the currently configured machine learning job. Any scaling events are applied simultaneously across availability zones. Note that unlike data tiers, machine learning nodes do not have a **Current size per zone** setting. That setting is not needed since machine learning nodes support both upward and downward scaling. - -4. Over time, the volume of data and the size of any machine learning jobs in your deployment are likely to grow. Let’s assume that to meet storage requirements your hot data tier has scaled up to its maximum allowed size of 64GB RAM and 32 vCPU. At this point, a notification appears on the deployment overview page letting you know that the tier has scaled to capacity. You’ll also receive an alert by email. -5. If you expect a continued increase in either storage, memory, or vCPU requirements, you can use the **Maximum size per zone** dropdown box to adjust the maximum capacity settings for your data tiers and machine learning instances, as appropriate. And, you can always re-adjust these levels downward if the requirements change. - -As you can see, autoscaling greatly reduces the manual work involved to manage a deployment. The deployment capacity adjusts automatically as demands change, within the boundaries that you define. Check our main [Deployment autoscaling](../autoscaling.md) page for more information. - diff --git a/deploy-manage/autoscaling/ech-autoscaling.md b/deploy-manage/autoscaling/ech-autoscaling.md deleted file mode 100644 index eeef25ab3..000000000 --- a/deploy-manage/autoscaling/ech-autoscaling.md +++ /dev/null @@ -1,123 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-autoscaling.html ---- - -# Deployment autoscaling [ech-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. 
Check the following sections to learn more: - -* [Overview](../autoscaling.md#ech-autoscaling-intro) -* [When does autoscaling occur?](../autoscaling.md#ech-autoscaling-factors) -* [Notifications](../autoscaling.md#ech-autoscaling-notifications) -* [Restrictions and limitations](../autoscaling.md#ech-autoscaling-restrictions) -* [Enable or disable autoscaling](../autoscaling.md#ech-autoscaling-enable) -* [Update your autoscaling settings](../autoscaling.md#ech-autoscaling-update) - -You can also have a look at our [autoscaling example](ech-autoscaling-example.md). - - -## Overview [ech-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your index lifecycle management policies. - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. - -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. - * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. 
- - -::::{note} -For any Elasticsearch Add-On for Heroku Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ech-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](autoscaling-deciders.md) and [Proactive storage decider](autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. - - -## Notifications [ech-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance. - - -## Restrictions and limitations [ech-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. - - -## Enable or disable autoscaling [ech-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. 
All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../autoscaling.md#ech-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ech-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](ech-autoscaling-example.md) of how the autoscaling settings work. diff --git a/deploy-manage/autoscaling/trained-model-autoscaling.md b/deploy-manage/autoscaling/trained-model-autoscaling.md index ce46bd974..ac4d2b1ca 100644 --- a/deploy-manage/autoscaling/trained-model-autoscaling.md +++ b/deploy-manage/autoscaling/trained-model-autoscaling.md @@ -1,29 +1,215 @@ --- mapped_urls: - https://www.elastic.co/guide/en/serverless/current/general-ml-nlp-auto-scale.html - - https://www.elastic.co/guide/en/serverless/current/general-ml-nlp-auto-scale.html + - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-auto-scale.html +applies_to: + deployment: + ess: + eck: + ece: + serverless: --- # Trained model autoscaling -% What needs to be done: Align serverless/stateful +You can enable autoscaling for each of your trained model deployments. Autoscaling allows {{es}} to automatically adjust the resources the model deployment can use based on the workload demand. + +There are two ways to enable autoscaling: + +* through APIs by enabling adaptive allocations +* in {{kib}} by enabling adaptive resources + +::::{important} +To fully leverage model autoscaling in {{ech}}, {{ece}}, and {{eck}}, it is highly recommended to enable [{{es}} deployment autoscaling](../../deploy-manage/autoscaling.md). 
+::::
+
+Trained model autoscaling is available for {{serverless-short}}, {{ech}}, {{ece}}, and {{eck}} deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.
+
+:::{admonition} Trained model auto-scaling for self-managed deployments
+The available resources of self-managed deployments are static, so trained model autoscaling is not applicable. However, available resources are still segmented based on the settings described in this section.
+:::
+
+{{serverless-full}} Security and Observability projects are only charged for data ingestion and retention. They are not charged for processing power (VCU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results.
+
+## Enabling autoscaling through APIs - adaptive allocations [enabling-autoscaling-through-apis-adaptive-allocations]
+$$$nlp-model-adaptive-resources$$$
+
+Model allocations are independent units of work for NLP tasks. If you set the number of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.)
+
+When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitly set the minimum and maximum number of allocations; autoscaling occurs within these limits.
+
+::::{note}
+If you set the minimum number of allocations to 1, you will be charged even if the system is not using those resources.
+::::
+
+You can enable adaptive allocations by using:
+
+* the create inference endpoint API for [ELSER](../../explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md) and [E5 and models uploaded through Eland](../../explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md) that are used as inference services.
+* the [start trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-trained-model-deployment) or [update trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-update-trained-model-deployment) APIs for trained models that are deployed on {{ml}} nodes.
+
+If the new allocations fit on the current {{ml}} nodes, they are started immediately. If more resource capacity is needed to create new model allocations and {{ml}} autoscaling is enabled, your {{ml}} node is scaled up to provide enough resources for the new allocations. The number of model allocations can be scaled down to 0, but cannot be scaled up to more than 32 allocations unless you explicitly set a higher maximum number of allocations.
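+
+For example, the following request is a minimal sketch of enabling adaptive allocations when starting a trained model deployment. The model ID, deployment ID, and allocation limits are illustrative values that you would replace with your own:
+
+```console
+POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=my_elser_deployment
+{
+  "adaptive_allocations": {
+    "enabled": true,
+    "min_number_of_allocations": 1,
+    "max_number_of_allocations": 4
+  }
+}
+```
+
+With a configuration like this, {{es}} adds or removes allocations between the minimum and maximum as the load on the deployment changes.
+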
+Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference).
+
+:::{note}
+When you create inference endpoints on {{serverless-short}} using {{kib}}, adaptive allocations are automatically turned on, and there is no option to disable them.
+:::
+
+### Optimizing for typical use cases [optimizing-for-typical-use-cases]
+
+You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput is higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency is lower during search processes.
+
+* If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`).
+* If you want to optimize for search, set the number of threads to greater than `1`. Increasing the number of threads makes the search processes more performant.
+
+## Enabling autoscaling in {{kib}} - adaptive resources [enabling-autoscaling-in-kibana-adaptive-resources]
+
+You can enable adaptive resources for your models when starting or updating the model deployment. Adaptive resources make it possible for {{es}} to scale the available resources up or down based on the load on the process. This can help you to manage performance and cost more easily. When adaptive resources are enabled, the number of vCPUs that the model deployment uses is set automatically based on the current load. When the load is high, the number of vCPUs that the process can use is automatically increased. When the load is low, the number of vCPUs that the process can use is automatically decreased.
+
+You can choose from three levels of resource usage for your trained model deployment; autoscaling occurs within the selected level's range.
+
+Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the settings for the level you selected.
+
+:::{image} ../../images/machine-learning-ml-nlp-deployment-id-elser-v2.png
+:alt: ELSER deployment with adaptive resources enabled.
+:screenshot:
+:width: 500px
+:::
+
+In {{serverless-full}}, Search projects are given access to more processing resources, while Security and Observability projects have lower limits. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations.
+
+On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the **Adaptive resources** control is not displayed in {{kib}} for Observability and Security projects.
+
+## Model deployment resource matrix [model-deployment-resource-matrix]
+
+The resources used for trained model deployments depend on three factors:
+
+* your cluster environment ({{serverless-short}}, Cloud (ECE, ECK, ECH), or self-managed)
+* the use case you optimize the model deployment for (ingest or search)
+* whether model autoscaling is enabled with adaptive allocations or adaptive resources (dynamic resources), or disabled (static resources)
+
+If you use a self-managed cluster or ECK, vCPU level ranges are derived from the `total_ml_processors` and `max_single_ml_node_processors` values. Use the [get {{ml}} info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-info) to check these values.
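+
+For example, you can retrieve the current limits with the following request; the `total_ml_processors` and `max_single_ml_node_processors` values appear in the `limits` section of the response (field placement may vary by version):
+
+```console
+GET _ml/info
+```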
+ +The following tables show you the number of allocations, threads, and vCPUs available in ECE and ECH when adaptive resources are enabled or disabled. + +::::{note} +On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects. +:::: + +### Ingest optimized + +In case of ingest-optimized deployments, we maximize the number of model allocations. + +#### Adaptive resources enabled + +::::{tab-set} + +:::{tab-item} ECH, ECE + +| Level | Allocations | Threads | vCPUs | +| --- | --- | --- | --- | +| Low | 0 to 2 if available, dynamically | 1 | 0 to 2 if available, dynamically | +| Medium | 1 to 32 dynamically | 1 | 1 to the smaller of 32 or the limit set in the Cloud console, dynamically | +| High | 1 to limit set in the Cloud console *, dynamically | 1 | 1 to limit set in the Cloud console, dynamically | + +\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. + +::: + +:::{tab-item} {{serverless-short}} + +| Level | Allocations | Threads | VCUs | +| --- | --- | --- | --- | +| Low | 0 to 2 dynamically | 1 | 0 to 16 dynamically | +| Medium | 1 to 32 dynamically | 1 | 8 to 256 dynamically | +| High | 1 to 512 for Search
1 to 128 for Security and Observability
| 1 | 8 to 4096 for Search
8 to 1024 for Security and Observability
| + +::: + +:::: + +#### Adaptive resources disabled + +::::{tab-set} + +:::{tab-item} ECH, ECE + +| Level | Allocations | Threads | vCPUs | +| --- | --- | --- | --- | +| Low | 2 if available, otherwise 1, statically | 1 | 2 if available | +| Medium | the smaller of 32 or the limit set in the Cloud console, statically | 1 | 32 if available | +| High | Maximum available set in the Cloud console *, statically | 1 | Maximum available set in the Cloud console, statically | + +\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. + +::: + +:::{tab-item} {{serverless-short}} + +| Level | Allocations | Threads | VCUs | +| --- | --- | --- | --- | +| Low | Exactly 2 | 1 | 16 | +| Medium | Exactly 32 | 1 | 256 | +| High | 512 for Search
No static allocations for Security and Observability
| 1 | 4096 for Search
No static allocations for Security and Observability
| + +::: + +:::: + +### Search optimized + +In case of search-optimized deployments, we maximize the number of threads. The maximum number of threads that can be claimed depends on the hardware your architecture has. + +#### Adaptive resources enabled + +::::{tab-set} + +:::{tab-item} ECH, ECE + +| Level | Allocations | Threads | vCPUs | +| --- | --- | --- | --- | +| Low | 1 | 2 | 2 | +| Medium | 1 to 2 (if threads=16) dynamically | maximum that the hardware allows (for example, 16) | 1 to 32 dynamically | +| High | 1 to limit set in the Cloud console *, dynamically | maximum that the hardware allows (for example, 16) | 1 to limit set in the Cloud console, dynamically | + +\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. + +::: + +:::{tab-item} {{serverless-short}} + +| Level | Allocations | Threads | VCUs | +| --- | --- | --- | --- | +| Low | 0 to 1 dynamically | Always 2 | 0 to 16 dynamically | +| Medium | 1 to 2 (if threads=16), dynamically | Maximum (for example, 16) | 8 to 256 dynamically | +| High | 1 to 32 (if threads=16), dynamically
1 to 128 for Security and Observability
| Maximum (for example, 16) | 8 to 4096 for Search
8 to 1024 for Security and Observability
| + +::: + +:::: + +#### Adaptive resources disabled -% GitHub issue: https://github.com/elastic/docs-projects/issues/344 +::::{tab-set} -% Scope notes: Serverless and stateful pages are very similar, might need to merge them together or create subpages +:::{tab-item} ECH, ECE -% Use migrated content from existing pages that map to this page: +| Level | Allocations | Threads | vCPUs | +| --- | --- | --- | --- | +| Low | 1 if available, statically | 2 | 2 if available | +| Medium | 2 (if threads=16) statically | maximum that the hardware allows (for example, 16) | 32 if available | +| High | Maximum available set in the Cloud console *, statically | maximum that the hardware allows (for example, 16) | Maximum available set in the Cloud console, statically | -% - [ ] ./raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md -% - [ ] ./raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md +\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +::: -$$$enabling-autoscaling-in-kibana-adaptive-resources$$$ +:::{tab-item} {{serverless-short}} -$$$enabling-autoscaling-through-apis-adaptive-allocations$$$ +| Level | Allocations | Threads | VCUs | +| --- | --- | --- | --- | +| Low | 1 statically | Always 2 | 16 | +| Medium | 2 statically (if threads=16) | Maximum (for example, 16) | 256 | +| High | 32 statically (if threads=16) for Search
No static allocations for Security and Observability
| Maximum (for example, 16) | 4096 for Search
No static allocations for Security and Observability
| -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +::: -* [/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md](/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md) -* [/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md](/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md) \ No newline at end of file +:::: diff --git a/deploy-manage/cloud-organization.md b/deploy-manage/cloud-organization.md index d78661682..b022b7717 100644 --- a/deploy-manage/cloud-organization.md +++ b/deploy-manage/cloud-organization.md @@ -9,7 +9,7 @@ applies_to: # Manage your Cloud organization [ec-organizations] -When you sign up to {{ecloud}}, you create an organization. This organization is the umbrella for all of your {{ecloud}} resources, users, and account settings. Every organization has a unique identifier. +When you [sign up for {{ecloud}}](/deploy-manage/deploy/elastic-cloud/create-an-organization.md), you create an organization. This organization is the umbrella for all of your {{ecloud}} resources, users, and account settings. Every organization has a unique identifier. You can perform the following tasks to manage your Cloud organization: diff --git a/deploy-manage/deploy.md b/deploy-manage/deploy.md index 52702cced..61b924696 100644 --- a/deploy-manage/deploy.md +++ b/deploy-manage/deploy.md @@ -2,19 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/intro.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro-deploy.html + - https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/get-elastic.html --- -% What needs to be done: Write from scratch - -% GitHub issue: https://github.com/elastic/docs-projects/issues/334 - -% Scope notes: does plan for production content go here? With orchestrator layer - explain relationship between orchestrator and clusters how to help people to be aware of the other products that might need to be deployed? "these are the core products, you might add others on" describe relationship between orchestrators and ES Explain that when using orchestrators a lot of the reference configuration of the orchestrated applications is still applicable. The user needs to learn how to configure the applications when using an orchestrator, then afterwards, the documentation of the application will be valid and applicable to their use case. When a certain feature or configuration is not applicable in some deployment types, the document will specify it. - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/docs-content/serverless/intro.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/elasticsearch-intro-deploy.md - # Deploy Whether you're planning to use Elastic's pre-built solutions or Serverless projects, build your own applications with {{es}}, or analyze your data using {{kib}} tools, you'll need to deploy Elastic first. @@ -23,13 +13,20 @@ This page will help you understand your deployment options and choose the approa ## Core components -Every Elastic deployment requires {{es}} as its core data store and search/analytics engine. -Additionally, {{kib}} provides the user interface for all Elastic solutions and Serverless projects. It is required for most use cases, from data exploration to monitoring and security analysis. +All deployments include **{{es}}**. 
{{es}} is the distributed search and analytics engine, scalable data store, and vector database at the heart of all Elastic solutions. -Your choice of deployment type determines how you'll set up and manage these core components, plus any additional components you need. +In most cases, you also need to deploy **{{kib}}**. {{kib}} provides the user interface for all Elastic solutions and Serverless projects. It’s a powerful tool for visualizing and analyzing your data, and for managing and monitoring the {{stack}}. Although {{kib}} is not required to use {{es}}, it's required for most use cases, and is included by default when you deploy using certain deployment methods. -:::{tip} -Learn more about the [{{stack}}](/get-started/the-stack.md) to understand the core and optional components of an Elastic deployment. +Your choice of deployment type determines how you'll set up and manage these core components, as well as any additional components you need. + +:::{admonition} Other {{stack}} components +This section focuses on deploying and managing {{es}} and {{kib}}, as well as supporting orchestration technologies. However, depending on your use case, you might need to deploy [other {{stack}} components](/get-started/the-stack.md). For example, you might need to add components to ingest logs or metrics. + +To learn how to deploy optional {{stack}} components, refer to the following sections: +* [Fleet and Elastic Agent](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) +* [APM](/solutions/observability/apps/application-performance-monitoring-apm.md) +* [Beats](asciidocalypse://docs/beats/docs/reference/index.md) +* [Logstash](asciidocalypse://docs/logstash/docs/reference/index.md) ::: ## Choosing your deployment type @@ -41,10 +38,12 @@ Learn more about the [{{stack}}](/get-started/the-stack.md) to understand the co #### Managed by Elastic -If you want to focus on using Elastic products rather than managing infrastructure, choose: +If you want to focus on using Elastic products rather than managing infrastructure, choose one of the following options: -- **Serverless**: Zero operational overhead, automatic scaling and updates, latest features -- **Cloud hosted**: Balance of control and managed operations, choice of resources and regions +- **{{serverless-full}}**: Zero operational overhead, automatic scaling and updates, latest features +- **{{ech}}**: Balance of control and managed operations, choice of resources and regions + +Both of these options use [{{ecloud}}](/deploy-manage/deploy/elastic-cloud.md) as the orchestration platform. #### Self-hosted options @@ -52,11 +51,14 @@ If you need to run Elastic on your infrastructure, choose between a fully self-m - **Fully self-managed**: Complete control and responsibility for your Elastic deployment - **With orchestration**: - - **Elastic Cloud on Kubernetes (ECK)**: If you need Kubernetes-native orchestration - - **Elastic Cloud Enterprise (ECE)**: If you need a multi-tenant orchestration platform + - **{{eck}} (ECK)**: If you need Kubernetes-native orchestration + - **{{ece}} (ECE)**: If you need a multi-tenant orchestration platform -:::::{note} -:::{dropdown} About orchestration +::::{tip} +Documentation will specify when certain features or configurations are not applicable to specific deployment types. +:::: + +### About orchestration An orchestrator automates the deployment and management of multiple Elastic clusters, handling tasks like scaling, upgrades, and monitoring. 
@@ -66,27 +68,21 @@ Consider orchestration if you: - Have a Kubernetes environment (ECK) - Need to build a multi-tenant platform (ECE) -Orchestrators manage the lifecycle of your Elastic deployments but don't change how the core products work. When using ECK or ECE: +Orchestrators manage the lifecycle of your Elastic deployments but don't change how the core products work. When using an orchestrated deployment: - You'll still use the same Elasticsearch and Kibana features and configurations - Most product documentation remains applicable - You can add other Elastic products as needed - The orchestrator handles operational tasks while you focus on using and configuring the products -::::{tip} -Documentation will specify when certain features or configurations are not applicable to specific deployment types. -:::: -::: -::::: - ### Versioning and compatibility In {{serverless-full}}, you automatically get access to the latest versions of Elastic features and you don't need to manage version compatibility. -With other deployment types ({{ecloud}} Hosted, ECE, and ECK), you control which {{stack}} versions you deploy and when you upgrade. The ECE and ECK orchestrators themselves also receive regular version updates, independent of the {{stack}} versions they manage. +With other deployment types (ECH, ECE, and ECK), you control which {{stack}} versions you deploy and when you upgrade. The ECE and ECK orchestrators themselves also receive regular version updates, independent of the {{stack}} versions they manage. Consider this when choosing your deployment type: -- Choose Serverless if you want automatic access to the latest features and don't want to manage version compatibility +- Choose {{serverless-full}} if you want automatic access to the latest features and don't want to manage version compatibility - Choose other deployment types if you need more control over version management :::{tip} @@ -95,8 +91,8 @@ Learn more about [versioning and availability](/get-started/versioning-availabil ### Cost considerations -- **Serverless**: Pay for what you use -- **Cloud hosted**: Subscription-based with resource allocation +- **{{serverless-full}}**: Pay for what you use +- **{{ech}}**: Subscription-based with resource allocation - **Self-hosted options**: Infrastructure costs plus operational overhead mean a higher total cost of ownership (TCO) :::::{tip} diff --git a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md index bdb668f1a..b6ac2f600 100644 --- a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md +++ b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md @@ -1084,7 +1084,7 @@ Similar to the `node_roles` example, the following one is also based on the `def To add support for autoscaling, the deployment template has to meet the following requirements: 1. Already has support for `node_roles`. -2. Contains the `size`, `autoscaling_min`, and `autoscaling_max` fields, according to the rules specified in the [autoscaling requirements table](../../autoscaling/ece-autoscaling-api-example.md#ece-autoscaling-api-example-requirements-table). +2. Contains the `size`, `autoscaling_min`, and `autoscaling_max` fields, according to the rules specified in the [autoscaling requirements table](../../autoscaling/autoscaling-in-ece-and-ech.md#ece-autoscaling-api-example-requirements-table). 3. 
Contains the `autoscaling_enabled` fields on the `elasticsearch` resource. If necessary, the values chosen for each field can be based on the reference example. @@ -1094,7 +1094,7 @@ If necessary, the values chosen for each field can be based on the reference exa To update a custom deployment template: -1. Add the `autoscaling_min` and `autoscaling_max` fields to the Elasticsearch topology elements (check [Autoscaling through the API](../../autoscaling/ece-autoscaling-api-example.md)). +1. Add the `autoscaling_min` and `autoscaling_max` fields to the Elasticsearch topology elements (check [Autoscaling through the API](../../autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-api-example)). 2. Add the `autoscaling_enabled` fields to the `elasticsearch` resource. Set this field to `true` in case you want autoscaling enabled by default, and to `false` otherwise. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md index 7c60a6059..f1ff134bf 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md @@ -55,7 +55,7 @@ Before you start creating your own deployment templates, you should have: [tagge * For data nodes, autoscaling up is supported based on the amount of available storage. You can set the default initial size of the node and the default maximum size that the node can be autoscaled up to. * For machine learning nodes, autoscaling is supported based on the expected memory requirements for machine learning jobs. You can set the default minimum size that the node can be scaled down to and the default maximum size that the node can be scaled up to. If autoscaling is not enabled for the deployment, the "minimum" value will instead be the default initial size of the machine learning node. - The default values provided by the deployment template can be adjusted at any time. Check our [Autoscaling example](../../autoscaling/ece-autoscaling-example.md) for details about these settings. Nodes and components that currently support autoscaling are indicated by a `supports autoscaling` badge on the **Configure instances** page. + The default values provided by the deployment template can be adjusted at any time. Check our [Autoscaling example](../../autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-example) for details about these settings. Nodes and components that currently support autoscaling are indicated by a `supports autoscaling` badge on the **Configure instances** page. * Add [fault tolerance](ece-ha.md) (high availability) by using more than one availability zone. diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md index 0ba4474c6..b06359d6b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md @@ -56,7 +56,7 @@ Other sections of the documentation also include relevant configuration options * [Remote clusters](/deploy-manage/remote-clusters/eck-remote-clusters.md) -* [Autoscaling](../../autoscaling/deployments-autoscaling-on-eck.md) +* [Autoscaling](../../autoscaling/autoscaling-in-eck.md#k8s-autoscaling) * [Stack monitoring](/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md): Monitor your {{es}} cluster smoothly with the help of ECK. 
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md index ff5f06c4d..fbfe35bc6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md @@ -10,7 +10,7 @@ mapped_pages: The instructions in this section describe how to connect the operator and managed resources to the Istio service mesh and assume that Istio is already installed and configured on your Kubernetes cluster. To know more about Istio and how to install it, check the [product documentation](https://istio.io). -These instructions have been tested with Istio 1.6.1. Older or newer versions of Istio might require additional configuration steps not documented here. +These instructions have been tested with Istio 1.24.3. Older or newer versions of Istio might require additional configuration steps not documented here. ::::{warning} Some Elastic Stack features such as [Kibana alerting and actions](/explore-analyze/alerts-cases.md) rely on the Elasticsearch API keys feature which requires TLS to be enabled at the application level. If you want to use these features, you should not disable the self-signed certificate on the Elasticsearch resource and enable `PERMISSIVE` mode for the Elasticsearch service through a `DestinationRule` or `PeerAuthentication` resource. Strict mTLS mode is currently not compatible with Elastic Stack features requiring TLS to be enabled for the Elasticsearch HTTP layer. diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md index 569de652b..3bab8364b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md @@ -29,6 +29,6 @@ The following sections describe how to customize a {{kib}} deployment to suit yo * [Disable TLS](k8s-kibana-http-configuration.md#k8s-kibana-http-disable-tls) * [Install {{kib}} plugins](k8s-kibana-plugins.md) -* [Autoscaling stateless applications](../../autoscaling/autoscaling-stateless-applications-on-eck.md): Use [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) for {{kib}} or other stateless applications. +* [Autoscaling stateless applications](../../autoscaling/autoscaling-in-eck.md#k8s-stateless-autoscaling): Use [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) for {{kib}} or other stateless applications. 
diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md index 31103f59b..48bb6f4a1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md @@ -325,7 +325,7 @@ To avoid this, explicitly define the requests and limits mandated by your enviro :::{image} ../../../images/cloud-on-k8s-metrics-explorer-cpu.png :alt: cgroup CPU perforamce chart -:class: screenshot +:screenshot: ::: @@ -355,6 +355,6 @@ The **Cgroup usage** curve shows that the CPU usage of this container has been s :::{image} ../../../images/cloud-on-k8s-cgroups-cfs-stats.png :alt: cgroup CPU perforamce chart -:class: screenshot +:screenshot: ::: diff --git a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md index 2376f6ff0..4ac0a6668 100644 --- a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md +++ b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md @@ -22,7 +22,7 @@ When orchestrating any of these applications, also consider the following topics * [Access Elastic Stack services](accessing-services.md) * [Customize Pods](customize-pods.md) * [Manage compute resources](manage-compute-resources.md) -* [Autoscaling stateless applications](../../autoscaling/autoscaling-stateless-applications-on-eck.md) +* [Autoscaling stateless applications](../../autoscaling/autoscaling-in-eck.md#k8s-stateless-autoscaling) * [Elastic Stack configuration policies](elastic-stack-configuration-policies.md) * [Upgrade the Elastic Stack version](../../upgrade/deployment-or-cluster.md) * [Connect to external Elastic resources](connect-to-external-elastic-resources.md) \ No newline at end of file diff --git a/deploy-manage/deploy/deployment-comparison.md b/deploy-manage/deploy/deployment-comparison.md index d7113f32e..4c71a8159 100644 --- a/deploy-manage/deploy/deployment-comparison.md +++ b/deploy-manage/deploy/deployment-comparison.md @@ -1,5 +1,5 @@ -# Deployment comparison reference +# Compare deployment options This reference provides detailed comparisons of features and capabilities across Elastic's deployment options: self-managed deployments, Elastic Cloud Hosted, and Serverless. For a high-level overview of deployment types and guidance on choosing between them, see the [overview](../deploy.md). 
diff --git a/deploy-manage/deploy/kibana-reporting-configuration.md b/deploy-manage/deploy/kibana-reporting-configuration.md index 25cb0281e..c521cfee9 100644 --- a/deploy-manage/deploy/kibana-reporting-configuration.md +++ b/deploy-manage/deploy/kibana-reporting-configuration.md @@ -4,7 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/kibana/current/reporting-production-considerations.html --- -# Kibana reporting configuration +# Configure reporting % What needs to be done: Refine diff --git a/deploy-manage/index.md b/deploy-manage/index.md index e4f0ce85a..8b15874d7 100644 --- a/deploy-manage/index.md +++ b/deploy-manage/index.md @@ -1,11 +1,7 @@ --- mapped_urls: - - https://www.elastic.co/guide/en/kibana/current/introduction.html - https://www.elastic.co/guide/en/kibana/current/setup.html - - https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/get-elastic.html - - https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html - https://www.elastic.co/guide/en/cloud/current/ec-faq-technical.html - - https://www.elastic.co/guide/en/elastic-stack/current/overview.html - https://www.elastic.co/guide/en/elastic-stack/current/index.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-administering-deployments.html - https://www.elastic.co/guide/en/kibana/current/management.html @@ -13,51 +9,61 @@ mapped_urls: # Deploy and manage -% What needs to be done: Write from scratch +To get started with Elastic, you need to choose a deployment method and deploy {{stack}} components. -% GitHub issue: https://github.com/elastic/docs-projects/issues/332 +In this section, you'll learn about how to deploy and manage all aspects of your Elastic environment. You'll learn how to design resilient, highly available clusters and deployments, and how to maintain and scale your environment to grow with your use case. -% Scope notes: Explain that a basic deployment always has ES, usually has Kibana, might have xyz. +This section focuses on deploying and managing the core components of the {{stack}}: {{es}} and {{kib}}. It also documents deploying and managing supporting orchestration technologies. However, depending on your use case, you might need to deploy other components. [Learn more](/get-started/the-stack.md). -% Use migrated content from existing pages that map to this page: +:::{tip} +To get started quickly, you can set up a [local development and testing environment](/solutions/search/run-elasticsearch-locally.md), or sign up for a [Serverless](https://cloud.elastic.co/serverless-registration) or [Hosted](https://cloud.elastic.co/registration) trial in {{ecloud}}. +::: -% - [ ] ./raw-migrated-files/kibana/kibana/introduction.md -% - [ ] ./raw-migrated-files/kibana/kibana/setup.md -% - [ ] ./raw-migrated-files/tech-content/starting-with-the-elasticsearch-platform-and-its-solutions/get-elastic.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/scalability.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-faq-technical.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/overview.md -% Notes: redirect only -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-administering-deployments.md -% Notes: redirect only -% - [ ] ./raw-migrated-files/kibana/kibana/management.md -% Notes: redirect only +## Design and deploy -$$$adding_index_privileges$$$ +Learn how to design and deploy a production-ready Elastic environment. 
-$$$faq-hw-architecture$$$ +* [](/deploy-manage/deploy.md): Understand your deployment options and choose the approach that best fits your needs. + + If you already know how you want to deploy, you can jump to the documentation for your preferred deployment method: + * [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md) + * [{{ech}}](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) + * [{{ece}}](/deploy-manage/deploy/cloud-enterprise.md) + * [{{eck}}](/deploy-manage/deploy/cloud-on-k8s.md) + * [Self-managed](/deploy-manage/deploy/self-managed.md) -$$$faq-master-nodes$$$ +* [](/deploy-manage/distributed-architecture.md): Learn about the architecture of {{es}} and {{kib}}, and how Elastic stores and retrieves data and executes tasks in clusters with multiple nodes. +* [](/deploy-manage/production-guidance.md): Review tips and guidance that you can use to design a production environment that matches your workloads, policies, and deployment needs. +* [](/deploy-manage/reference-architectures.md): Explore blueprints for deploying clusters tailored to different use cases. +* [](/deploy-manage/tools.md): Learn about the tools available to safeguard data, ensure continuous availability, and maintain resilience in your {{es}} environment. +* [](/deploy-manage/autoscaling.md): Learn how to configure your [orchestrated](/deploy-manage/deploy.md#about-orchestration) deployment to scale based on policies and cluster signals. Applies to {{ech}}, {{ece}}, and {{eck}} deployments. -$$$faq-ssl$$$ +:::{admonition} Serverless does it for you +If you deploy an {{serverless-full}} project, then you don't need to learn about Elastic architecture, production design, resilience, or scaling concepts. Serverless automatically scales and backs up your cluster for you, and is ready for production out of the box. +::: -$$$faq-autoscale$$$ +## Secure and control access -$$$faq-ip-sniffing$$$ +Learn how to secure your Elastic environment to restrict access to only authorized parties, and allow communication between your environment and external parties. -$$$faq-encryption-at-rest$$$ +* [](/deploy-manage/security.md): Learn about security features that prevent bad actors from tampering with your data, and encrypt communications to, from, and within your cluster. +* [](/deploy-manage/users-roles.md): Manage user authentication and authorization at the level of your Cloud organization, your orchestrator, or your deployment or cluster. +* [](/deploy-manage/manage-spaces.md): Learn how to organize content in {{kib}}, and restrict access to this content to specific users. +* [](/deploy-manage/api-keys.md): Authenticate and authorize programmatic access to your deployments and {{es}} resources. +* [](/deploy-manage/manage-connectors.md): Manage connection information between Elastic and third-party systems. +* [](/deploy-manage/remote-clusters.md): Enable communication between {{es}} clusters to support [cross-cluster replication](/deploy-manage/tools/cross-cluster-replication.md) and [cross-cluster search](/solutions/search/cross-cluster-search.md). -$$$faq-static-ip-elastic-cloud$$$ +## Administer and maintain -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +Monitor the performance of your Elastic environment, administer your organization and license, and maintain the health of your environment. 
-% Doesn't exist -% * [/raw-migrated-files/kibana/kibana/introduction.md](/raw-migrated-files/kibana/kibana/introduction.md) +* [](/deploy-manage/monitor.md): View health and performance data for Elastic components, and receive recommendations and insights. +* [](/deploy-manage/cloud-organization.md): Administer your {{ecloud}} organization, including billing, organizational contacts, and service monitoring. +* [](/deploy-manage/license.md): Learn how to manage your Elastic license or subscription. +* [](/deploy-manage/maintenance.md): Learn how to isolate or deactivate parts of your Elastic environment to perform maintenance, or restart parts of Elastic. +* [](/deploy-manage/uninstall.md): Uninstall one or more Elastic components. + +## Upgrade + +You can [upgrade your Elastic environment](/deploy-manage/upgrade.md) to gain access to the latest features. Learn how to upgrade your cluster or deployment to the latest {{stack}} version, or upgrade your {{ece}} orchestrator or {{eck}} operator to the latest version. -* [/raw-migrated-files/kibana/kibana/setup.md](/raw-migrated-files/kibana/kibana/setup.md) -* [/raw-migrated-files/tech-content/starting-with-the-elasticsearch-platform-and-its-solutions/get-elastic.md](/raw-migrated-files/tech-content/starting-with-the-elasticsearch-platform-and-its-solutions/get-elastic.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/scalability.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/scalability.md) -* [/raw-migrated-files/cloud/cloud/ec-faq-technical.md](/raw-migrated-files/cloud/cloud/ec-faq-technical.md) -* [/raw-migrated-files/stack-docs/elastic-stack/overview.md](/raw-migrated-files/stack-docs/elastic-stack/overview.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-administering-deployments.md](/raw-migrated-files/cloud/cloud-enterprise/ece-administering-deployments.md) -* [/raw-migrated-files/kibana/kibana/management.md](/raw-migrated-files/kibana/kibana/management.md) \ No newline at end of file diff --git a/deploy-manage/license.md b/deploy-manage/license.md index 5a31b963b..ec1bf6c66 100644 --- a/deploy-manage/license.md +++ b/deploy-manage/license.md @@ -8,9 +8,9 @@ applies_to: serverless: --- -# Manage your license +# Licenses and subscriptions -Your Elastic license determines which features are available and what level of support you receive. +Your Elastic license or subscription determines which features are available and what level of support you receive. Depending on your deployment type, licenses and subscriptions are applied at different levels: diff --git a/deploy-manage/manage-connectors.md b/deploy-manage/manage-connectors.md index bd8efe9a0..4162b1ea6 100644 --- a/deploy-manage/manage-connectors.md +++ b/deploy-manage/manage-connectors.md @@ -7,7 +7,7 @@ applies_to: serverless: --- -# Manage connectors [connector-management] +# Connectors [connector-management] Connectors serve as a central place to store connection information for both Elastic and third-party systems. They enable the linking of actions to rules, which execute as background tasks on the {{kib}} server when rule conditions are met. This allows rules to route actions to various destinations such as log files, ticketing systems, and messaging tools. Different {{kib}} apps may have their own rule types, but they typically share connectors. The **{{stack-manage-app}} > {{connectors-ui}}** provides a central location to view and manage all connectors in the current space. 
@@ -34,14 +34,14 @@ In **{{stack-manage-app}} > {{connectors-ui}}**, you can find a list of the conn :::{image} ../images/kibana-connector-filter-by-type.png :alt: Filtering the connector list by types of connectors -:class: screenshot +:screenshot: ::: You can delete individual connectors using the trash icon. Alternatively, select multiple connectors and delete them in bulk using the **Delete** button. :::{image} ../images/kibana-connector-delete.png :alt: Deleting connectors individually or in bulk -:class: screenshot +:screenshot: ::: ::::{note} @@ -59,7 +59,7 @@ Some connector types are paid commercial features, while others are free. For a :::{image} ../images/kibana-connector-select-type.png :alt: Connector select type -:class: screenshot +:screenshot: :width: 75% ::: @@ -81,7 +81,7 @@ If a connector is missing sensitive information after the import, a **Fix** butt :::{image} ../images/kibana-connectors-with-missing-secrets.png :alt: Connectors with missing secrets -:class: screenshot +:screenshot: ::: ## Monitoring connectors [monitoring-connectors] diff --git a/deploy-manage/manage-spaces.md b/deploy-manage/manage-spaces.md index a4a826b56..90c0599c9 100644 --- a/deploy-manage/manage-spaces.md +++ b/deploy-manage/manage-spaces.md @@ -7,26 +7,7 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/spaces.html --- -# Manage spaces [xpack-spaces] - -% What needs to be done: Refine - -% GitHub issue: https://github.com/elastic/docs-projects/issues/348 - -% Scope notes: Create a new landing page including the content that is relevant for both serverless and stateful Highlight the differences in subheadings for serverless and stateful Link to solution topics on spaces - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/kibana/kibana/xpack-spaces.md -% - [ ] ./raw-migrated-files/docs-content/serverless/spaces.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$spaces-control-feature-visibility$$$ - -$$$spaces-control-user-access$$$ - -$$$spaces-managing$$$ +# Spaces [xpack-spaces] **Spaces** let you organize your content and users according to your needs. @@ -38,7 +19,7 @@ $$$spaces-managing$$$ :::{image} ../images/kibana-change-space.png :alt: Change current space menu -:class: screenshot +:screenshot: ::: To go to **Spaces**, find **Stack Management** in the navigation menu or use the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md). @@ -55,14 +36,14 @@ To go to **Spaces**, find **Stack Management** in the navigation menu or use the The maximum number of spaces that you can have differs by deployment type: * **Serverless projects:** Maximum of 100 spaces. -* **{{stack}} deployments:** Controlled by the `xpack.spaces.maxSpaces` setting. Default is 1000. View the full list of Space settings in [this document](kibana://reference/configuration-reference/spaces-settings.md). +* **{{stack}} deployments:** Controlled by the `xpack.spaces.maxSpaces` setting. Default is 1000. View the [full list of Space settings](kibana://reference/configuration-reference/spaces-settings.md). To create a space: -::::{tab-set} +:::::{tab-set} :group: stack-serverless -:::{tab-item} {{serverless-short}} +::::{tab-item} {{serverless-short}} :sync: serverless 1. Click **Create space** or select the space you want to edit. @@ -73,9 +54,9 @@ To create a space: 3. Customize the avatar of the space to your liking. 4. Save the space. 
-::: +:::: -:::{tab-item} {{stack}} +::::{tab-item} {{stack}} :sync: stack 1. Select **Create space** and provide a name, description, and URL identifier. @@ -89,15 +70,16 @@ To create a space: 3. If you selected the **Classic** solution view, you can customize the **Feature visibility** as you need it to be for that space. - % This is hacking since proper admonition blocks are currently breaking my tabs - > **Note:** Even when disabled in this menu, some Management features can remain visible to some users depending on their privileges. Additionally, controlling feature visibility is not a security feature. To secure access to specific features on a per-user basis, you must configure [{{kib}} Security](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). + :::{note} + Even when disabled in this menu, some Management features can remain visible to some users depending on their privileges. Additionally, controlling feature visibility is not a security feature. To secure access to specific features on a per-user basis, you must configure [{{kib}} Security](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). + ::: 4. Customize the avatar of the space to your liking. 5. Save your new space by selecting **Create space**. -::: - :::: +::::: + You can edit all of the space settings you just defined at any time, except for the URL identifier. Elastic also allows you to manage spaces using APIs: @@ -151,5 +133,5 @@ To configure the landing page, use the default route setting in [Stack Managemen :::{image} ../images/kibana-spaces-configure-landing-page.png :alt: Configure space-level landing page -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/deploy-manage/monitor/monitoring-data/beats-page.md b/deploy-manage/monitor/monitoring-data/beats-page.md index 21e270079..4ef25cc27 100644 --- a/deploy-manage/monitor/monitoring-data/beats-page.md +++ b/deploy-manage/monitor/monitoring-data/beats-page.md @@ -19,7 +19,7 @@ If you are monitoring Beats, the **Stack Monitoring** page in {{kib}} contains a :::{image} ../../../images/kibana-monitoring-beats.png :alt: Monitoring Beats -:class: screenshot +:screenshot: ::: To view an overview of the Beats data in the cluster, click **Overview**. The overview page has a section for activity in the last day, which is a real-time sample of data. The summary bar and charts follow the typical paradigm of data in the Monitoring UI, which is bound to the span of the time filter. This overview page can therefore show up-to-date or historical information. @@ -28,7 +28,7 @@ To view a listing of the individual Beat instances in the cluster, click **Beats :::{image} ../../../images/kibana-monitoring-beats-detail.png :alt: Monitoring details for Filebeat -:class: screenshot +:screenshot: ::: The detail page contains a summary bar and charts. There are more charts on this page than the overview page and they are specific to a single Beat instance. 
diff --git a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md index 37b1e43a4..023aa2963 100644 --- a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md +++ b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md @@ -19,7 +19,7 @@ You can drill down into the status of your {{es}} cluster in {{kib}} by clicking :::{image} ../../../images/kibana-monitoring-elasticsearch.png :alt: Monitoring clusters -:class: screenshot +:screenshot: ::: For more information, refer to [Monitor a cluster](../../monitor.md). @@ -38,7 +38,7 @@ The panel at the top shows the current cluster statistics, the charts show the s :::{image} ../../../images/kibana-monitoring-overview.png :alt: Elasticsearch Cluster Overview -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/deploy-manage/monitor/monitoring-data/kibana-alerts.md b/deploy-manage/monitor/monitoring-data/kibana-alerts.md index 8be37e17c..5245ade6f 100644 --- a/deploy-manage/monitor/monitoring-data/kibana-alerts.md +++ b/deploy-manage/monitor/monitoring-data/kibana-alerts.md @@ -17,7 +17,7 @@ The {{stack}} {{monitor-features}} provide [Alerting rules](../../../explore-ana :::{image} ../../../images/kibana-monitoring-kibana-alerting-notification.png :alt: {{kib}} alerting notifications in {{stack-monitor-app}} -:class: screenshot +:screenshot: ::: When you open **{{stack-monitor-app}}** for the first time, you will be asked to acknowledge the creation of these default rules. They are initially configured to detect and notify on various conditions across your monitored clusters. You can view notifications for: **Cluster health**, **Resource utilization**, and **Errors and exceptions** for {{es}} in real time. diff --git a/deploy-manage/monitor/monitoring-data/kibana-page.md b/deploy-manage/monitor/monitoring-data/kibana-page.md index 5ec2076f3..6d4b19423 100644 --- a/deploy-manage/monitor/monitoring-data/kibana-page.md +++ b/deploy-manage/monitor/monitoring-data/kibana-page.md @@ -19,7 +19,7 @@ To view the key metrics that indicate the overall health of {{kib}} itself, clic :::{image} ../../../images/kibana-monitoring-kibana-overview.png :alt: Kibana Overview -:class: screenshot +:screenshot: ::: 1. To view {{kib}} instance metrics, click **Instances**. diff --git a/deploy-manage/monitor/monitoring-data/logstash-page.md b/deploy-manage/monitor/monitoring-data/logstash-page.md index 59f16201e..3b8e22046 100644 --- a/deploy-manage/monitor/monitoring-data/logstash-page.md +++ b/deploy-manage/monitor/monitoring-data/logstash-page.md @@ -19,7 +19,7 @@ If you are monitoring Logstash nodes, click **Overview** in the Logstash section :::{image} ../../../images/kibana-monitoring-logstash-overview.png :alt: Logstash Overview -:class: screenshot +:screenshot: ::: 1. To view Logstash node metrics, click **Nodes**. The Nodes section shows the status of each Logstash node. 
diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md index a0639141c..a07b9f570 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md @@ -73,7 +73,7 @@ You’ll see cluster alerts that require your attention and a summary of the ava :::{image} ../../../images/kibana-monitoring-dashboard.png :alt: Monitoring dashboard -:class: screenshot +:screenshot: ::: If you encounter problems, see [Troubleshooting monitoring](../monitoring-data/monitor-troubleshooting.md). diff --git a/deploy-manage/production-guidance.md b/deploy-manage/production-guidance.md index 322e3f7d9..c2b005180 100644 --- a/deploy-manage/production-guidance.md +++ b/deploy-manage/production-guidance.md @@ -1,6 +1,7 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-best-practices-data.html + - https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html --- # Production guidance [ec-best-practices-data] diff --git a/deploy-manage/remote-clusters/ec-migrate-ccs.md b/deploy-manage/remote-clusters/ec-migrate-ccs.md index 1a27120a6..ca9075aaf 100644 --- a/deploy-manage/remote-clusters/ec-migrate-ccs.md +++ b/deploy-manage/remote-clusters/ec-migrate-ccs.md @@ -27,7 +27,7 @@ You can use a PUT request to update your deployment, changing both the deploymen :::{image} ../../images/cloud-ec-migrate-deployment-template(2).png :alt: Deployment Template ID - :class: screenshot + :screenshot: ::: 2. Make a request to update your deployment with two changes: @@ -273,7 +273,7 @@ You can make this change in the user [{{ecloud}} Console](https://cloud.elastic. :::{image} ../../images/cloud-ec-create-from-snapshot-updated.png :alt: Create a Deployment using a snapshot - :class: screenshot + :screenshot: ::: diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-ece.md b/deploy-manage/remote-clusters/ec-remote-cluster-ece.md index 987653daf..f1cf44075 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-ece.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-ece.md @@ -209,7 +209,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md b/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md index 156f5ba2a..edbab1f49 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md @@ -147,7 +147,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. 
:::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md b/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md index 1c0098e23..109ed0550 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md @@ -85,7 +85,7 @@ By default, any deployment that you create trusts all other deployments in the s :::{image} ../../images/cloud-ec-account-trust-management.png :alt: Trust management at the account Level -:class: screenshot +:screenshot: ::: ::::{note} @@ -183,7 +183,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md index c4d6cba4a..57f9eaad2 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md @@ -235,7 +235,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ece-migrate-ccs.md b/deploy-manage/remote-clusters/ece-migrate-ccs.md index 96a95775e..6bc9a5de0 100644 --- a/deploy-manage/remote-clusters/ece-migrate-ccs.md +++ b/deploy-manage/remote-clusters/ece-migrate-ccs.md @@ -24,7 +24,7 @@ You can make this change in the user Cloud UI. The only drawback of this method :::{image} ../../images/cloud-enterprise-ce-create-from-snapshot-updated.png :alt: Create a Deployment using a snapshot - :class: screenshot + :screenshot: ::: 4. Finally, [configure the remote clusters](/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md). diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md index 8cc3ef63f..e036dd2e2 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md @@ -154,7 +154,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md b/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md index bcaf2db56..6e1c1535c 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md @@ -226,7 +226,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. 
:::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md b/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md index 3ecb9d2f2..1a436be28 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md @@ -84,7 +84,7 @@ By default, any deployment that you or your users create trusts all other deploy :::{image} ../../images/cloud-enterprise-ce-environment-trust-management.png :alt: Trust management at the environment Level -:class: screenshot +:screenshot: ::: ::::{note} @@ -182,7 +182,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md b/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md index 0a5299b3e..71e88888e 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md @@ -233,7 +233,7 @@ On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png :alt: Remote Cluster Parameters in Deployment - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/deploy-manage/security.md b/deploy-manage/security.md index 589190a66..b085b550c 100644 --- a/deploy-manage/security.md +++ b/deploy-manage/security.md @@ -9,10 +9,21 @@ mapped_urls: - https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-limitations.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/es-security-principles.html + - https://www.elastic.co/guide/en/cloud/current/ec-faq-technical.html --- # Security +% SR: include this info somewhere in this section +% {{ech}} doesn't support custom SSL certificates, which means that a custom CNAME for an {{ech}} endpoint such as *mycluster.mycompanyname.com* also is not supported. +% +% In {{ech}}, IP sniffing is not supported by design and will not return the expected results. We prevent IP sniffing from returning the expected results to improve the security of our underlying {{ech}} infrastructure. +% +% encryption at rest (EAR) is enabled in {{ech}} by default. We support EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. +% You can also bring your own key (BYOK) to encrypt your Elastic Cloud deployment data and snapshots. For more information, check [Encrypt your deployment with a customer-managed encryption key](../../../deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md). + +Note that the encryption happens at the file system level. 
+ % What needs to be done: Refine % GitHub issue: https://github.com/elastic/docs-projects/issues/346 @@ -31,6 +42,7 @@ mapped_urls: % - [ ] ./raw-migrated-files/kibana/kibana/using-kibana-with-security.md % - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md % - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md +% - [ ] ./raw-migrated-files/cloud/cloud/ec-faq-technical.md $$$field-document-limitations$$$ @@ -52,4 +64,5 @@ $$$maintaining-audit-trail$$$ * [/raw-migrated-files/cloud/cloud-heroku/ech-security.md](/raw-migrated-files/cloud/cloud-heroku/ech-security.md) * [/raw-migrated-files/kibana/kibana/using-kibana-with-security.md](/raw-migrated-files/kibana/kibana/using-kibana-with-security.md) * [/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/security-limitations.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md) \ No newline at end of file +* [/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/es-security-principles.md) +* [/raw-migrated-files/cloud/cloud/ec-faq-technical.md](/raw-migrated-files/cloud/cloud/ec-faq-technical.md) \ No newline at end of file diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index 809b55934..f8cb841b5 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -1,27 +1,6 @@ project: 'Deploy and manage' toc: - file: index.md - - file: distributed-architecture.md - children: - - file: distributed-architecture/clusters-nodes-shards.md - children: - - file: distributed-architecture/clusters-nodes-shards/node-roles.md - - file: distributed-architecture/reading-and-writing-documents.md - - file: distributed-architecture/shard-allocation-relocation-recovery.md - children: - - file: distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md - - file: distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md - children: - - file: distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md - - file: distributed-architecture/discovery-cluster-formation.md - children: - - file: distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md - - file: distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md - - file: distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md - - file: distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md - - file: distributed-architecture/discovery-cluster-formation/cluster-state-overview.md - - file: distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md - - file: distributed-architecture/kibana-tasks-management.md - file: deploy.md children: - file: deploy/deployment-comparison.md @@ -364,6 +343,27 @@ toc: - file: deploy/self-managed/air-gapped-install.md - file: deploy/self-managed/tools-apis.md - file: deploy/kibana-reporting-configuration.md + - file: distributed-architecture.md + children: + - file: distributed-architecture/clusters-nodes-shards.md + children: + - file: distributed-architecture/clusters-nodes-shards/node-roles.md + - file: distributed-architecture/reading-and-writing-documents.md + - 
file: distributed-architecture/shard-allocation-relocation-recovery.md + children: + - file: distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md + - file: distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md + children: + - file: distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md + - file: distributed-architecture/discovery-cluster-formation.md + children: + - file: distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md + - file: distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md + - file: distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md + - file: distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md + - file: distributed-architecture/discovery-cluster-formation/cluster-state-overview.md + - file: distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md + - file: distributed-architecture/kibana-tasks-management.md - file: production-guidance.md children: - file: production-guidance/getting-ready-for-production-elasticsearch.md @@ -469,48 +469,10 @@ toc: - file: tools/cross-cluster-replication/_perform_update_or_delete_by_query.md - file: autoscaling.md children: - - file: autoscaling/ech-autoscaling.md - children: - - file: autoscaling/ech-autoscaling-example.md - - file: autoscaling/ec-autoscaling.md - children: - - file: autoscaling/ec-autoscaling-example.md - - file: autoscaling/ec-autoscaling-api-example.md - - file: autoscaling/ece-autoscaling.md - children: - - file: autoscaling/ece-autoscaling-example.md - - file: autoscaling/ece-autoscaling-api-example.md - - file: autoscaling/autoscaling-stateless-applications-on-eck.md - - file: autoscaling/deployments-autoscaling-on-eck.md + - file: autoscaling/autoscaling-in-ece-and-ech.md + - file: autoscaling/autoscaling-in-eck.md - file: autoscaling/autoscaling-deciders.md - file: autoscaling/trained-model-autoscaling.md - - file: remote-clusters.md - children: - - file: remote-clusters/ec-enable-ccs.md - children: - - file: remote-clusters/ec-remote-cluster-same-ess.md - - file: remote-clusters/ec-remote-cluster-other-ess.md - - file: remote-clusters/ec-remote-cluster-ece.md - - file: remote-clusters/ec-remote-cluster-self-managed.md - - file: remote-clusters/ec-enable-ccs-for-eck.md - - file: remote-clusters/ec-edit-remove-trusted-environment.md - - file: remote-clusters/ec-migrate-ccs.md - - file: remote-clusters/ece-enable-ccs.md - children: - - file: remote-clusters/ece-remote-cluster-same-ece.md - - file: remote-clusters/ece-remote-cluster-other-ece.md - - file: remote-clusters/ece-remote-cluster-ece-ess.md - - file: remote-clusters/ece-remote-cluster-self-managed.md - - file: remote-clusters/ece-enable-ccs-for-eck.md - - file: remote-clusters/ece-edit-remove-trusted-environment.md - - file: remote-clusters/ece-migrate-ccs.md - - file: remote-clusters/remote-clusters-self-managed.md - children: - - file: remote-clusters/remote-clusters-api-key.md - - file: remote-clusters/remote-clusters-cert.md - - file: remote-clusters/remote-clusters-migrate.md - - file: remote-clusters/remote-clusters-settings.md - - file: remote-clusters/eck-remote-clusters.md - file: security.md children: - file: security/secure-your-elastic-cloud-enterprise-installation.md @@ -655,6 +617,33 @@ toc: - file: api-keys/elastic-cloud-api-keys.md - file: 
api-keys/elastic-cloud-enterprise-api-keys.md - file: manage-connectors.md + - file: remote-clusters.md + children: + - file: remote-clusters/ec-enable-ccs.md + children: + - file: remote-clusters/ec-remote-cluster-same-ess.md + - file: remote-clusters/ec-remote-cluster-other-ess.md + - file: remote-clusters/ec-remote-cluster-ece.md + - file: remote-clusters/ec-remote-cluster-self-managed.md + - file: remote-clusters/ec-enable-ccs-for-eck.md + - file: remote-clusters/ec-edit-remove-trusted-environment.md + - file: remote-clusters/ec-migrate-ccs.md + - file: remote-clusters/ece-enable-ccs.md + children: + - file: remote-clusters/ece-remote-cluster-same-ece.md + - file: remote-clusters/ece-remote-cluster-other-ece.md + - file: remote-clusters/ece-remote-cluster-ece-ess.md + - file: remote-clusters/ece-remote-cluster-self-managed.md + - file: remote-clusters/ece-enable-ccs-for-eck.md + - file: remote-clusters/ece-edit-remove-trusted-environment.md + - file: remote-clusters/ece-migrate-ccs.md + - file: remote-clusters/remote-clusters-self-managed.md + children: + - file: remote-clusters/remote-clusters-api-key.md + - file: remote-clusters/remote-clusters-cert.md + - file: remote-clusters/remote-clusters-migrate.md + - file: remote-clusters/remote-clusters-settings.md + - file: remote-clusters/eck-remote-clusters.md - file: monitor.md children: - file: monitor/autoops.md diff --git a/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-auto-follow.md b/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-auto-follow.md index 4a754644e..b378edf88 100644 --- a/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-auto-follow.md +++ b/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-auto-follow.md @@ -28,7 +28,7 @@ As new indices matching these patterns are created on the remote, {{es}} automat :::{image} ../../../images/elasticsearch-reference-auto-follow-patterns.png :alt: The Auto-follow patterns page in {{kib}} -:class: screenshot +:screenshot: ::: ::::{dropdown} API example diff --git a/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-follower-index.md b/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-follower-index.md index bc5959ee0..95ee05393 100644 --- a/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-follower-index.md +++ b/deploy-manage/tools/cross-cluster-replication/ccr-getting-started-follower-index.md @@ -27,7 +27,7 @@ When you index documents into your leader index, {{es}} replicates the documents :::{image} ../../../images/elasticsearch-reference-ccr-follower-index.png :alt: The Cross-Cluster Replication page in {{kib}} -:class: screenshot +:screenshot: ::: ::::{dropdown} API example diff --git a/deploy-manage/users-roles.md b/deploy-manage/users-roles.md index fb6bb93c3..8059b7bd7 100644 --- a/deploy-manage/users-roles.md +++ b/deploy-manage/users-roles.md @@ -11,7 +11,7 @@ applies_to: serverless: all --- -# Manage users and roles +# Users and roles To prevent unauthorized access to your Elastic resources, you need a way to identify users and validate that a user is who they claim to be (*authentication*), and control what data users can access and what tasks they can perform (*authorization*). 
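The users-and-roles introduction above separates authentication from authorization. On self-managed clusters, authentication is resolved against a realm chain configured in `elasticsearch.yml`; the sketch below is a minimal example that assumes only the built-in native and file realm types.

```yaml
# elasticsearch.yml, hypothetical realm chain
xpack.security.authc.realms:
  native:
    native1:
      order: 0        # native users (managed in Kibana or via the user API) are tried first
  file:
    file1:
      order: 1        # file realm as a fallback, useful for break-glass admin accounts
```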
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md b/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md index 9b438c621..31c0e8447 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md @@ -39,6 +39,6 @@ When you authenticate using `basic.basic1`, you’ll see the following agreement :::{image} ../../../images/kibana-access-agreement.png :alt: Access Agreement UI -:class: screenshot +:screenshot: ::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md index 2247f7683..bca152fb3 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md @@ -68,7 +68,7 @@ xpack.security.authc.providers: :::{image} ../../../images/kibana-kibana-login.png :alt: Login Selector UI -:class: screenshot +:screenshot: ::: For more information, refer to [authentication security settings](kibana://reference/configuration-reference/security-settings.md#authentication-security-settings). diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md index 33fb4f667..71a4bdf49 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md @@ -33,7 +33,7 @@ From the role management screen: :::{image} ../../../images/kibana-assign-base-privilege.png :alt: Assign base privilege -:class: screenshot +:screenshot: ::: Using the [role APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-roles): @@ -78,7 +78,7 @@ From the role management screen: :::{image} ../../../images/kibana-assign-subfeature-privilege.png :alt: Assign feature privilege -:class: screenshot +:screenshot: ::: Using the [role APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-roles): diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-role-management.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-role-management.md index 411ea7a22..43e6e6255 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-role-management.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-role-management.md @@ -45,7 +45,7 @@ Document-level and field-level security affords you even more granularity when i
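The access agreement and login selector pages above both build on the `xpack.security.authc.providers` setting that appears in the kibana-authentication hunk. A combined sketch, with a placeholder agreement message and an assumed SAML realm named `saml1`, could look like this:

```yaml
# kibana.yml, hypothetical login selector with an access agreement for basic.basic1
xpack.security.authc.providers:
  basic.basic1:
    order: 0
    accessAgreement:
      message: "Authorized use only. Activity may be monitored."   # placeholder agreement text
  saml.saml1:
    order: 1
    realm: saml1      # assumes a SAML realm named saml1 is configured on the Elasticsearch side
```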
:::{image} ../../../images/kibana-create-role-index-example.png :alt: Create role with index privileges -:class: screenshot +:screenshot: ::: ### Example: Grant read access to specific documents in indices that match the `filebeat-*` pattern [index_privilege_dls_example] @@ -76,7 +76,7 @@ Document-level and field-level security affords you even more granularity when i
:::{image} ../../../images/kibana-create-role-dls-example.png :alt: Create role with DLS index privileges -:class: screenshot +:screenshot: ::: @@ -85,7 +85,7 @@ Document-level and field-level security affords you even more granularity when i If you have at least a platinum license, you can manage access to indices in remote clusters. -You can assign the same privileges, document-level, and field-level as for [local index privileges](/deploy-manage/index.md#adding_index_privileges). +You can assign the same privileges, including document-level and field-level security, as for [local index privileges](#adding_index_privileges). ### Example: Grant access to indices in remote clusters [remote_index_privilege_example_1] @@ -99,7 +99,7 @@ You can assign the same privileges, document-level, and field-level as for [loca
:::{image} ../../../images/kibana-create-role-remote-index-example.png :alt: Create role with remote index privileges -:class: screenshot +:screenshot: ::: ## {{kib}} privileges [adding_kibana_privileges] @@ -109,7 +109,7 @@ To assign {{kib}} privileges to the role, click **Add {{kib}} privilege** in the
:::{image} ../../../images/kibana-spaces-roles.png :alt: Add {{kib}} privileges -:class: screenshot +:screenshot: :width: 650px ::: @@ -129,7 +129,7 @@ To apply your changes, click **Add {{kib}} privilege**. The privilege shows up u
:::{image} ../../../images/kibana-create-space-privilege.png :alt: Add {{kib}} privilege -:class: screenshot +:screenshot: ::: @@ -167,7 +167,7 @@ To view a summary of the privileges granted, click **View privilege summary**.
:::{image} ../../../images/kibana-privilege-example-1.png :alt: Privilege example 1 -:class: screenshot +:screenshot: :width: 650px ::: @@ -185,7 +185,7 @@ To view a summary of the privileges granted, click **View privilege summary**.
:::{image} ../../../images/kibana-privilege-example-2.png :alt: Privilege example 2 -:class: screenshot +:screenshot: ::: @@ -202,5 +202,5 @@ To view a summary of the privileges granted, click **View privilege summary**.
:::{image} ../../../images/kibana-privilege-example-3.png :alt: Privilege example 3 -:class: screenshot +:screenshot: ::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md index 0f73cdfbe..89b770e40 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md @@ -75,7 +75,7 @@ Elastic enables you to easily manage users in {{kib}} on the **Stack Management :::{image} ../../../images/kibana-tutorial-secure-access-example-1-user.png :alt: Create user UI -:class: screenshot +:screenshot: ::: ## Manage native users using the `user` API [native-users-api] diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md index 1f62145b6..0a5e93c6d 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md @@ -69,7 +69,7 @@ Create a **Marketing** space for your marketing analysts to use. :::{image} ../../../images/kibana-tutorial-secure-access-example-1-space.png :alt: Create space UI - :class: screenshot + :screenshot: ::: @@ -107,7 +107,7 @@ To create the role: :::{image} ../../../images/kibana-tutorial-secure-access-example-1-role.png :alt: Create role UI - :class: screenshot + :screenshot: ::: @@ -124,7 +124,7 @@ Now that you created a role, create a user account. :::{image} ../../../images/kibana-tutorial-secure-access-example-1-user.png :alt: Create user UI -:class: screenshot +:screenshot: ::: @@ -139,7 +139,7 @@ Verify that the user and role are working correctly. :::{image} ../../../images/kibana-tutorial-secure-access-example-1-test.png :alt: Verifying access to dashboards - :class: screenshot + :screenshot: ::: diff --git a/deploy-manage/users-roles/serverless-custom-roles.md b/deploy-manage/users-roles/serverless-custom-roles.md index 94382592c..fb087f75c 100644 --- a/deploy-manage/users-roles/serverless-custom-roles.md +++ b/deploy-manage/users-roles/serverless-custom-roles.md @@ -39,7 +39,7 @@ Cluster privileges grant access to monitoring and management features in {{es}}. :::{image} ../../images/serverless-custom-roles-cluster-privileges.png :alt: Create a custom role and define {{es}} cluster privileges -:class: screenshot +:screenshot: ::: Refer to [cluster privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md#privileges-list-cluster) for a complete description of available options. @@ -51,7 +51,7 @@ Each role can grant access to multiple data indices, and each index can have a d :::{image} ../../images/serverless-custom-roles-index-privileges.png :alt: Create a custom role and define {{es}} index privileges -:class: screenshot +:screenshot: ::: Refer to [index privileges](elasticsearch://reference/elasticsearch/security-privileges.md#privileges-list-indices) for a complete description of available options. @@ -83,7 +83,7 @@ When you create a custom role, click **Add Kibana privilege** to grant access to :::{image} ../../images/serverless-custom-roles-kibana-privileges.png :alt: Create a custom role and define {{kib}} privileges -:class: screenshot +:screenshot: ::: Open the **Spaces** selection control to specify whether to grant the role access to all spaces or one or more individual spaces. 
When using the **Customize by feature** option, you can choose either **All**, **Read** or **None** for access to each feature. diff --git a/docset.yml b/docset.yml index 3eca9c92c..6d80ca5ac 100644 --- a/docset.yml +++ b/docset.yml @@ -2,7 +2,6 @@ project: 'Elastic documentation' features: primary-nav: true - landing-page: true exclude: - 'README.md' @@ -61,6 +60,7 @@ toc: - toc: reference - toc: extend - toc: raw-migrated-files + - hidden: 404.md subs: ref: "https://www.elastic.co/guide/en/elasticsearch/reference/current" diff --git a/explore-analyze/alerts-cases/alerts.md b/explore-analyze/alerts-cases/alerts.md index de30889f7..dee005996 100644 --- a/explore-analyze/alerts-cases/alerts.md +++ b/explore-analyze/alerts-cases/alerts.md @@ -30,7 +30,7 @@ Each project type supports a specific set of rule types. Each *rule type* provid :::{image} ../../images/serverless-es-query-rule-conditions.png :alt: UI for defining rule conditions in an {{es}} query rule -:class: screenshot +:screenshot: ::: ### Schedule [rules-schedule] @@ -58,14 +58,14 @@ After you select a connector, set the *action frequency*. If you want to reduce :::{image} ../../images/serverless-es-query-rule-action-summary.png :alt: UI for defining rule conditions in an {{es}} query rule -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that the action runs for each alert. If the rule type does not support alert summaries, this is your only available option. You must choose when the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). You must also choose an action group, which affects whether the action runs. Each rule type has a specific set of valid action groups. For example, you can set *Run when* to `Query matched` or `Recovered` for the {{es}} query rule: :::{image} ../../images/serverless-es-query-rule-recovery-action.png :alt: UI for defining a recovery action -:class: screenshot +:screenshot: ::: Each connector supports a specific set of actions for each action group and enables different action properties. For example, you can have actions that create an {{opsgenie}} alert when rule conditions are met and recovery actions that close the {{opsgenie}} alert. @@ -95,7 +95,7 @@ You can pass rule values to an action at the time a condition is detected. To vi :::{image} ../../images/serverless-es-query-rule-action-variables.png :alt: Passing rule values to an action -:class: screenshot +:screenshot: ::: For more information about common action variables, refer to [Rule actions variables](../../explore-analyze/alerts-cases/alerts/rule-action-variables.md) @@ -112,7 +112,7 @@ A rule consists of conditions, actions, and a schedule. When conditions are met, :::{image} ../../images/serverless-rule-concepts-summary.svg :alt: Rules -:class: screenshot +:screenshot: ::: 1. Any time a rule’s conditions are met, an alert is created. This example checks for servers with average CPU > 0.9. Three servers meet the condition, so three alerts are created. 
diff --git a/explore-analyze/alerts-cases/alerts/alerting-common-issues.md b/explore-analyze/alerts-cases/alerts/alerting-common-issues.md index 07df0771e..699f940c9 100644 --- a/explore-analyze/alerts-cases/alerts/alerting-common-issues.md +++ b/explore-analyze/alerts-cases/alerts/alerting-common-issues.md @@ -73,7 +73,7 @@ and in the [details page](create-manage-rules.md#rule-details): :::{image} ../../../images/kibana-rule-details-timeout-error.png :alt: Rule details page with timeout error -:class: screenshot +:screenshot: ::: If you want your rules to run longer, update the `xpack.alerting.rules.run.timeout` configuration in your [Alerting settings](kibana://reference/configuration-reference/alerting-settings.md#alert-settings). You can also target a specific rule type by using `xpack.alerting.rules.run.ruleTypeOverrides`. diff --git a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md index e258af277..959b10cc7 100644 --- a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md +++ b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md @@ -33,7 +33,7 @@ The following debugging tools are available: :::{image} ../../../images/kibana-rule-details-alerts-inactive.png :alt: Alerting management details -:class: screenshot +:screenshot: ::: ## Preview the index threshold rule chart [alerting-index-threshold-chart] @@ -42,7 +42,7 @@ When creating or editing an index threshold rule, you see a graph of the data th :::{image} ../../../images/kibana-index-threshold-chart.png :alt: Index Threshold chart -:class: screenshot +:screenshot: ::: The end date is related to the check interval for the rule. You can use this view to see if the rule is getting the data you expect, and visually compare to the threshold value (a horizontal line in the graph). If the graph does not contain any lines except for the threshold line, then the rule has an issue, for example, no data is available given the specified index and fields or there is a permission error. Diagnosing these may be difficult - but there may be log messages for error conditions. @@ -79,7 +79,7 @@ The **{{stack-manage-app}}** > **{{rules-ui}}** page contains an error banner th :::{image} ../../../images/kibana-rules-management-health.png :alt: Rule management page with the errors banner -:class: screenshot +:screenshot: ::: ## Task Manager diagnostics [task-manager-diagnostics] diff --git a/explore-analyze/alerts-cases/alerts/create-manage-rules.md b/explore-analyze/alerts-cases/alerts/create-manage-rules.md index 61880ab49..064741f55 100644 --- a/explore-analyze/alerts-cases/alerts/create-manage-rules.md +++ b/explore-analyze/alerts-cases/alerts/create-manage-rules.md @@ -45,7 +45,7 @@ Each rule type provides its own way of defining the conditions to detect, but an :::{image} ../../../images/kibana-rule-types-es-query-conditions.png :alt: UI for defining rule conditions in an {{es}} query rule -:class: screenshot +:screenshot: ::: All rules must have a check interval, which defines how often to evaluate the rule conditions. Checks are queued; they run as close to the defined value as capacity allows. 
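The troubleshooting page above points at `xpack.alerting.rules.run.timeout` and `xpack.alerting.rules.run.ruleTypeOverrides` for rules that exceed their run time, and the check-interval discussion pairs naturally with a minimum schedule interval. A hedged `kibana.yml` sketch, with placeholder values and an assumed `minimumScheduleInterval` setting, might look like this:

```yaml
# kibana.yml, hypothetical alerting run limits
xpack.alerting.rules.run:
  timeout: "5m"                     # default maximum run time for all rule types
  ruleTypeOverrides:
    - id: ".index-threshold"        # assumed rule type ID, for illustration only
      timeout: "10m"                # give this rule type longer to complete
xpack.alerting.rules.minimumScheduleInterval:
  value: "1m"                       # assumed setting: smallest check interval users may choose
  enforce: true
```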
@@ -70,14 +70,14 @@ For example, if you create an {{es}} query rule, you can send notifications that :::{image} ../../../images/kibana-es-query-rule-action-summary.png :alt: UI for defining alert summary action in an {{es}} query rule -:class: screenshot +:screenshot: ::: When you choose to run actions for each alert, you must specify an action group. Each rule type has a set of valid action groups, which affect when an action runs. For example, you can set **Run when** to `Query matched` or `Recovered` for the {{es}} query rule: :::{image} ../../../images/kibana-es-query-rule-recovery-action.png :alt: UI for defining a recovery action -:class: screenshot +:screenshot: ::: Connectors have unique behavior for each action group. For example, you can have actions that create an {{opsgenie}} alert when rule conditions are met and recovery actions that close the {{opsgenie}} alert. For more information about connectors, refer to [*Connectors*](../../../deploy-manage/manage-connectors.md). @@ -107,7 +107,7 @@ You can pass rule values to an action at the time a condition is detected. To vi :::{image} ../../../images/kibana-es-query-rule-action-variables.png :alt: Passing rule values to an action -:class: screenshot +:screenshot: ::: For more information about common action variables, refer to [*Rule action variables*](rule-action-variables.md). @@ -145,7 +145,7 @@ Click the rule name to access a rule details page: :::{image} ../../../images/kibana-rule-details-alerts-active.png :alt: Rule details page with multiple alerts -:class: screenshot +:screenshot: ::: In this example, the rule detects when a site serves more than a threshold number of bytes in a 24 hour period. Four sites are above the threshold. These are called alerts - occurrences of the condition being detected - and the alert name, status, time of detection, and duration of the condition are shown in this view. Alerts come and go from the list depending on whether the rule conditions are met. For more information about alerts, go to [*View alerts*](view-alerts.md). @@ -154,7 +154,7 @@ If there are rule actions that failed to run successfully, you can see the detai :::{image} ../../../images/kibana-rule-details-errored-actions.png :alt: Rule histor page with alerts that have errored actions -:class: screenshot +:screenshot: ::: ## Import and export rules [importing-and-exporting-rules] @@ -174,5 +174,5 @@ Rules are disabled on export. You are prompted to re-enable the rule on successf :::{image} ../../../images/kibana-rules-imported-banner.png :alt: Rules import banner -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/alerts-cases/alerts/geo-alerting.md b/explore-analyze/alerts-cases/alerts/geo-alerting.md index ae2b00cff..098105eaa 100644 --- a/explore-analyze/alerts-cases/alerts/geo-alerting.md +++ b/explore-analyze/alerts-cases/alerts/geo-alerting.md @@ -19,7 +19,7 @@ When you create a tracking containment rule, you must define the conditions that :::{image} ../../../images/kibana-alert-types-tracking-containment-conditions.png :alt: Creating a tracking containment rule in Kibana -:class: screenshot +:screenshot: ::: 1. Define the entities index, which must contain a `geo_point` or `geo_shape` field, `date` field, and entity identifier. An entity identifier is a `keyword`, `number`, or `ip` field that identifies the entity. Entity data is expected to be updating so that there are entity movements to alert upon. @@ -43,7 +43,7 @@ After you select a connector, you must set the action frequency. 
You can choose :::{image} ../../../images/kibana-alert-types-tracking-containment-action-options.png :alt: Action frequency options for an action -:class: screenshot +:screenshot: ::: You can further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame. @@ -54,7 +54,7 @@ You can pass rule values to an action to provide contextual details. To view the :::{image} ../../../images/kibana-alert-types-tracking-containment-rule-action-variables.png :alt: Passing rule values to an action -:class: screenshot +:screenshot: ::: The following action variables are specific to the tracking containment rule. You can also specify [variables common to all rules](rule-action-variables.md). diff --git a/explore-analyze/alerts-cases/alerts/maintenance-windows.md b/explore-analyze/alerts-cases/alerts/maintenance-windows.md index 1a6c11a11..0f1ffdfd8 100644 --- a/explore-analyze/alerts-cases/alerts/maintenance-windows.md +++ b/explore-analyze/alerts-cases/alerts/maintenance-windows.md @@ -42,7 +42,7 @@ When you create a maintenance window, you must provide a name and a schedule. Yo :::{image} ../../../images/kibana-create-maintenance-window.png :alt: The Create Maintenance Window user interface in {{kib}} -:class: screenshot +:screenshot: ::: By default, maintenance windows affect all categories of rules. The category-specific maintenance window options alter this behavior. For the definitive list of rule types in each category, refer to the [get rule types API](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-alerting). @@ -51,7 +51,7 @@ If you turn on **Filter alerts**, you can use KQL to filter the alerts affected :::{image} ../../../images/kibana-create-maintenance-window-filter.png :alt: The Create Maintenance Window user interface in {{kib}} with alert filters turned on -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/explore-analyze/alerts-cases/alerts/rule-type-es-query.md b/explore-analyze/alerts-cases/alerts/rule-type-es-query.md index 289532836..5d477c0d5 100644 --- a/explore-analyze/alerts-cases/alerts/rule-type-es-query.md +++ b/explore-analyze/alerts-cases/alerts/rule-type-es-query.md @@ -19,7 +19,7 @@ When you create an {{es}} query rule, your choice of query type affects the info :::{image} ../../../images/kibana-rule-types-es-query-conditions.png :alt: Define the condition to detect -:class: screenshot +:screenshot: ::: 1. Define your query @@ -72,14 +72,14 @@ If you use query DSL, KQL, or Lucene, the query runs against the selected indice :::{image} ../../../images/kibana-rule-types-es-query-valid.png :alt: Test {{es}} query returns number of matches when valid -:class: screenshot +:screenshot: ::: If you use an ES|QL query, a table is displayed. For example: :::{image} ../../../images/kibana-rule-types-esql-query-valid.png :alt: Test ES|QL query returns a table when valid -:class: screenshot +:screenshot: ::: If the query is not valid, an error occurs. @@ -98,14 +98,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/kibana-es-query-rule-action-summary.png :alt: UI for defining alert summary action in an {{es}} query rule -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that actions run for each alert. Choose how often the action runs (at each check interval, only when the alert status changes, or at a custom action interval). 
You must also choose an action group, which indicates whether the action runs when the query is matched or when the alert is recovered. Each connector supports a specific set of actions for each action group. For example: :::{image} ../../../images/kibana-es-query-rule-action-query-matched.png :alt: UI for defining a recovery action -:class: screenshot +:screenshot: ::: You can further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame. @@ -127,7 +127,7 @@ Rules use rule action variables and Mustache templates to pass contextual detail :::{image} ../../../images/kibana-es-query-rule-action-variables.png :alt: Passing rule values to an action -:class: screenshot +:screenshot: ::: The following variables are specific to the {{es}} query rule: diff --git a/explore-analyze/alerts-cases/alerts/rule-type-index-threshold.md b/explore-analyze/alerts-cases/alerts/rule-type-index-threshold.md index a1003fe44..ce51e4682 100644 --- a/explore-analyze/alerts-cases/alerts/rule-type-index-threshold.md +++ b/explore-analyze/alerts-cases/alerts/rule-type-index-threshold.md @@ -19,7 +19,7 @@ When you create an index threshold rule, you must define the conditions for the :::{image} ../../../images/kibana-rule-types-index-threshold-conditions.png :alt: Defining index threshold rule conditions in {{kib}} -:class: screenshot +:screenshot: ::: 1. Specify the indices to query and a time field that will be used for the time window. @@ -47,14 +47,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/kibana-rule-types-index-threshold-example-action-summary.png :alt: UI for defining alert summary action in an index threshold rule -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that actions run for each alert. Choose how often the action runs (at each check interval, only when the alert status changes, or at a custom action interval). You must also choose an action group, which indicates whether the action runs when the threshold is met or when the alert is recovered. Each connector supports a specific set of actions for each action group. For example: :::{image} ../../../images/kibana-rule-types-index-threshold-example-action.png :alt: UI for defining an action for each alert -:class: screenshot +:screenshot: ::: You can further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame. @@ -92,31 +92,31 @@ In this example, you will use the {{kib}} [sample weblog data set](/explore-anal 2. Select an index. Click **Index**, and set **Indices to query** to `kibana_sample_data_logs`. Set the **Time field** to `@timestamp`. :::{image} ../../../images/kibana-rule-types-index-threshold-example-index.png :alt: Choosing an index - :class: screenshot + :screenshot: ::: 3. To detect the number of bytes served during the time window, click **When** and select `sum` as the aggregation, and `bytes` as the field to aggregate. :::{image} ../../../images/kibana-rule-types-index-threshold-example-aggregation.png :alt: Choosing the aggregation - :class: screenshot + :screenshot: ::: 4. To detect the four sites that have the most traffic, click **Over** and select `top`, enter `4`, and select `host.keyword` as the field. 
:::{image} ../../../images/kibana-rule-types-index-threshold-example-grouping.png :alt: Choosing the groups - :class: screenshot + :screenshot: ::: 5. To trigger the rule when any of the top four sites exceeds 420,000 bytes over a 24 hour period, select `is above` and enter `420000`. Then click **For the last**, enter `24`, and select `hours`. :::{image} ../../../images/kibana-rule-types-index-threshold-example-threshold.png :alt: Setting the threshold - :class: screenshot + :screenshot: ::: 6. Schedule the rule to check every four hours. :::{image} ../../../images/kibana-rule-types-index-threshold-example-preview.png :alt: Setting the check interval - :class: screenshot + :screenshot: ::: The preview chart will render showing the 24 hour sum of bytes at 4 hours intervals for the past 120 hours (the last 30 intervals). @@ -127,7 +127,7 @@ In this example, you will use the {{kib}} [sample weblog data set](/explore-anal You can add one or more actions to your rule to generate notifications when its conditions are met and when they are no longer met. For each action, you must select a connector, set the action frequency, and compose the notification details. For example, add an action that uses a server log connector to write an entry to the Kibana server log: :::{image} ../../../images/kibana-rule-types-index-threshold-example-action.png :alt: Add an action to the rule - :class: screenshot + :screenshot: ::: The unique action variables that you can use in the notification are listed in [Add action variables](#action-variables-index-threshold). For more information, refer to [Actions](create-manage-rules.md#defining-rules-actions-details) and [*Connectors*](../../../deploy-manage/manage-connectors.md). @@ -137,7 +137,7 @@ In this example, you will use the {{kib}} [sample weblog data set](/explore-anal 3. Find the rule and view its details in **{{stack-manage-app}} > {{rules-ui}}**. For example, you can see the status of the rule and its alerts: :::{image} ../../../images/kibana-rule-types-index-threshold-example-alerts.png :alt: View the list of alerts for the rule - :class: screenshot + :screenshot: ::: 4. Delete or disable this example rule when it’s no longer useful. In the detailed rule view, select **Delete rule** from the actions menu. 
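To keep the tutorial's numbers in one place, the sketch below restates the example rule (sum of `bytes`, top 4 `host.keyword` values, above 420,000 bytes over 24 hours, checked every 4 hours) as hypothetical `.index-threshold` parameters, written as YAML for readability. The parameter names are assumptions based on the index threshold rule type and should be verified against the alerting create-rule API before any programmatic use.

```yaml
# Hypothetical summary of the tutorial rule; verify parameter names against the create rule API
rule_type_id: ".index-threshold"
schedule:
  interval: "4h"                   # check every four hours
params:
  index: ["kibana_sample_data_logs"]
  timeField: "@timestamp"
  aggType: "sum"                   # sum of bytes served
  aggField: "bytes"
  groupBy: "top"                   # group by the top hosts
  termField: "host.keyword"
  termSize: 4                      # the four sites with the most traffic
  thresholdComparator: ">"
  threshold: [420000]              # alert when a host serves more than 420,000 bytes
  timeWindowSize: 24
  timeWindowUnit: "h"              # over a 24 hour window
```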
diff --git a/explore-analyze/alerts-cases/alerts/testing-connectors.md b/explore-analyze/alerts-cases/alerts/testing-connectors.md index 21cb9d2d2..1c0d6af64 100644 --- a/explore-analyze/alerts-cases/alerts/testing-connectors.md +++ b/explore-analyze/alerts-cases/alerts/testing-connectors.md @@ -12,19 +12,19 @@ In **{{stack-manage-app}} > {{connectors-ui}}**, you can test a newly created co :::{image} ../../../images/kibana-connector-save-and-test.png :alt: Rule management page with the errors banner -:class: screenshot +:screenshot: ::: or by directly opening the proper connector edit flyout: :::{image} ../../../images/kibana-email-connector-test.png :alt: Rule management page with the errors banner -:class: screenshot +:screenshot: ::: :::{image} ../../../images/kibana-teams-connector-test.png :alt: Five clauses define the condition to detect -:class: screenshot +:screenshot: ::: ## [preview] Troubleshooting connectors with the `kbn-action` tool [_troubleshooting_connectors_with_the_kbn_action_tool] diff --git a/explore-analyze/alerts-cases/alerts/view-alerts.md b/explore-analyze/alerts-cases/alerts/view-alerts.md index 582e6ea9e..fe5c9559b 100644 --- a/explore-analyze/alerts-cases/alerts/view-alerts.md +++ b/explore-analyze/alerts-cases/alerts/view-alerts.md @@ -14,7 +14,7 @@ You can manage the alerts for each rule in **{{stack-manage-app}}** > **{{rules- :::{image} ../../../images/kibana-stack-management-alerts-page.png :alt: Alerts page with multiple alerts -:class: screenshot +:screenshot: ::: ::::{note} @@ -34,7 +34,7 @@ By default, the list contains all the alerts that you have authority to view in :::{image} ../../../images/kibana-stack-management-alerts-query-menu.png :alt: The Alerts page with the query menu open -:class: screenshot +:screenshot: ::: Alternatively, view those alerts in the [{{security-app}}](../../../solutions/security/detect-and-alert/manage-detection-alerts.md). diff --git a/explore-analyze/alerts-cases/cases.md b/explore-analyze/alerts-cases/cases.md index a8203b6aa..923ac018f 100644 --- a/explore-analyze/alerts-cases/cases.md +++ b/explore-analyze/alerts-cases/cases.md @@ -14,7 +14,7 @@ You can also optionally add custom fields and case templates. [preview] :::{image} ../../images/kibana-cases-list.png :alt: Cases page -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/explore-analyze/alerts-cases/cases/manage-cases-settings.md b/explore-analyze/alerts-cases/cases/manage-cases-settings.md index 259b39622..5310f103e 100644 --- a/explore-analyze/alerts-cases/cases/manage-cases-settings.md +++ b/explore-analyze/alerts-cases/cases/manage-cases-settings.md @@ -14,7 +14,7 @@ To perform these tasks, you must have [full access](setup-cases.md) to the appro :::{image} ../../../images/kibana-cases-settings.png :alt: View case settings -:class: screenshot +:screenshot: ::: ## Case closures [case-closures] @@ -58,7 +58,7 @@ To create a custom field: 1. In the **Custom fields** section, click **Add field**. :::{image} ../../../images/kibana-cases-custom-fields-add.png :alt: Add a custom field in case settings - :class: screenshot + :screenshot: ::: 2. You must provide a field label and type (text or toggle). You can optionally designate it as a required field and provide a default value. @@ -80,7 +80,7 @@ To create a template: 1. In the **Templates** section, click **Add template**. :::{image} ../../../images/kibana-cases-templates-add.png :alt: Add a template in case settings - :class: screenshot + :screenshot: ::: 2. 
You must provide a template name and case severity. You can optionally add template tags and a description, values for each case field, and a case connector. diff --git a/explore-analyze/alerts-cases/cases/manage-cases.md b/explore-analyze/alerts-cases/cases/manage-cases.md index 1d9b8573e..dda00f013 100644 --- a/explore-analyze/alerts-cases/cases/manage-cases.md +++ b/explore-analyze/alerts-cases/cases/manage-cases.md @@ -17,7 +17,7 @@ Open a new case to keep track of issues and share their details with colleagues. 1. Go to **Management > {{stack-manage-app}} > Cases**, then click **Create case**. :::{image} ../../../images/kibana-cases-create.png :alt: Create a case in {{stack-manage-app}} - :class: screenshot + :screenshot: ::: 2. If you defined [templates](manage-cases-settings.md#case-templates), you can optionally select one to use its default field values. [preview] @@ -78,7 +78,7 @@ After you create a case, you can upload and manage files on the **Files** tab: :::{image} ../../../images/kibana-cases-files.png :alt: A list of files attached to a case -:class: screenshot +:screenshot: ::: The acceptable file types and sizes are affected by your [case settings](../../../deploy-manage/deploy/self-managed/configure.md). @@ -98,7 +98,7 @@ You can also optionally add visualizations. For example, you can portray event a :::{image} ../../../images/kibana-cases-visualization.png :alt: Adding a visualization as a comment within a case -:class: screenshot +:screenshot: ::: To add a visualization to a comment within your case: diff --git a/explore-analyze/dashboards/create-dashboard-of-panels-with-ecommerce-data.md b/explore-analyze/dashboards/create-dashboard-of-panels-with-ecommerce-data.md index 4ba886823..a47eb340a 100644 --- a/explore-analyze/dashboards/create-dashboard-of-panels-with-ecommerce-data.md +++ b/explore-analyze/dashboards/create-dashboard-of-panels-with-ecommerce-data.md @@ -14,7 +14,7 @@ When you’re done, you’ll have a complete overview of the sample web logs dat :::{image} ../../images/kibana-lens_timeSeriesDataTutorialDashboard_8.3.png :alt: Final dashboard with eCommerce sample data -:class: screenshot +:screenshot: ::: @@ -49,7 +49,7 @@ To analyze the data with a custom time interval, create a bar chart that shows y :::{image} ../../images/kibana-lens_clickAndDragZoom_7.16.gif :alt: Cursor clicking and dragging across the bars to zoom in on the data - :class: screenshot + :screenshot: ::: 3. In the layer pane, click **Count of records**. @@ -84,7 +84,7 @@ To identify the 75th percentile of orders, add a reference line: :::{image} ../../images/kibana-lens_barChartCustomTimeInterval_8.3.png :alt: Orders per day - :class: screenshot + :screenshot: ::: 5. Click **Save and return**. @@ -112,7 +112,7 @@ To copy a function, you drag it to the **Add or drag-and-drop a field** area wit :::{image} ../../images/drag-and-drop-a-field-8.16.0.gif :alt: Easily duplicate the items with drag and drop - :class: screenshot + :screenshot: ::: 2. Click **95th [1]**, then enter `90` in the **Percentile** field. @@ -122,7 +122,7 @@ To copy a function, you drag it to the **Add or drag-and-drop a field** area wit :::{image} ../../images/kibana-lens_lineChartMultipleDataSeries_7.16.png :alt: Percentiles for product prices chart - :class: screenshot + :screenshot: ::: 6. Click **Save and return**. @@ -162,7 +162,7 @@ Add a layer to display the customer traffic: :::{image} ../../images/kibana-lens_mixedXYChart_7.16.png :alt: Layer visualization type menu - :class: screenshot + :screenshot: ::: 6. 
Click **Save and return**. @@ -202,7 +202,7 @@ For each order category, create a filter: :::{image} ../../images/kibana-lens_areaPercentageNumberOfOrdersByCategory_8.3.png :alt: Prices share by category - :class: screenshot + :screenshot: ::: 8. Click **Save and return**. @@ -238,7 +238,7 @@ Filter the results to display the data for only Saturday and Sunday: :::{image} ../../images/kibana-lens_areaChartCumulativeNumberOfSalesOnWeekend_7.16.png :alt: Area chart with cumulative sum of orders made on the weekend - :class: screenshot + :screenshot: :width: 50% ::: @@ -265,7 +265,7 @@ To create a week-over-week comparison, shift **Count of records [1]** by one wee :::{image} ../../images/kibana-lens_time_shift.png :alt: Line chart with week-over-week sales comparison - :class: screenshot + :screenshot: :width: 50% ::: @@ -290,7 +290,7 @@ To compare time range changes as a percent, create a bar chart that compares the :::{image} ../../images/kibana-lens_percent_chage.png :alt: Bar chart with percent change in sales between the current time and the previous week - :class: screenshot + :screenshot: :width: 50% ::: @@ -323,7 +323,7 @@ To split the metric, add columns for each continent using the **Columns** field: :::{image} ../../images/kibana-lens_table_over_time.png :alt: Date histogram table with groups for the customer count metric - :class: screenshot + :screenshot: :width: 50% ::: @@ -341,6 +341,6 @@ Now that you have a complete overview of your eCommerce sales data, save the das :::{image} ../../images/kibana-lens_timeSeriesDataTutorialDashboard_8.3.png :alt: Final dashboard with eCommerce sample data -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md b/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md index 2b8f87be1..c914831ce 100644 --- a/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md +++ b/explore-analyze/dashboards/create-dashboard-of-panels-with-web-server-data.md @@ -14,7 +14,7 @@ When you’re done, you’ll have a complete overview of the sample web logs dat :::{image} ../../images/kibana-lens_logsDashboard_8.4.0.png :alt: Logs dashboard -:class: screenshot +:screenshot: ::: @@ -37,7 +37,7 @@ Open the visualization editor, then make sure the correct fields appear. :::{image} ../../images/kibana-lens_dataViewDropDown_8.4.0.png :alt: Data view dropdown - :class: screenshot + :screenshot: ::: @@ -53,7 +53,7 @@ Click a field name to view more details, such as its top values and distribution :::{image} ../../images/tutorial-field-more-info.gif :alt: Clicking a field name to view more details -:class: screenshot +:screenshot: :width: 50% ::: @@ -68,14 +68,14 @@ The only number function that you can use with **clientip** is **Unique count**, :::{image} ../../images/kibana-visualization-type-dropdown-8.16.0.png :alt: Visualization type dropdown - :class: screenshot + :screenshot: ::: 2. From the **Available fields** list, drag **clientip** to the workspace or layer pane. :::{image} ../../images/kibana-tutorial-unique-count-of-client-ip-8.16.0.png :alt: Metric visualization of the clientip field - :class: screenshot + :screenshot: ::: In the layer pane, **Unique count of clientip** appears because the editor automatically applies the **Unique count** function to the **clientip** field. **Unique count** is the only numeric function that works with IP addresses. 
@@ -103,7 +103,7 @@ To visualize the **bytes** field over time: :::{image} ../../images/kibana-lens_end_to_end_3_1_1.gif :alt: Zoom in on the data - :class: screenshot + :screenshot: ::: @@ -127,7 +127,7 @@ To save space on the dashboard, hide the axis labels. :::{image} ../../images/kibana-line-chart-left-axis-8.16.0.png :alt: Left axis menu - :class: screenshot + :screenshot: :width: 50% ::: @@ -135,7 +135,7 @@ To save space on the dashboard, hide the axis labels. :::{image} ../../images/kibana-line-chart-bottom-axis-8.16.0.png :alt: Bottom axis menu - :class: screenshot + :screenshot: :width: 50% ::: @@ -148,7 +148,7 @@ Since you removed the axis labels, add a panel title: :::{image} ../../images/kibana-lens_lineChartMetricOverTime_8.4.0.png :alt: Line chart that displays metric data over time - :class: screenshot + :screenshot: :width: 50% ::: @@ -169,7 +169,7 @@ The **Top values** function ranks the unique values of a field by another functi :::{image} ../../images/kibana-tutorial-top-values-of-field-8.16.0.png :alt: Vertical bar chart with top values of request.keyword by most unique visitors - :class: screenshot + :screenshot: :width: 50% ::: @@ -182,7 +182,7 @@ The chart labels are unable to display because the **request.keyword** field con :::{image} ../../images/kibana-table-with-request-keyword-and-client-ip-8.16.0.png :alt: Table with top values of request.keyword by most unique visitors - :class: screenshot + :screenshot: :width: 50% ::: @@ -194,7 +194,7 @@ The chart labels are unable to display because the **request.keyword** field con :::{image} ../../images/kibana-lens_tableTopFieldValues_7.16.png :alt: Table that displays the top field values - :class: screenshot + :screenshot: :width: 50% ::: @@ -231,7 +231,7 @@ Specify the file size ranges: :::{image} ../../images/kibana-lens_end_to_end_6_1.png :alt: Custom ranges configuration - :class: screenshot + :screenshot: ::: 4. From the **Value format** dropdown, select **Bytes (1024)**, then click **Close**. @@ -242,7 +242,7 @@ To display the values as a percentage of the sum of all values, use the **Pie** :::{image} ../../images/kibana-lens_pieChartCompareSubsetOfDocs_7.16.png :alt: Pie chart that compares a subset of documents to all documents - :class: screenshot + :screenshot: :width: 50% ::: @@ -271,7 +271,7 @@ The distribution of a number can help you find patterns. 
For example, you can an :::{image} ../../images/kibana-lens_barChartDistributionOfNumberField_7.16.png :alt: Bar chart that displays the distribution of a number field - :class: screenshot + :screenshot: :width: 60% ::: @@ -319,7 +319,7 @@ Add the user geography grouping: :::{image} ../../images/kibana-lens_end_to_end_7_2.png :alt: Treemap visualization - :class: screenshot + :screenshot: :width: 50% ::: @@ -331,7 +331,7 @@ Remove the documents that do not match the filter criteria: :::{image} ../../images/kibana-lens_treemapMultiLevelChart_7.16.png :alt: Treemap visualization - :class: screenshot + :screenshot: :width: 50% ::: @@ -356,7 +356,7 @@ Decrease the size of the following panels, then move the panels to the first row :::{image} ../../images/kibana-lens_logsDashboard_8.4.0.png :alt: Logs dashboard - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/dashboards/drilldowns.md b/explore-analyze/dashboards/drilldowns.md index 6b3f28cc9..02269819e 100644 --- a/explore-analyze/dashboards/drilldowns.md +++ b/explore-analyze/dashboards/drilldowns.md @@ -72,7 +72,7 @@ Create a drilldown that opens the **Detailed logs** dashboard from the **[Logs] 4. In the data table panel, hover over a value, click **+**, then select `View details`. :::{image} ../../images/kibana-dashboard_drilldownOnPanel_8.3.png :alt: Drilldown on data table that navigates to another dashboard - :class: screenshot + :screenshot: ::: @@ -158,7 +158,7 @@ Create a drilldown that opens **Discover** from the [**Sample web logs**](../ind 7. On the **[Logs] Bytes distribution** bar vertical stacked chart, click a bar, then select **View bytes distribution in Discover**. :::{image} ../../images/kibana-dashboard_discoverDrilldown_8.3.png :alt: Drilldown on bar vertical stacked chart that navigates to Discover - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/dashboards/duplicate-dashboards.md b/explore-analyze/dashboards/duplicate-dashboards.md index 18b73d257..6c69af7b9 100644 --- a/explore-analyze/dashboards/duplicate-dashboards.md +++ b/explore-analyze/dashboards/duplicate-dashboards.md @@ -19,7 +19,7 @@ To duplicate a managed dashboard, follow the instructions above or click the **M :::{image} ../../images/kibana-managed-dashboard-popover-8.16.0.png :alt: Managed badge dialog with Duplicate button -:class: screenshot +:screenshot: :width: 50% ::: diff --git a/explore-analyze/dashboards/using.md b/explore-analyze/dashboards/using.md index a0ccae4eb..595b0e6b1 100644 --- a/explore-analyze/dashboards/using.md +++ b/explore-analyze/dashboards/using.md @@ -28,7 +28,7 @@ Use filter pills to focus in on the specific data you want. :::{image} ../../images/kibana-dashboard_filter_pills_8.15.0.png :alt: Filter pills -:class: screenshot +:screenshot: ::: @@ -112,7 +112,7 @@ Filter the data with one or more options that you select. :::{image} ../../images/kibana-dashboard_controlsOptionsList_8.7.0.png :alt: Options list control -:class: screenshot +:screenshot: ::: @@ -129,7 +129,7 @@ Filter the data within a specified range of values. :::{image} ../../images/kibana-dashboard_controlsRangeSlider_8.3.0.png :alt: Range slider control -:class: screenshot +:screenshot: ::: @@ -145,7 +145,7 @@ Filter the data within a specified range of time. 
:::{image} ../../images/dashboard_timeslidercontrol_8.17.0.gif :alt: Time slider control -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/discover.md b/explore-analyze/discover.md index ac531a6a9..45e0d58e4 100644 --- a/explore-analyze/discover.md +++ b/explore-analyze/discover.md @@ -15,6 +15,6 @@ With **Discover**, you can quickly search and filter your data, get information :::{image} ../images/kibana-hello-field.png :alt: A view of the Discover app -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/discover/discover-get-started.md b/explore-analyze/discover/discover-get-started.md index a469f8876..585a1e1c1 100644 --- a/explore-analyze/discover/discover-get-started.md +++ b/explore-analyze/discover/discover-get-started.md @@ -35,7 +35,7 @@ Select the data you want to explore, and then specify the time range in which to If you’re using sample data, data views are automatically created and are ready to use. :::{image} ../../images/kibana-discover-data-view.png :alt: How to set the {{data-source}} in Discover - :class: screenshot + :screenshot: :width: 300px ::: diff --git a/explore-analyze/discover/discover-search-for-relevance.md b/explore-analyze/discover/discover-search-for-relevance.md index 1ab2960b7..97b4e5f69 100644 --- a/explore-analyze/discover/discover-search-for-relevance.md +++ b/explore-analyze/discover/discover-search-for-relevance.md @@ -35,7 +35,7 @@ This example shows how to use **Discover** to list your documents from most rele Your table now sorts documents from most to least relevant. :::{image} ../../images/kibana-discover-search-for-relevance.png :alt: Documents are sorted from most relevant to least relevant. - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/discover/document-explorer.md b/explore-analyze/discover/document-explorer.md index 61aaec357..f73b47733 100644 --- a/explore-analyze/discover/document-explorer.md +++ b/explore-analyze/discover/document-explorer.md @@ -12,7 +12,7 @@ Fine tune your explorations by customizing **Discover** to bring out the the bes :::{image} ../../images/kibana-hello-field.png :alt: A view of the Discover app -:class: screenshot +:screenshot: ::: @@ -75,7 +75,7 @@ To sort by multiple fields: By default, columns are sorted in the order they are added. :::{image} ../../images/kibana-document-explorer-multi-field.png :alt: Multi field sort in the document table - :class: screenshot + :screenshot: :width: 50% ::: @@ -100,7 +100,7 @@ Narrow your results to a subset of documents so you’re comparing just the data 2. Click the **Selected** option, and then select **Show selected documents only**. 
:::{image} ../../images/kibana-document-explorer-compare-data.png :alt: Compare data in the document table - :class: screenshot + :screenshot: :width: 50% ::: @@ -114,5 +114,5 @@ To change the numbers of results you want to display on each page, use the **Row :::{image} ../../images/kibana-document-table-rows-per-page.png :alt: Menu with options for setting the number of results in the document table -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/discover/run-pattern-analysis-discover.md b/explore-analyze/discover/run-pattern-analysis-discover.md index 55e00c2bf..6d7c39f5d 100644 --- a/explore-analyze/discover/run-pattern-analysis-discover.md +++ b/explore-analyze/discover/run-pattern-analysis-discover.md @@ -21,7 +21,7 @@ This example uses the [sample web logs data](../index.md#gs-get-data-into-kibana :::{image} ../../images/kibana-log-pattern-analysis-results.png :alt: Log pattern analysis results in Discover. -:class: screenshot +:screenshot: ::: 5. (optional) Apply filters to one or more patterns. **Discover** only displays documents that match the selected patterns. Additionally, you can remove selected patterns from **Discover**, resulting in the display of only those documents that don’t match the selected pattern. These options enable you to remove unimportant messages and focus on the more important, actionable data during troubleshooting. You can also create a categorization {{anomaly-job}} directly from the **Patterns** tab to find anomalous behavior in the selected pattern. diff --git a/explore-analyze/discover/save-open-search.md b/explore-analyze/discover/save-open-search.md index f178be7ea..83aa0b7f3 100644 --- a/explore-analyze/discover/save-open-search.md +++ b/explore-analyze/discover/save-open-search.md @@ -18,7 +18,7 @@ If you don’t have sufficient privileges to save Discover sessions, the followi :::{image} ../../images/kibana-read-only-badge.png :alt: Example of Discover's read only access indicator in Kibana's header -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/discover/search-sessions.md b/explore-analyze/discover/search-sessions.md index a6c80c8bb..ad1f465f7 100644 --- a/explore-analyze/discover/search-sessions.md +++ b/explore-analyze/discover/search-sessions.md @@ -21,7 +21,7 @@ Save your search session from **Discover** or **Dashboard**, and when your sessi :::{image} ../../images/kibana-search-session.png :alt: Search Session indicator displaying the current state of the search -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/discover/show-field-statistics.md b/explore-analyze/discover/show-field-statistics.md index 7d2301569..ee5e7f410 100644 --- a/explore-analyze/discover/show-field-statistics.md +++ b/explore-analyze/discover/show-field-statistics.md @@ -23,7 +23,7 @@ This example explores the fields in the [sample web logs data](../index.md#gs-ge :::{image} ../../images/kibana-field-statistics-view.png :alt: Field statistics view in Discover showing a summary of document data. - :class: screenshot + :screenshot: ::: 5. Expand the `hour_of_day` field. @@ -31,7 +31,7 @@ This example explores the fields in the [sample web logs data](../index.md#gs-ge :::{image} ../../images/kibana-field-statistics-numeric.png :alt: Field statistics for a numeric field. - :class: screenshot + :screenshot: ::: 6. Expand the `geo.coordinates` field. 
@@ -40,7 +40,7 @@ This example explores the fields in the [sample web logs data](../index.md#gs-ge :::{image} ../../images/kibana-field-statistics-geo.png :alt: Field statistics for a geo field. - :class: screenshot + :screenshot: ::: 7. Explore additional field types to see the statistics that **Discover** provides. diff --git a/explore-analyze/elastic-inference/inference-api.md b/explore-analyze/elastic-inference/inference-api.md index 76bd3a22d..4b8f1a38b 100644 --- a/explore-analyze/elastic-inference/inference-api.md +++ b/explore-analyze/elastic-inference/inference-api.md @@ -22,7 +22,7 @@ The **Inference endpoints** page provides an interface for managing inference en :::{image} ../../images/kibana-inference-endpoints-ui.png :alt: Inference endpoints UI -:class: screenshot +:screenshot: ::: Available actions: diff --git a/explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md b/explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md index 7feb809b3..f899600fe 100644 --- a/explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md +++ b/explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md @@ -163,7 +163,7 @@ PUT _inference/rerank/my-elastic-rerank ``` 1. The `model_id` must be the ID of the built-in Elastic Rerank model: `.rerank-v1`. -2. [Adaptive allocations](../../../explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md#nlp-model-adaptive-allocations) will be enabled with the minimum of 1 and the maximum of 10 allocations. +2. [Adaptive allocations](../../../deploy-manage/autoscaling/trained-model-autoscaling.md#enabling-autoscaling-through-apis-adaptive-allocations) will be enabled with the minimum of 1 and the maximum of 10 allocations. diff --git a/explore-analyze/elastic-inference/inference-api/elser-inference-integration.md b/explore-analyze/elastic-inference/inference-api/elser-inference-integration.md index 5e2a828f9..dac64a6b4 100644 --- a/explore-analyze/elastic-inference/inference-api/elser-inference-integration.md +++ b/explore-analyze/elastic-inference/inference-api/elser-inference-integration.md @@ -102,7 +102,7 @@ The `elser` service is deprecated and will be removed in a future release. Use t When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. ::::{note} -For more information on how to optimize your ELSER endpoints, refer to [the ELSER recommendations](../../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md#elser-recommendations) section in the model documentation. To learn more about model autoscaling, refer to the [trained model autoscaling](../../../explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md) page. +For more information on how to optimize your ELSER endpoints, refer to [the ELSER recommendations](../../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md#elser-recommendations) section in the model documentation. To learn more about model autoscaling, refer to the [trained model autoscaling](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) page. 
:::: diff --git a/explore-analyze/find-and-organize/data-views.md b/explore-analyze/find-and-organize/data-views.md index 07591d7b1..b3133aef8 100644 --- a/explore-analyze/find-and-organize/data-views.md +++ b/explore-analyze/find-and-organize/data-views.md @@ -54,7 +54,7 @@ If you collected data using one of the {{kib}} [ingest options](../../manage-dat :::{image} ../../images/kibana-discover-data-view.png :alt: How to set the {{data-source}} in Discover - :class: screenshot + :screenshot: :width: 50% ::: @@ -91,7 +91,7 @@ A temporary {{data-source}} remains in your space until you change apps, or unti :::{image} ../../images/ad-hoc-data-view.gif :alt: how to create an ad-hoc data view -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/explore-analyze/find-and-organize/files.md b/explore-analyze/find-and-organize/files.md index 3a710bd90..0fc3242ef 100644 --- a/explore-analyze/find-and-organize/files.md +++ b/explore-analyze/find-and-organize/files.md @@ -14,5 +14,5 @@ You can access and manage all of the files currently stored in {{kib}} from the :::{image} ../../images/serverless-file-management.png :alt: Files UI -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/find-and-organize/find-apps-and-objects.md b/explore-analyze/find-and-organize/find-apps-and-objects.md index 1259b560b..a03946c93 100644 --- a/explore-analyze/find-and-organize/find-apps-and-objects.md +++ b/explore-analyze/find-and-organize/find-apps-and-objects.md @@ -12,7 +12,7 @@ To quickly find apps and the objects you create, use the search field in the glo :::{image} ../../images/kibana-app-navigation-search.png :alt: Example of searching for apps -:class: screenshot +:screenshot: :width: 60% ::: @@ -33,7 +33,7 @@ This example searches for visualizations with the tag `design` . :::{image} ../../images/kibana-tags-search.png :alt: Example of searching for tags -:class: screenshot +:screenshot: :width: 60% ::: diff --git a/explore-analyze/find-and-organize/reports.md b/explore-analyze/find-and-organize/reports.md index a03c30ecd..4e1b2d90f 100644 --- a/explore-analyze/find-and-organize/reports.md +++ b/explore-analyze/find-and-organize/reports.md @@ -16,7 +16,7 @@ To view and manage reports, go to **Management** > **Reporting**. :::{image} ../../images/serverless-reports-management.png :alt: {{reports-app}} -:class: screenshot +:screenshot: ::: You can download or view details about the report by clicking the icons in the actions menu. 
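The data views page above walks through creating persistent and ad-hoc data views in the UI. For teams that script their setup, the same result can come from the {{kib}} data views API. The following is a minimal sketch, not taken from the page itself: it assumes a Console version that accepts `kbn:`-prefixed requests to {{kib}} endpoints, and the index pattern, display name, and time field are illustrative placeholders.

```console
# Create a persistent data view over an assumed sample index (placeholder values).
POST kbn:/api/data_views/data_view
{
  "data_view": {
    "title": "kibana_sample_data_logs*",
    "name": "Sample web logs",
    "timeFieldName": "timestamp"
  }
}
```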
diff --git a/explore-analyze/find-and-organize/saved-objects.md b/explore-analyze/find-and-organize/saved-objects.md index 951f5f4b6..8786d9344 100644 --- a/explore-analyze/find-and-organize/saved-objects.md +++ b/explore-analyze/find-and-organize/saved-objects.md @@ -44,7 +44,7 @@ You can find the **Saved Objects** page using the navigation menu or the [global :::{image} ../../images/kibana-management-saved-objects.png :alt: Saved Objects -:class: screenshot +:screenshot: ::: ## Permissions [_required_permissions_5] diff --git a/explore-analyze/find-and-organize/tags.md b/explore-analyze/find-and-organize/tags.md index 6835a0eb4..65ea3e984 100644 --- a/explore-analyze/find-and-organize/tags.md +++ b/explore-analyze/find-and-organize/tags.md @@ -23,7 +23,7 @@ To get started, go to the **Tags** management page using the navigation menu or :::{image} ../../images/kibana-tag-management-section.png :alt: Tags management -:class: screenshot +:screenshot: ::: @@ -62,7 +62,7 @@ To assign and remove tags, you must have `write` permission on the objects to wh :::{image} ../../images/kibana-manage-assignments-flyout.png :alt: Assign flyout - :class: screenshot + :screenshot: :width: 50% ::: diff --git a/explore-analyze/index.md b/explore-analyze/index.md index e594b2849..54676f789 100644 --- a/explore-analyze/index.md +++ b/explore-analyze/index.md @@ -7,6 +7,7 @@ mapped_urls: - https://www.elastic.co/guide/en/kibana/current/introduction.html#visualize-and-analyze - https://www.elastic.co/guide/en/kibana/current/get-started.html - https://www.elastic.co/guide/en/kibana/current/accessibility.html + - https://www.elastic.co/guide/en/kibana/current/introduction.html --- # Explore and analyze diff --git a/explore-analyze/machine-learning/anomaly-detection/geographic-anomalies.md b/explore-analyze/machine-learning/anomaly-detection/geographic-anomalies.md index 3da797b5c..2d9301fc6 100644 --- a/explore-analyze/machine-learning/anomaly-detection/geographic-anomalies.md +++ b/explore-analyze/machine-learning/anomaly-detection/geographic-anomalies.md @@ -29,7 +29,7 @@ To get the best results from {{ml}} analytics, you must understand your data. Yo :::{image} ../../../images/machine-learning-weblogs-data-visualizer-geopoint.jpg :alt: A screenshot of a geo_point field in {{data-viz}} -:class: screenshot +:screenshot: ::: ## Create an {{anomaly-job}} [geographic-anomalies-jobs] @@ -47,7 +47,7 @@ For example, create a job that analyzes the sample eCommerce orders data set to :::{image} ../../../images/machine-learning-ecommerce-advanced-wizard-geopoint.jpg :alt: A screenshot of creating an {{anomaly-job}} using the eCommerce data in {{kib}} -:class: screenshot +:screenshot: ::: ::::{dropdown} API example @@ -108,7 +108,7 @@ Alternatively, create a job that analyzes the sample web logs data set to detect :::{image} ../../../images/machine-learning-weblogs-advanced-wizard-geopoint.jpg :alt: A screenshot of creating an {{anomaly-job}} using the web logs data in {{kib}} -:class: screenshot +:screenshot: ::: ::::{dropdown} API example @@ -181,7 +181,7 @@ When you select a period that contains an anomaly in the **Anomaly Explorer** sw :::{image} ../../../images/machine-learning-ecommerce-anomaly-explorer-geopoint.jpg :alt: A screenshot of an anomalous event in the eCommerce data in Anomaly Explorer -:class: screenshot +:screenshot: ::: A "typical" value indicates a centroid of a cluster of previously observed locations that is closest to the "actual" location at that time. 
For example, there may be one centroid near the user’s home and another near the user’s work place since there are many records associated with these distinct locations. @@ -190,7 +190,7 @@ Likewise, there are time periods in the web logs sample data where there are bot :::{image} ../../../images/machine-learning-weblogs-anomaly-explorer-geopoint.jpg :alt: A screenshot of an anomalous event in the web logs data in Anomaly Explorer -:class: screenshot +:screenshot: ::: You can use the top influencer values to further filter your results and identify possible contributing factors or patterns of behavior. @@ -199,7 +199,7 @@ You can also view the anomaly in **Maps** by clicking **View in Maps** in the ac :::{image} ../../../images/machine-learning-view-in-maps.jpg :alt: A screenshot of the anomaly table with the Action menu opened and the "View in Maps" option selected -:class: screenshot +:screenshot: ::: When you try this type of {{anomaly-job}} with your own data, it might take some experimentation to find the best combination of buckets, detectors, and influencers to detect the type of behavior you’re seeking. @@ -214,7 +214,7 @@ For example, you can extend the map example from [Build a map to compare metrics :::{image} ../../../images/machine-learning-weblogs-anomaly-map.jpg :alt: A screenshot of an anomaly within the Maps app -:class: screenshot +:screenshot: ::: ## What’s next [geographic-anomalies-next] diff --git a/explore-analyze/machine-learning/anomaly-detection/mapping-anomalies.md b/explore-analyze/machine-learning/anomaly-detection/mapping-anomalies.md index 098bb56ff..4006a3634 100644 --- a/explore-analyze/machine-learning/anomaly-detection/mapping-anomalies.md +++ b/explore-analyze/machine-learning/anomaly-detection/mapping-anomalies.md @@ -22,7 +22,7 @@ If you have fields that contain valid vector layers, you can use the **{{data-vi :::{image} ../../../images/machine-learning-weblogs-data-visualizer-choropleth.png :alt: A screenshot of a field that contains vector layer values in {{data-viz}} -:class: screenshot +:screenshot: ::: ## Create an {{anomaly-job}} [mapping-anomalies-jobs] @@ -33,7 +33,7 @@ For example, use the multi-metric job wizard to create a job that analyzes the s :::{image} ../../../images/machine-learning-weblogs-multimetric-wizard-vector.png :alt: A screenshot of creating an {{anomaly-job}} using the web logs data in {{kib}} -:class: screenshot +:screenshot: ::: ::::{dropdown} API example @@ -100,7 +100,7 @@ If you used APIs to create the jobs and {{dfeeds}}, you cannot see them in {{kib :::{image} ../../../images/machine-learning-weblogs-anomaly-explorer-vectors.png :alt: A screenshot of the anomaly count by location in Anomaly Explorer -:class: screenshot +:screenshot: ::: The **Anomaly Explorer** contains a map, which is affected by your swim lane selections. It colors each location to reflect the number of anomalies in that selected time period. Locations that have few anomalies are indicated in blue; locations with many anomalies are red. Thus you can quickly see the locations that are generating the most anomalies. If your vector layers define regions, counties, or postal codes, you can zoom in for fine details. 
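The geographic and mapping tutorials above rely on the {{kib}} wizards, with API equivalents tucked into dropdowns. For orientation, the shape of such a job is shown below as a minimal sketch rather than the tutorial's exact configuration: the job ID, source index, `geo_point` field, and `by` field are placeholders.

```console
# A detector that models typical locations per user (placeholder names throughout).
PUT _ml/anomaly_detectors/example-geo-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "lat_long",
        "field_name": "geoip.location",
        "by_field_name": "user"
      }
    ],
    "influencers": [ "user" ]
  },
  "data_description": { "time_field": "@timestamp" }
}

# The datafeed that supplies the job with documents from the source index.
PUT _ml/datafeeds/datafeed-example-geo-job
{
  "job_id": "example-geo-job",
  "indices": [ "my-orders" ]
}
```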
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-explain.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-explain.md index 7c65106a5..67b64a035 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-explain.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-explain.md @@ -42,7 +42,7 @@ The process when the anomaly detection algorithm adjusts the anomaly scores of p :::{image} ../../../images/machine-learning-renormalization-score-reduction.jpg :alt: Example of a record score reduction in {{kib}} -:class: screenshot +:screenshot: ::: ## Other factors for score reduction [other-factors] @@ -55,7 +55,7 @@ Real-world anomalies often show the impacts of several factors. The **Anomaly ex :::{image} ../../../images/machine-learning-detailed-single-metric.jpg :alt: Detailed view of the Single Metric Viewer in {{kib}} -:class: screenshot +:screenshot: ::: You can also find this information in the `anomaly_score_explanation` field of the [get record API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-get-records). diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-forecast.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-forecast.md index 8447b7d3e..c7b2094b4 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-forecast.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-forecast.md @@ -19,7 +19,7 @@ Each forecast has a unique ID, which you can use to distinguish between forecast :::{image} ../../../images/machine-learning-overview-forecast.jpg :alt: Example screenshot from the Machine Learning Single Metric Viewer in Kibana -:class: screenshot +:screenshot: ::: The yellow line in the chart represents the predicted data values. The shaded yellow area represents the bounds for the predicted values, which also gives an indication of the confidence of the predictions. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md index a9baa16b0..cc0ec94dd 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md @@ -26,7 +26,7 @@ You can create {{anomaly-jobs}} by using the [create {{anomaly-jobs}} API](https :::{image} ../../../images/machine-learning-ml-create-job.png :alt: Create New Job -:class: screenshot +:screenshot: ::: * The single metric wizard creates simple jobs that have a single detector. A *detector* applies an analytical function to specific fields in your data. In addition to limiting the number of detectors, the single metric wizard omits many of the more advanced configuration options. 
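The forecasting overview above describes the **Forecast** button in the **Single Metric Viewer**; the same operation is available through the {{ml}} APIs. This is a minimal sketch with a placeholder job ID and forecast ID: the first call asks the job to extrapolate beyond its latest processed record, and the second removes a forecast once it has served its purpose.

```console
# Forecast one week ahead; keep the results for two weeks.
POST _ml/anomaly_detectors/example-job/_forecast
{
  "duration": "1w",
  "expires_in": "2w"
}

# Delete a forecast by ID (the ID is returned by the request above).
DELETE _ml/anomaly_detectors/example-job/_forecast/AbCd1234
```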
@@ -190,7 +190,7 @@ You can see the list of model snapshots for each job with the [get model snapsho :::{image} ../../../images/machine-learning-ml-model-snapshots.png :alt: Example screenshot with a list of model snapshots -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md index ab7188edb..ba46fe63c 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-view-results.md @@ -25,7 +25,7 @@ The {{ml}} analytics enhance the anomaly score for each bucket by considering co :::{image} ../../../images/machine-learning-multibucketanalysis.jpg :alt: Examples of anomalies with multi-bucket impact in {{kib}} -:class: screenshot +:screenshot: ::: In this example, you can see that some of the anomalies fall within the shaded blue area, which represents the bounds for the expected values. The bounds are calculated per bucket, but multi-bucket analysis is not limited by that scope. @@ -36,7 +36,7 @@ If you have [{{anomaly-detect-cap}} alert rules](https://www.elastic.co/guide/en :::{image} ../../../images/machine-learning-anomaly-explorer-alerts.png :alt: Alerts table in the Anomaly Explorer -:class: screenshot +:screenshot: ::: If you have more than one {{anomaly-job}}, you can also obtain *overall bucket* results, which combine and correlate anomalies from multiple jobs into an overall score. When you view the results for job groups in {{kib}}, it provides the overall bucket scores. For more information, see [Get overall buckets API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-get-overall-buckets). @@ -51,7 +51,7 @@ For example, the `high_sum_total_sales` {{anomaly-job}} for the eCommerce orders :::{image} ../../../images/machine-learning-influencers.jpg :alt: Influencers in the {{kib}} Anomaly Explorer -:class: screenshot +:screenshot: ::: On the left is a list of the top influencers for all of the detected anomalies in that same time period. The list includes maximum anomaly scores, which in this case are aggregated for each influencer, for each bucket, across all detectors. There is also a total sum of the anomaly scores for each influencer. You can use this list to help you narrow down the contributing factors and focus on the most anomalous entities. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md index 28d8e9c9c..10bd672bc 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md @@ -30,7 +30,7 @@ Categorization is a {{ml}} process that tokenizes a text field, clusters similar :::{image} ../../../images/machine-learning-categorization-wizard.png :alt: Creating a categorization job in Kibana - :class: screenshot + :screenshot: ::: 5. Click **Next**. @@ -71,7 +71,7 @@ Use the **Anomaly Explorer** in {{kib}} to view the analysis results: :::{image} ../../../images/machine-learning-ml-category-anomalies.png :alt: Categorization results in the Anomaly Explorer -:class: screenshot +:screenshot: ::: For this type of job, the results contain extra information for each anomaly: the name of the category (for example, `mlcategory 2`) and examples of the messages in that category. 
You can use these details to investigate occurrences of unusually high message counts. @@ -98,7 +98,7 @@ If you use the categorization wizard in {{kib}}, you can see which categorizatio :::{image} ../../../images/machine-learning-ml-category-analyzer.png :alt: Editing the categorization analyzer in Kibana -:class: screenshot +:screenshot: ::: The categorization analyzer can refer to a built-in {{es}} analyzer or a combination of zero or more character filters, a tokenizer, and zero or more token filters. In this example, adding a [`pattern_replace` character filter](elasticsearch://reference/data-analysis/text-analysis/analysis-pattern-replace-charfilter.md) achieves the same behavior as the `categorization_filters` job configuration option described earlier. For more details about these properties, refer to the [`categorization_analyzer` API object](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-job#ml-put-job-request-body). diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md index d68dde838..0fd2aecd2 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md @@ -28,7 +28,7 @@ Population analysis is resource-efficient and scales well, enabling the analysis 4. Choose a population field - it’s the `clientip` field in this example - and the metric you want to use for the analysis - `Mean(bytes)` in this example. :::{image} ../../../images/machine-learning-ml-population-wizard.png :alt: Creating a population job in Kibana - :class: screenshot + :screenshot: ::: 5. Click **Next**. @@ -73,14 +73,14 @@ Use the **Anomaly Explorer** in {{kib}} to view the analysis results: :::{image} ../../../images/machine-learning-ml-population-anomalies.png :alt: Population results in the Anomaly Explorer -:class: screenshot +:screenshot: ::: The results are often quite sparse. There might be just a few data points for the selected time period. Population analysis is particularly useful when you have many entities and the data for specific entitles is sporadic or sparse. If you click on a section in the timeline or swim lanes, you can see more details about the anomalies: :::{image} ../../../images/machine-learning-ml-population-anomaly.png :alt: Anomaly details for a specific user -:class: screenshot +:screenshot: ::: In this example, the client IP address `167.145.234.154` received a high volume of bytes on the date and time shown. This event is anomalous because the mean is four times higher than the expected behavior of the population. 
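The population example above pairs a `Mean(bytes)` metric with `clientip` as the population field. In API terms, that maps onto a detector with an `over_field_name`, sketched below with placeholder job and field names drawn from the description rather than copied from the tutorial's own API listing; the time field is assumed to be the sample data's `timestamp`.

```console
# Each client IP is compared against the behavior of the whole population per bucket.
PUT _ml/anomaly_detectors/example-population-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "bytes",
        "over_field_name": "clientip"
      }
    ],
    "influencers": [ "clientip" ]
  },
  "data_description": { "time_field": "timestamp" }
}
```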
diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md index 2c407d9b1..03c591d65 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-transform.md @@ -141,7 +141,7 @@ You can alternatively use {{kib}} to create an advanced {{anomaly-job}} that use :::{image} ../../../images/machine-learning-ml-runtimefields.jpg :alt: Using runtime_mappings in {{dfeed}} config via {{kib}} -:class: screenshot +:screenshot: ::: $$$ml-configuring-transform2$$$ diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-url.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-url.md index 43f706ff7..a820da60e 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-url.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-url.md @@ -12,14 +12,14 @@ You can optionally attach one or more custom URLs to your {{anomaly-jobs}}. Thes :::{image} ../../../images/machine-learning-ml-customurl.jpg :alt: An example of the custom URL links in the Anomaly Explorer anomalies table -:class: screenshot +:screenshot: ::: When you create or edit an {{anomaly-job}} in {{kib}}, it simplifies the creation of the custom URLs for {{kib}} dashboards and the **Discover** app and it enables you to test your URLs. For example: :::{image} ../../../images/machine-learning-ml-customurl-edit.gif :alt: Add a custom URL in {{kib}} -:class: screenshot +:screenshot: ::: For each custom URL, you must supply the URL and a label, which is the link text that appears in the anomalies table. You can also optionally supply a time range. When you link to **Discover** or a {{kib}} dashboard, you’ll have additional options for specifying the pertinent {{data-source}} or dashboard name and query entities. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-getting-started.md b/explore-analyze/machine-learning/anomaly-detection/ml-getting-started.md index 2f02272b0..47c11f75c 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-getting-started.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-getting-started.md @@ -55,13 +55,13 @@ To get the best results from {{ml}} analytics, you must understand your data. Yo In particular, look at the `clientip`, `response.keyword`, and `url.keyword` fields, since we’ll use them in our {{anomaly-jobs}}. For these fields, the {{data-viz}} provides the number of distinct values, a list of the top values, and the number and percentage of documents that contain the field. For example: :::{image} ../../../images/machine-learning-ml-gs-data-keyword.jpg :alt: {{data-viz}} output for ip and keyword fields - :class: screenshot + :screenshot: ::: For numeric fields, the {{data-viz}} provides information about the minimum, median, maximum, and top values, the number of distinct values, and their distribution. You can use the distribution chart to get a better idea of how the values in the data are clustered. 
For example: :::{image} ../../../images/machine-learning-ml-gs-data-metric.jpg :alt: {{data-viz}} for sample web logs - :class: screenshot + :screenshot: ::: ::::{tip} @@ -117,7 +117,7 @@ Depending on the capacity of your machine, you might need to wait a few seconds :::{image} ../../../images/machine-learning-ml-gs-web-results.jpg :alt: Create jobs for the sample web logs -:class: screenshot +:screenshot: ::: The {{ml-features}} analyze the input stream of data, model its behavior, and perform analysis based on the detectors in each job. When an event occurs outside of the model, that event is identified as an anomaly. You can immediately see that all three jobs have found anomalies, which are indicated by red blocks in the swim lanes for each job. @@ -136,7 +136,7 @@ Let’s start by looking at this simple job in the **Single Metric Viewer**: :::{image} ../../../images/machine-learning-ml-gs-job1-analysis.jpg :alt: Single Metric Viewer for low_request_rate job -:class: screenshot +:screenshot: ::: This view contains a chart that represents the actual and expected values over time. It is available only if the job has `model_plot_config` enabled. It can display only a single time series. @@ -160,7 +160,7 @@ For each anomaly, you can see key details such as the time, the actual and expec :::{image} ../../../images/machine-learning-ml-gs-job1-anomalies.jpg :alt: Single Metric Viewer Anomalies for low_request_rate job -:class: screenshot +:screenshot: ::: In the **Actions** column, there are additional options, such as **Raw data** which generates a query for the relevant documents in **Discover**. You can optionally add more links in the actions menu with [custom URLs](ml-configuring-url.md). @@ -173,7 +173,7 @@ You can optionally annotate your job results by drag-selecting a period of time :::{image} ../../../images/machine-learning-ml-gs-user-annotation.jpg :alt: A user annotation in the Single Metric Viewer -:class: screenshot +:screenshot: ::: After you have identified anomalies, often the next step is to try to determine the context of those situations. For example, are there other factors that are contributing to the problem? Are the anomalies confined to particular applications or servers? You can begin to troubleshoot these situations by layering additional jobs or creating multi-metric jobs. @@ -200,7 +200,7 @@ For this particular job, you can choose to see separate swim lanes for each clie :::{image} ../../../images/machine-learning-ml-gs-job2-explorer.jpg :alt: Anomaly explorer for response_code_rates job -:class: screenshot +:screenshot: ::: Since the job uses `response.keyword` as its *partition field*, the analysis is segmented such that you have completely different baselines for each distinct value of that field. By looking at temporal patterns on a per entity basis, you might spot things that might have otherwise been hidden in the lumped view. @@ -209,7 +209,7 @@ Under the anomaly timeline, there is a section that contains annotations. You ca :::{image} ../../../images/machine-learning-ml-gs-annotations.jpg :alt: Annotations section in the Anomaly Explorer -:class: screenshot +:screenshot: ::: On the left side of the **Anomaly Explorer**, there is a list of the top influencers for all of the detected anomalies in that same time period. The list includes maximum anomaly scores, which in this case are aggregated for each influencer, for each bucket, across all detectors. There is also a total sum of the anomaly scores for each influencer. 
You can use this list to help you narrow down the contributing factors and focus on the most anomalous entities. @@ -218,7 +218,7 @@ Click on a section in the swim lanes to obtain more information about the anomal :::{image} ../../../images/machine-learning-ml-gs-job2-explorer-anomaly.jpg :alt: Anomaly charts for the response_code_rates job -:class: screenshot +:screenshot: ::: You can see exact times when anomalies occurred. If there are multiple detectors or metrics in the job, you can see which caught the anomaly. You can also switch to viewing this time series in the **Single Metric Viewer** by clicking the **View Series** button in the **Actions** menu. @@ -227,7 +227,7 @@ Below the charts, there is a table that provides more information, such as the t :::{image} ../../../images/machine-learning-ml-gs-job2-explorer-table.jpg :alt: Anomaly tables for the response_code_rates job -:class: screenshot +:screenshot: ::: If your job has multiple detectors, the table aggregates the anomalies to show the highest severity anomaly per detector and entity, which is the field value that is displayed in the **found for** column. To view all the anomalies without any aggregation, set the **Interval** to `Show all`. @@ -246,7 +246,7 @@ If you examine the results from the `url_scanning` {{anomaly-job}} in the **Anom :::{image} ../../../images/machine-learning-ml-gs-job3-explorer.jpg :alt: Anomaly charts for the url_scanning job -:class: screenshot +:screenshot: ::: In this case, the metrics for each client IP are analyzed relative to other client IPs in each bucket and we can once again see that the `30.156.16.164` client IP is behaving abnormally. @@ -255,7 +255,7 @@ If you want to play with another example of a population {{anomaly-job}}, add th :::{image} ../../../images/machine-learning-ml-gs-job4-explorer.jpg :alt: Anomaly charts for the high_sum_total_sales job -:class: screenshot +:screenshot: ::: ## Create forecasts [sample-data-forecasts] @@ -268,19 +268,19 @@ To create a forecast in {{kib}}: 2. Click **Forecast**. :::{image} ../../../images/machine-learning-ml-gs-forecast.png :alt: Create a forecast from the Single Metric Viewer - :class: screenshot + :screenshot: ::: 3. Specify a duration for your forecast. This value indicates how far to extrapolate beyond the last record that was processed. You must use [time units](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#time-units). In this example, the duration is one week (`1w`): :::{image} ../../../images/machine-learning-ml-gs-duration.png :alt: Specify a duration of 1w - :class: screenshot + :screenshot: ::: 4. View the forecast in the **Single Metric Viewer**: :::{image} ../../../images/machine-learning-ml-gs-forecast-results.png :alt: View a forecast from the Single Metric Viewer - :class: screenshot + :screenshot: ::: The yellow line in the chart represents the predicted data values. The shaded yellow area represents the bounds for the predicted values, which also gives an indication of the confidence of the predictions. Note that the bounds generally increase with time (that is to say, the confidence levels decrease), since you are forecasting further into the future. Eventually if the confidence levels are too low, the forecast stops. @@ -288,7 +288,7 @@ To create a forecast in {{kib}}: 5. Optional: Compare the forecast to actual data. 
:::{image} ../../../images/machine-learning-ml-gs-forecast-actual.png :alt: View a forecast over actual data in the Single Metric Viewer - :class: screenshot + :screenshot: ::: As the job processes more data, you can click the **Forecast** button again and choose to see one of your forecasts overlaid on the actual data. The chart then contains the actual data values, the bounds for the expected values, the anomalies, the forecast data values, and the bounds for the forecast. This combination of actual and forecast data gives you an indication of how well the {{ml-features}} can extrapolate the future behavior of the data. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md b/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md index 5e1aa277d..41d9ed8e0 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-jobs-from-lens.md @@ -32,14 +32,14 @@ You need to have a compatible visualization on **Dashboard** to create an {{anom :::{image} ../../../images/machine-learning-create-ad-job-from-lens.jpg :alt: A screenshot of a chart with the Options menu opened -:class: screenshot +:screenshot: ::: If the visualization has multiple compatible layers, you can select which layer to use for creating the {{anomaly-job}}. :::{image} ../../../images/machine-learning-select-layer-for-job.jpg :alt: A screenshot of a chart with the Options menu opened -:class: screenshot +:screenshot: ::: If multiple fields are added to the chart or you selected a `Break down by` field, the multi metric job wizard is used for creating the job. For a single metric chart, the single metric wizard is used. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-reverting-model-snapshot.md b/explore-analyze/machine-learning/anomaly-detection/ml-reverting-model-snapshot.md index 55bcd7028..689db519d 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-reverting-model-snapshot.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-reverting-model-snapshot.md @@ -15,7 +15,7 @@ mapped_pages: 3. Open the job details and navigate to the **Model Snapshots** tab. :::{image} ../../../images/machine-learning-anomaly-job-model-snapshots.jpg :alt: A screenshot of a job with the Model Snapshots tab opened - :class: screenshot + :screenshot: ::: 4. Select a snapshot from the list and click the **Revert** icon under **Actions**. @@ -24,7 +24,7 @@ mapped_pages: * You can select a time range you want to avoid during the replay by declaring a calendar event. This way, you can skip any problematic time frame that you want the {{anomaly-job}} to avoid. :::{image} ../../../images/machine-learning-revert-model-snapshot.jpg :alt: A screenshot of a revert model snapshot flyout - :class: screenshot + :screenshot: ::: 6. Click **Apply**. 
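The model snapshot steps above are driven from the job's **Model Snapshots** tab; the equivalent API flow is sketched below with placeholder job, datafeed, and snapshot IDs. It assumes the datafeed is stopped and the job closed before the revert, which mirrors the state the UI enforces.

```console
# Quiesce the job before reverting (placeholder IDs).
POST _ml/datafeeds/datafeed-example-job/_stop
POST _ml/anomaly_detectors/example-job/_close

# Revert to an earlier snapshot and discard results created after it.
POST _ml/anomaly_detectors/example-job/model_snapshots/1575402237/_revert
{
  "delete_intervening_results": true
}
```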
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md index f45c2c825..68541beb8 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md @@ -136,7 +136,7 @@ If your objective is to maximize accuracy, the scores are weighted to maximize t :::{image} ../../../images/machine-learning-confusion-matrix-binary-accuracy.jpg :alt: A confusion matrix with the correct predictions highlighted -:class: screenshot +:screenshot: ::: ::::{tip} @@ -147,7 +147,7 @@ By default, {{classanalysis}} jobs accept a slight degradation of the overall ac :::{image} ../../../images/machine-learning-confusion-matrix-multiclass-recall.jpg :alt: A confusion matrix with a row highlighted -:class: screenshot +:screenshot: ::: For each class, the recall is calculated as the number of correct predictions divided by the sum of all the other predicted labels in that row. This value is represented as a percentage in each cell of the confusion matrix. The class scores are then weighted to favor predictions that result in the highest recall values across the training data. This objective typically performs better than accuracy when you have highly imbalanced data. @@ -162,19 +162,19 @@ The model that you created is stored as {{es}} documents in internal indices. In 2. Find the model you want to deploy in the list and click **Deploy model** in the **Actions** menu. :::{image} ../../../images/machine-learning-ml-dfa-trained-models-ui.png :alt: The trained models UI in {{kib}} - :class: screenshot + :screenshot: ::: 3. Create an {{infer}} pipeline to be able to use the model against new data through the pipeline. Add a name and a description or use the default values. :::{image} ../../../images/machine-learning-ml-dfa-inference-pipeline.png :alt: Creating an inference pipeline - :class: screenshot + :screenshot: ::: 4. Configure the pipeline processors or use the default settings. :::{image} ../../../images/machine-learning-ml-dfa-inference-processor.png :alt: Configuring an inference processor - :class: screenshot + :screenshot: ::: 5. Configure to handle ingest failures or use the default settings. @@ -282,7 +282,7 @@ To predict whether a specific flight is delayed: You can use the wizard on the **{{ml-app}}** > **Data Frame Analytics** tab in {{kib}} or the [create {{dfanalytics-jobs}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-data-frame-analytics) API. :::{image} ../../../images/machine-learning-flights-classification-job-1.jpg :alt: Creating a {{dfanalytics-job}} in {{kib}} - :class: screenshot + :screenshot: ::: 1. Choose `kibana_sample_data_flights` as the source index. @@ -292,7 +292,7 @@ To predict whether a specific flight is delayed: The wizard includes a scatterplot matrix, which enables you to explore the relationships between the numeric fields. The color of each point is affected by the value of the {{depvar}} for that document, as shown in the legend. You can highlight an area in one of the charts and the corresponding area is also highlighted in the rest of the charts. You can use this matrix to help you decide which fields to include or exclude. 
:::{image} ../../../images/machine-learning-flights-classification-scatterplot.png :alt: A scatterplot matrix for three fields in {{kib}} - :class: screenshot + :screenshot: ::: If you want these charts to represent data from a larger sample size or from a randomized selection of documents, you can change the default behavior. However, a larger sample size might slow down the performance of the matrix and a randomized selection might put more load on the cluster due to the more intensive query. 5. Choose a training percent of `10` which means it randomly selects 10% of the source data for training. While that value is low for this example, for many large data sets using a small training sample greatly reduces runtime without impacting accuracy. @@ -357,7 +357,7 @@ POST _ml/data_frame/analytics/model-flight-delays-classification/_start :::{image} ../../../images/machine-learning-flights-classification-details.jpg :alt: Statistics for a {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: When the job stops, the results are ready to view and evaluate. To learn more about the job phases, see [How {{dfanalytics-jobs}} work](ml-dfa-phases.md). @@ -466,7 +466,7 @@ When you view the {{classification}} results in {{kib}}, it shows the contents o :::{image} ../../../images/machine-learning-flights-classification-results.jpg :alt: Destination index table for a classification job in {{kib}} -:class: screenshot +:screenshot: ::: The table shows a column for the {{depvar}} (`FlightDelay`), which contains the ground truth values that you are trying to predict. It also shows a column for the predicted values (`ml.FlightDelay_prediction`), which were generated by the {{classanalysis}}. The `ml.is_training` column indicates whether the document was used in the training or testing data set. You can use the **Training** and **Testing** filter options to refine the contents of the results table. You can also enable histogram charts to get a better understanding of the distribution of values. @@ -518,14 +518,14 @@ If you chose to calculate {{feat-imp}}, the destination index also contains `ml. :::{image} ../../../images/machine-learning-flights-classification-total-importance.jpg :alt: Total {{feat-imp}} values in {{kib}} -:class: screenshot +:screenshot: ::: You can also see the {{feat-imp}} values for each individual prediction in the form of a decision plot: :::{image} ../../../images/machine-learning-flights-classification-importance.png :alt: A decision plot for {{feat-imp}} values in {{kib}} -:class: screenshot +:screenshot: ::: In {{kib}}, the decision path shows the relative impact of each feature on the probability of the prediction. The features with the most significant positive or negative impact appear at the top of the decision plot. Thus in this example, the features related to flight time and distance had the most significant influence on the probability value for this prediction. This type of information can help you to understand how models arrive at their predictions. It can also indicate which aspects of your data set are most influential or least useful when you are training and tuning your model. 
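The evaluation that follows in this tutorial can also be run directly against the destination index with the evaluate API. The sketch below uses the `FlightDelay` ground truth and `ml.FlightDelay_prediction` fields named above, restricts the calculation to the testing split via `ml.is_training`, and uses a placeholder destination index name because the tutorial's exact index is not repeated here.

```console
POST _ml/data_frame/_evaluate
{
  "index": "df-flight-delays",
  "query": { "term": { "ml.is_training": false } },
  "evaluation": {
    "classification": {
      "actual_field": "FlightDelay",
      "predicted_field": "ml.FlightDelay_prediction",
      "metrics": {
        "accuracy": {},
        "recall": {},
        "multiclass_confusion_matrix": {}
      }
    }
  }
}
```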
@@ -673,7 +673,7 @@ Though you can look at individual results and compare the predicted value (`ml.F :::{image} ../../../images/machine-learning-flights-classification-evaluation.png :alt: Evaluation of a classification job in {{kib}} -:class: screenshot +:screenshot: ::: ::::{note} @@ -688,7 +688,7 @@ Likewise if you select other quadrants in the matrix, it shows the number of doc :::{image} ../../../images/machine-learning-flights-classification-roc-curve.jpg :alt: Evaluation of a classification job in {{kib}} – ROC curve -:class: screenshot +:screenshot: ::: You can also generate these metrics with the [{{dfanalytics}} evaluate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-evaluate-data-frame). For more information about interpreting the evaluation metrics, see [6. Evaluate and interpret the result](#ml-dfanalytics-classification-evaluation). diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-custom-urls.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-custom-urls.md index fe5d38162..985df8449 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-custom-urls.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-custom-urls.md @@ -12,14 +12,14 @@ You can optionally attach one or more custom URLs to your {{dfanalytics-jobs}}. :::{image} ../../../images/machine-learning-ml-dfa-custom-url.png :alt: Creating a custom URL during job creation -:class: screenshot +:screenshot: ::: When you create or edit an {{dfanalytics-job}} in {{kib}}, it simplifies the creation of the custom URLs for {{kib}} dashboards and the **Discover** app and it enables you to test your URLs. For example: :::{image} ../../../images/machine-learning-ml-dfa-custom-url-edit.png :alt: Add a custom URL in {{kib}} -:class: screenshot +:screenshot: ::: For each custom URL, you must supply a label. You can also optionally supply a time range. When you link to **Discover** or a {{kib}} dashboard, you’ll have additional options for specifying the pertinent {{data-source}} or dashboard name and query entities. diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md index 812f0b5bb..2b7ed25e7 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-finding-outliers.md @@ -122,7 +122,7 @@ The goal of {{oldetection}} is to find the most unusual documents in an index. L You can preview the {{transform}} before you create it in **{{stack-manage-app}}** > **Transforms**: :::{image} ../../../images/machine-learning-logs-transform-preview.jpg :alt: Creating a {{transform}} in {{kib}} - :class: screenshot + :screenshot: ::: Alternatively, you can use the [preview {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform). @@ -239,13 +239,13 @@ POST _transform/logs-by-clientip/_start In the wizard on the **Machine Learning** > **Data Frame Analytics** page in {{kib}}, select your new {{data-source}} then use the default values for {{oldetection}}. 
For example: :::{image} ../../../images/machine-learning-weblog-outlier-job-1.jpg :alt: Create a {{dfanalytics-job}} in {{kib}} - :class: screenshot + :screenshot: ::: The wizard includes a scatterplot matrix, which enables you to explore the relationships between the fields. You can use that information to help you decide which fields to include or exclude from the analysis. :::{image} ../../../images/machine-learning-weblog-outlier-scatterplot.jpg :alt: A scatterplot matrix for three fields in {{kib}} - :class: screenshot + :screenshot: ::: If you want these charts to represent data from a larger sample size or from a randomized selection of documents, you can change the default behavior. However, a larger sample size might slow down the performance of the matrix and a randomized selection might put more load on the cluster due to the more intensive query. @@ -293,7 +293,7 @@ PUT _ml/data_frame/analytics/weblog-outliers In {{kib}}, you can view the results from the {{dfanalytics}} job and sort them on the outlier score: :::{image} ../../../images/machine-learning-outliers.jpg :alt: View {{oldetection}} results in {{kib}} - :class: screenshot + :screenshot: ::: The `ml.outlier` score is a value between 0 and 1. The larger the value, the more likely they are to be an outlier. In {{kib}}, you can optionally enable histogram charts to get a better understanding of the distribution of values for each column in the result. @@ -342,7 +342,7 @@ GET weblog-outliers/_search?q="111.237.144.54" :::{image} ../../../images/machine-learning-outliers-scatterplot.jpg :alt: View scatterplot in {{oldetection}} results -:class: screenshot +:screenshot: ::: You can highlight an area in one of the charts and the corresponding area is also highlighted in the rest of the charts. This function makes it easier to focus on specific values and areas in the results. In addition to the sample size and random scoring options, there is a **Dynamic size** option. If you enable this option, the size of each point is affected by its {{olscore}}; that is to say, the largest points have the highest {{olscores}}. The goal of these charts and options is to help you visualize and explore the outliers within your data. diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md index 8b98eba0d..b1ee2f8c4 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md @@ -108,19 +108,19 @@ The model that you created is stored as {{es}} documents in internal indices. In 2. Find the model you want to deploy in the list and click **Deploy model** in the **Actions** menu. :::{image} ../../../images/machine-learning-ml-dfa-trained-models-ui.png :alt: The trained models UI in {{kib}} - :class: screenshot + :screenshot: ::: 3. Create an {{infer}} pipeline to be able to use the model against new data through the pipeline. Add a name and a description or use the default values. :::{image} ../../../images/machine-learning-ml-dfa-inference-pipeline.png :alt: Creating an inference pipeline - :class: screenshot + :screenshot: ::: 4. Configure the pipeline processors or use the default settings. :::{image} ../../../images/machine-learning-ml-dfa-inference-processor.png :alt: Configuring an inference processor - :class: screenshot + :screenshot: ::: 5. Configure to handle ingest failures or use the default settings. 
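Steps 3 and 4 above use the deployment wizard to generate an ingest pipeline around an {{infer}} processor. A hand-written equivalent looks roughly like the sketch below; the pipeline name, model ID, and sample document fields are placeholders, and the simulate call is just a convenient way to preview the enriched output before attaching the pipeline to an index.

```console
# A pipeline whose only processor runs the trained model against each document.
PUT _ingest/pipeline/example-inference-pipeline
{
  "description": "Enrich documents with predictions from a trained data frame analytics model",
  "processors": [
    { "inference": { "model_id": "example-trained-model" } }
  ]
}

# Preview the pipeline against a sample document.
POST _ingest/pipeline/example-inference-pipeline/_simulate
{
  "docs": [
    { "_source": { "DistanceKilometers": 800, "FlightTimeMin": 95 } }
  ]
}
```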
@@ -225,7 +225,7 @@ To predict the number of minutes delayed for each flight: You can use the wizard on the **{{ml-app}}** > **Data Frame Analytics** tab in {{kib}} or the [create {{dfanalytics-jobs}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-put-data-frame-analytics) API. :::{image} ../../../images/machine-learning-flights-regression-job-1.jpg :alt: Creating a {{dfanalytics-job}} in {{kib}} - :class: screenshot + :screenshot: ::: 1. Choose `kibana_sample_data_flights` as the source index. 2. Choose `regression` as the job type. @@ -235,7 +235,7 @@ To predict the number of minutes delayed for each flight: The wizard includes a scatterplot matrix, which enables you to explore the relationships between the numeric fields. The color of each point is affected by the value of the {{depvar}} for that document, as shown in the legend. You can highlight an area in one of the charts and the corresponding area is also highlighted in the rest of the chart. You can use this matrix to help you decide which fields to include or exclude from the analysis. :::{image} ../../../images/machine-learning-flightdata-regression-scatterplot.png :alt: A scatterplot matrix for three fields in {{kib}} - :class: screenshot + :screenshot: ::: If you want these charts to represent data from a larger sample size or from a randomized selection of documents, you can change the default behavior. However, a larger sample size might slow down the performance of the matrix and a randomized selection might put more load on the cluster due to the more intensive query. 6. Choose a training percent of `90` which means it randomly selects 90% of the source data for training. @@ -305,7 +305,7 @@ POST _ml/data_frame/analytics/model-flight-delays-regression/_start :::{image} ../../../images/machine-learning-flights-regression-details.jpg :alt: Statistics for a {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: When the job stops, the results are ready to view and evaluate. To learn more about the job phases, see [How {{dfanalytics-jobs}} work](ml-dfa-phases.md). @@ -413,7 +413,7 @@ When you view the results in {{kib}}, it shows the contents of the destination i :::{image} ../../../images/machine-learning-flights-regression-results.jpg :alt: Results for a {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: In this example, the table shows a column for the {{depvar}} (`FlightDelayMin`), which contains the ground truth values that we are trying to predict. It also shows a column for the prediction values (`ml.FlightDelayMin_prediction`) and a column that indicates whether the document was used in the training set (`ml.is_training`). You can filter the table to show only testing or training data and you can select which fields are shown in the table. You can also enable histogram charts to get a better understanding of the distribution of values in your data. @@ -422,14 +422,14 @@ If you chose to calculate {{feat-imp}}, the destination index also contains `ml. 
:::{image} ../../../images/machine-learning-flights-regression-total-importance.jpg :alt: Total {{feat-imp}} values in {{kib}} -:class: screenshot +:screenshot: ::: You can also see the {{feat-imp}} values for each individual prediction in the form of a decision plot: :::{image} ../../../images/machine-learning-flights-regression-importance.png :alt: A decision plot for {{feat-imp}} values in {{kib}} -:class: screenshot +:screenshot: ::: The decision path starts at a baseline, which is the average of the predictions for all the data points in the training data set. From there, the feature importance values are added to the decision path until it arrives at its final prediction. The features with the most significant positive or negative impact appear at the top. Thus in this example, the features related to the flight distance had the most significant influence on this particular predicted flight delay. This type of information can help you to understand how models arrive at their predictions. It can also indicate which aspects of your data set are most influential or least useful when you are training and tuning your model. @@ -535,7 +535,7 @@ Though you can look at individual results and compare the predicted value (`ml.F :::{image} ../../../images/machine-learning-flights-regression-evaluation.jpg :alt: Evaluating {{reganalysis}} results in {{kib}} -:class: screenshot +:screenshot: ::: A mean squared error (MSE) of zero means that the models predicts the {{depvar}} with perfect accuracy. This is the ideal, but is typically not possible. Likewise, an R-squared value of 1 indicates that all of the variance in the {{depvar}} can be explained by the feature variables. Typically, you compare the MSE and R-squared values from multiple {{regression}} models to find the best balance or fit for your data. diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-feature-importance.md b/explore-analyze/machine-learning/data-frame-analytics/ml-feature-importance.md index 3b5658b53..d2cc3e4b1 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-feature-importance.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-feature-importance.md @@ -16,28 +16,28 @@ You can see the average magnitude of the {{feat-imp}} values for each field acro :::{image} ../../../images/machine-learning-flights-regression-total-importance.jpg :alt: Total {{feat-imp}} values for a {{regression}} {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: If the {{classanalysis}} involves more than two classes, {{kib}} uses colors to show how the impact of each field varies by class. For example: :::{image} ../../../images/machine-learning-diamonds-classification-total-importance.png :alt: Total {{feat-imp}} values for a {{classification}} {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: You can also examine the feature importance values for each individual prediction. In {{kib}}, you can see these values in JSON objects or decision plots. For {{reganalysis}}, each decision plot starts at a shared baseline, which is the average of the prediction values for all the data points in the training data set. When you add all of the feature importance values for a particular data point to that baseline, you arrive at the numeric prediction value. If a {{feat-imp}} value is negative, it reduces the prediction value. If a {{feat-imp}} value is positive, it increases the prediction value. 
For example: :::{image} ../../../images/machine-learning-flights-regression-decision-plot.png :alt: Feature importance values for a {{regression}} {{dfanalytics-job}} in {{kib}} -:class: screenshot +:screenshot: ::: For {{classanalysis}}, the sum of the {{feat-imp}} values approximates the predicted logarithm of odds for each data point. The simplest way to understand {{feat-imp}} in the context of {{classanalysis}} is to look at the decision plots in {{kib}}. For each data point, there is a chart which shows the relative impact of each feature on the prediction probability for that class. This information helps you to understand which features reduces or increase the prediction probability. For example: :::{image} ../../../images/machine-learning-flights-classification-decision-plot.png :alt: A decision plot in {{kib}}for a {{classification}} {{dfanalytics-job}} -:class: screenshot +:screenshot: ::: By default, {{feat-imp}} values are not calculated. To generate this information, when you create a {{dfanalytics-job}} you must specify the `num_top_feature_importance_values` property. For example, see [Performing {{reganalysis}} in the sample flight data set](ml-dfa-regression.md#performing-regression) and [Performing {{classanalysis}} in the sample flight data set](ml-dfa-classification.md#performing-classification). diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md b/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md index a5cf64528..94ec566eb 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md @@ -24,21 +24,21 @@ Alternatively, you can use APIs like [get trained models](https://www.elastic.co :::{image} ../../../images/machine-learning-ml-dfa-trained-models-ui.png :alt: The trained models UI in {{kib}} -:class: screenshot +:screenshot: ::: 3. Create an {{infer}} pipeline to be able to use the model against new data through the pipeline. Add a name and a description or use the default values. :::{image} ../../../images/machine-learning-ml-dfa-inference-pipeline.png :alt: Creating an inference pipeline -:class: screenshot +:screenshot: ::: 4. Configure the pipeline processors or use the default settings. :::{image} ../../../images/machine-learning-ml-dfa-inference-processor.png :alt: Configuring an inference processor -:class: screenshot +:screenshot: ::: 5. Configure to handle ingest failures or use the default settings. diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana.md b/explore-analyze/machine-learning/machine-learning-in-kibana.md index 76321b4e0..b0356bc18 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana.md @@ -15,7 +15,7 @@ As data sets increase in size and complexity, the human effort required to inspe :::{image} ../../images/kibana-ml-data-visualizer-sample.png :alt: {{data-viz}} for sample flight data -:class: screenshot +:screenshot: ::: You can upload different file formats for analysis with the **{{data-viz}}**. @@ -53,7 +53,7 @@ You can find the data drift view in **{{ml-app}}** > **{{data-viz}}** in {{kib}} :::{image} ../../images/kibana-ml-data-drift.png :alt: Data drift view in {{kib}} -:class: screenshot +:screenshot: ::: Select a {{data-source}} that you want to analyze, then select a time range for the reference and the comparison data in the appearing histogram chart. 
You can adjust the time range for both the reference and the comparison data by moving the respective brushes. When you finished setting the time ranges, click **Run analysis**. diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md b/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md index e8041f1f4..81c1d1c1c 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana/inference-processing.md @@ -76,7 +76,7 @@ Once your index-specific ML {{infer}} pipeline is ready, you can add {{infer}} p :::{image} ../../../images/elasticsearch-reference-document-enrichment-add-inference-pipeline.png :alt: Add Inference Pipeline -:class: screenshot +:screenshot: ::: Here, you’ll be able to: diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md index a76b5d9af..ea3681614 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md @@ -21,7 +21,7 @@ You can find log rate analysis embedded in multiple applications. In {{kib}}, yo :::{image} ../../../images/kibana-ml-log-rate-analysis-before.png :alt: Log event histogram chart -:class: screenshot +:screenshot: ::: Select a spike or drop in the log event histogram chart to start the analysis. It identifies statistically significant field-value combinations that contribute to the spike or drop and displays them in a table. You can optionally choose to summarize the results into groups. The table also shows an indicator of the level of impact and a sparkline showing the shape of the impact in the chart. Hovering over a row displays the impact on the histogram chart in more detail. You can inspect a field in **Discover**, further investigate in **Log pattern analysis**, or copy the table row information as a query filter to the clipboard by selecting the corresponding option under the **Actions** column. You can also pin a table row by clicking on it then move the cursor to the histogram chart. It displays a tooltip with exact count values for the pinned field which enables closer investigation. @@ -30,7 +30,7 @@ Brushes in the chart show the baseline time range and the deviation in the analy :::{image} ../../../images/kibana-ml-log-rate-analysis.png :alt: Log rate spike explained -:class: screenshot +:screenshot: ::: ## Log pattern analysis [log-pattern-analysis] @@ -41,7 +41,7 @@ You can find log pattern analysis under **{{ml-app}}** > **AIOps Labs** or by us :::{image} ../../../images/kibana-ml-log-pattern-analysis.png :alt: Log pattern analysis UI -:class: screenshot +:screenshot: ::: Select a field for categorization and optionally apply any filters that you want, then start the analysis. The analysis uses the same algorithms as a {{ml}} categorization job. The results of the analysis are shown in a table that makes it possible to open **Discover** and show or filter out the given category there, which helps you to further examine your log messages. 
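Log pattern analysis uses the same categorization algorithms as a {{ml}} categorization job. If you want to experiment with a similar grouping outside the UI, a minimal sketch using the `categorize_text` aggregation might look like the following; the index pattern and the `message` field are assumptions chosen for illustration:

```console
GET filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "message_patterns": {
      "categorize_text": {
        "field": "message"
      }
    }
  }
}
```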
@@ -58,7 +58,7 @@ You can find change point detection under **{{ml-app}}** > **AIOps Labs** or by :::{image} ../../../images/kibana-ml-change-point-detection.png :alt: Change point detection UI -:class: screenshot +:screenshot: ::: Select a function and a metric field, then pick a date range to start detecting change points in the defined range. Optionally, you can split the data by a field. If the cardinality of the split field exceeds 10,000, then only the first 10,000, sorted by document count, are analyzed. You can configure a maximum of 6 combinations of a function applied to a metric field, partitioned by a split field to identify change points. @@ -67,7 +67,7 @@ When a change point is detected, a row displays basic information including the :::{image} ../../../images/kibana-ml-change-point-detection-selected.png :alt: Selected change points -:class: screenshot +:screenshot: ::: You can attach change point charts to a dashboard or a case by using the context menu. If the split field is selected, you can either select specific charts (partitions) or set the maximum number of top change points to plot. It’s possible to preserve the applied time range or use the time bound from the page date picker. You can also add or edit change point charts directly from the **Dashboard** app. diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md index fb1673cc1..240aeed51 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-anomalies.md @@ -15,28 +15,28 @@ If you have a license that includes the {{ml-features}}, you can create {{anomal :::{image} ../../../images/kibana-ml-job-management.png :alt: Job Management -:class: screenshot +:screenshot: ::: You can use the **Settings** pane to create and edit calendars and the filters that are used in custom rules: :::{image} ../../../images/kibana-ml-settings.png :alt: Calendar Management -:class: screenshot +:screenshot: ::: The **Anomaly Explorer** and **Single Metric Viewer** display the results of your {{anomaly-jobs}}. For example: :::{image} ../../../images/kibana-ml-single-metric-viewer.png :alt: Single Metric Viewer -:class: screenshot +:screenshot: ::: You can optionally add annotations by drag-selecting a period of time in the **Single Metric Viewer** and adding a description. For example, you can add an explanation for anomalies in that time period or provide notes about what is occurring in your operational environment at that time: :::{image} ../../../images/kibana-ml-annotations-list.png :alt: Single Metric Viewer with annotations -:class: screenshot +:screenshot: ::: In some circumstances, annotations are also added automatically. For example, if the {{anomaly-job}} detects that there is missing data, it annotates the affected time period. For more information, see [Handling delayed data](../anomaly-detection/ml-delayed-data-detection.md). The **Job Management** pane shows the full list of annotations for each job. 
diff --git a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-dfanalytics.md b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-dfanalytics.md index ee2d2729c..99c71abda 100644 --- a/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-dfanalytics.md +++ b/explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-dfanalytics.md @@ -14,7 +14,7 @@ If you have a license that includes the {{ml-features}}, you can create {{dfanal :::{image} ../../../images/kibana-classification.png :alt: {{classification-cap}} results in {{kib}} -:class: screenshot +:screenshot: ::: For more information about the {{dfanalytics}} feature, see [{{ml-cap}} {{dfanalytics}}](../data-frame-analytics.md). diff --git a/explore-analyze/machine-learning/nlp.md b/explore-analyze/machine-learning/nlp.md index db8ef0978..ac11d7081 100644 --- a/explore-analyze/machine-learning/nlp.md +++ b/explore-analyze/machine-learning/nlp.md @@ -12,7 +12,6 @@ You can use {{stack-ml-features}} to analyze natural language data and make pred * [Overview](nlp/ml-nlp-overview.md) * [Deploy trained models](nlp/ml-nlp-deploy-models.md) -* [Trained model autoscaling](nlp/ml-nlp-auto-scale.md) * [Add NLP {{infer}} to ingest pipelines](nlp/ml-nlp-inference.md) * [API quick reference](nlp/ml-nlp-apis.md) * [ELSER](nlp/ml-nlp-elser.md) diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md b/explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md deleted file mode 100644 index ea90091f4..000000000 --- a/explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -applies_to: - stack: ga - serverless: ga -mapped_pages: - - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-auto-scale.html ---- - -# Trained model autoscaling [ml-nlp-auto-scale] - -You can enable autoscaling for each of your trained model deployments. Autoscaling allows {{es}} to automatically adjust the resources the model deployment can use based on the workload demand. - -There are two ways to enable autoscaling: - -* through APIs by enabling adaptive allocations -* in {{kib}} by enabling adaptive resources - -::::{important} -To fully leverage model autoscaling, it is highly recommended to enable [{{es}} deployment autoscaling](../../../deploy-manage/autoscaling.md). -:::: - -## Enabling autoscaling through APIs - adaptive allocations [nlp-model-adaptive-allocations] - -Model allocations are independent units of work for NLP tasks. If you set the numbers of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.) - -When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitely set the minimum and maximum number of allocations; autoscaling will occur within these limits. 
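For illustration, a minimal sketch of enabling adaptive allocations on an existing trained model deployment through the update trained model deployment API might look like the following; the model ID and the minimum and maximum allocation values are assumptions chosen for the example:

```console
POST _ml/trained_models/.elser_model_2/deployment/_update
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
```

With a request like this, the number of allocations stays between 1 and 4 and is adjusted automatically as the load on the deployment changes.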
- -You can enable adaptive allocations by using: - -* the create inference endpoint API for [ELSER](../../elastic-inference/inference-api/elser-inference-integration.md), [E5 and models uploaded through Eland](../../elastic-inference/inference-api/elasticsearch-inference-integration.md) that are used as {{infer}} services. -* the [start trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-trained-model-deployment) or [update trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-update-trained-model-deployment) APIs for trained models that are deployed on {{ml}} nodes. - -If the new allocations fit on the current {{ml}} nodes, they are immediately started. If more resource capacity is needed for creating new model allocations, then your {{ml}} node will be scaled up if {{ml}} autoscaling is enabled to provide enough resources for the new allocation. The number of model allocations can be scaled down to 0. They cannot be scaled up to more than 32 allocations, unless you explicitly set the maximum number of allocations to more. Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put). - -### Optimizing for typical use cases [optimize-use-case] - -You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes. - -* If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`). -* If you want to optimize for search, set the number of threads to greater than `1`. Increasing the number of threads will make the search processes more performant. - -## Enabling autoscaling in {{kib}} - adaptive resources [nlp-model-adaptive-resources] - -You can enable adaptive resources for your models when starting or updating the model deployment. Adaptive resources make it possible for {{es}} to scale up or down the available resources based on the load on the process. This can help you to manage performance and cost more easily. When adaptive resources are enabled, the number of vCPUs that the model deployment uses is set automatically based on the current load. When the load is high, the number of vCPUs that the process can use is automatically increased. When the load is low, the number of vCPUs that the process can use is automatically decreased. - -You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level’s range. - -Refer to the tables in the [Model deployment resource matrix](#auto-scaling-matrix) section to find out the setings for the level you selected. - -:::{image} ../../../images/machine-learning-ml-nlp-deployment-id-elser-v2.png -:alt: ELSER deployment with adaptive resources enabled. 
-:class: screenshot -::: - -## Model deployment resource matrix [auto-scaling-matrix] - -The used resources for trained model deployments depend on three factors: - -* your cluster environment (Serverless, Cloud, or on-premises) -* the use case you optimize the model deployment for (ingest or search) -* whether model autoscaling is enabled with adaptive allocations/resources to have dynamic resources, or disabled for static resources - -If you use {{es}} on-premises, vCPUs level ranges are derived from the `total_ml_processors` and `max_single_ml_node_processors` values. Use the [get {{ml}} info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-info) to check these values. The following tables show you the number of allocations, threads, and vCPUs available in Cloud when adaptive resources are enabled or disabled. - -::::{note} -On Serverless, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects. -:::: - -### Deployments in Cloud optimized for ingest [_deployments_in_cloud_optimized_for_ingest] - -In case of ingest-optimized deployments, we maximize the number of model allocations. - -#### Adaptive resources enabled [_adaptive_resources_enabled] - -| Level | Allocations | Threads | vCPUs | -| --- | --- | --- | --- | -| Low | 0 to 2 if available, dynamically | 1 | 0 to 2 if available, dynamically | -| Medium | 1 to 32 dynamically | 1 | 1 to the smaller of 32 or the limit set in the Cloud console, dynamically | -| High | 1 to limit set in the Cloud console *, dynamically | 1 | 1 to limit set in the Cloud console, dynamically | - -* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. - -#### Adaptive resources disabled [_adaptive_resources_disabled] - -| Level | Allocations | Threads | vCPUs | -| --- | --- | --- | --- | -| Low | 2 if available, otherwise 1, statically | 1 | 2 if available | -| Medium | the smaller of 32 or the limit set in the Cloud console, statically | 1 | 32 if available | -| High | Maximum available set in the Cloud console *, statically | 1 | Maximum available set in the Cloud console, statically | - -* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. - -### Deployments in Cloud optimized for search [_deployments_in_cloud_optimized_for_search] - -In case of search-optimized deployments, we maximize the number of threads. The maximum number of threads that can be claimed depends on the hardware your architecture has. - -#### Adaptive resources enabled [_adaptive_resources_enabled_2] - -| Level | Allocations | Threads | vCPUs | -| --- | --- | --- | --- | -| Low | 1 | 2 | 2 | -| Medium | 1 to 2 (if threads=16) dynamically | maximum that the hardware allows (for example, 16) | 1 to 32 dynamically | -| High | 1 to limit set in the Cloud console *, dynamically | maximum that the hardware allows (for example, 16) | 1 to limit set in the Cloud console, dynamically | - -* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads. 
-
-#### Adaptive resources disabled [_adaptive_resources_disabled_2]
-
-| Level | Allocations | Threads | vCPUs |
-| --- | --- | --- | --- |
-| Low | 1 if available, statically | 2 | 2 if available |
-| Medium | 2 (if threads=16) statically | maximum that the hardware allows (for example, 16) | 32 if available |
-| High | Maximum available set in the Cloud console *, statically | maximum that the hardware allows (for example, 16) | Maximum available set in the Cloud console, statically |
-
-\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
index c3736bb54..da256d8f4 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
@@ -16,7 +16,7 @@ You can optimize your deplyoment for typical use cases, such as search and inges
 :::{image} ../../../images/machine-learning-ml-nlp-deployment-id-elser-v2.png
 :alt: Model deployment on the Trained Models UI.
-:class: screenshot
+:screenshot:
 :::
 
 Each deployment will be fine-tuned automatically based on its specific purpose you choose.
@@ -25,13 +25,13 @@ Each deployment will be fine-tuned automatically based on its specific purpose y
 Since eland uses APIs to deploy the models, you cannot see the models in {{kib}} until the saved objects are synchronized. You can follow the prompts in {{kib}}, wait for automatic synchronization, or use the [sync {{ml}} saved objects API](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-ml).
 ::::
 
-You can define the resource usage level of the NLP model during model deployment. The resource usage levels behave differently depending on [adaptive resources](ml-nlp-auto-scale.md#nlp-model-adaptive-resources) being enabled or disabled. When adaptive resources are disabled but {{ml}} autoscaling is enabled, vCPU usage of Cloud deployments derived from the Cloud console and functions as follows:
+You can define the resource usage level of the NLP model during model deployment. The resource usage levels behave differently depending on [adaptive resources](../../../deploy-manage/autoscaling/trained-model-autoscaling.md#enabling-autoscaling-through-apis-adaptive-allocations) being enabled or disabled. When adaptive resources are disabled but {{ml}} autoscaling is enabled, the vCPU usage of Cloud deployments is derived from the Cloud console and functions as follows:
 
 * Low: This level limits resources to two vCPUs, which may be suitable for development, testing, and demos depending on your parameters. It is not recommended for production use
 * Medium: This level limits resources to 32 vCPUs, which may be suitable for development, testing, and demos depending on your parameters. It is not recommended for production use.
 * High: This level may use the maximum number of vCPUs available for this deployment from the Cloud console. If the maximum is 2 vCPUs or fewer, this level is equivalent to the medium or low level.
 
-For the resource levels when adaptive resources are enabled, refer to <[*Trained model autoscaling*](ml-nlp-auto-scale.md).
+For the resource levels when adaptive resources are enabled, refer to [*Trained model autoscaling*](../../../deploy-manage/autoscaling/trained-model-autoscaling.md).
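If you work with the APIs instead of the {{kib}} resource levels, the static equivalent is to set the allocation and thread counts explicitly when you start the deployment. The following is a minimal sketch; the model ID, deployment ID, and resource values are assumptions chosen for illustration:

```console
POST _ml/trained_models/.multilingual-e5-small/deployment/_start?deployment_id=e5-ingest&number_of_allocations=2&threads_per_allocation=1
```

Setting `threads_per_allocation` to `1` favors ingest throughput, while a higher value favors search latency.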
## Request queues and search priority [infer-request-queues] diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-e5.md b/explore-analyze/machine-learning/nlp/ml-nlp-e5.md index 98a4d28d7..87e9ba3b7 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-e5.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-e5.md @@ -21,7 +21,7 @@ Refer to the model cards of the [multilingual-e5-small](https://huggingface.co/e To use E5, you must have the [appropriate subscription](https://www.elastic.co/subscriptions) level for semantic search or the trial period activated. -Enabling trained model autoscaling for your E5 deployment is recommended. Refer to [*Trained model autoscaling*](ml-nlp-auto-scale.md) to learn more. +Enabling trained model autoscaling for your E5 deployment is recommended. Refer to [*Trained model autoscaling*](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) to learn more. ## Download and deploy E5 [download-deploy-e5] @@ -65,7 +65,7 @@ For most cases, the preferred version is the **Intel and Linux optimized** model :::{image} ../../../images/machine-learning-ml-nlp-e5-download.png :alt: Downloading E5 - :class: screenshot + :screenshot: ::: Alternatively, click the **Download model** button under **Actions** in the trained model list. @@ -75,7 +75,7 @@ For most cases, the preferred version is the **Intel and Linux optimized** model :::{image} ../../../images/machine-learning-ml-nlp-deployment-id-e5.png :alt: Deploying E5 - :class: screenshot + :screenshot: ::: 5. Click Start. @@ -95,14 +95,14 @@ Alternatively, you can download and deploy the E5 model to an {{infer}} pipeline :::{image} ../../../images/machine-learning-ml-nlp-deploy-e5-es.png :alt: Deploying E5 in Elasticsearch - :class: screenshot + :screenshot: ::: 5. Once the model is downloaded, click the **Start single-threaded** button to start the model with basic configuration or select the **Fine-tune performance** option to navigate to the **Trained Models** page where you can configure the model deployment. :::{image} ../../../images/machine-learning-ml-nlp-start-e5-es.png :alt: Start E5 in Elasticsearch - :class: screenshot + :screenshot: ::: When your E5 model is deployed and started, it is ready to be used in a pipeline. diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md index 08874b9bd..3fde5ba19 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md @@ -33,7 +33,7 @@ To use ELSER, you must have the [appropriate subscription](https://www.elastic.c The minimum dedicated ML node size for deploying and using the ELSER model is 4 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself. :::: -Enabling trained model autoscaling for your ELSER deployment is recommended. Refer to [*Trained model autoscaling*](ml-nlp-auto-scale.md) to learn more. +Enabling trained model autoscaling for your ELSER deployment is recommended. Refer to [*Trained model autoscaling*](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) to learn more. 
## ELSER v2 [elser-v2] @@ -72,7 +72,7 @@ PUT _inference/sparse_embedding/my-elser-model } ``` -The API request automatically initiates the model download and then deploy the model. This example uses [autoscaling](ml-nlp-auto-scale.md) through adaptive allocation. +The API request automatically initiates the model download and then deploy the model. This example uses [autoscaling](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) through adaptive allocation. Refer to the [ELSER {{infer}} integration documentation](../../elastic-inference/inference-api/elser-inference-integration.md) to learn more about the available settings. @@ -97,7 +97,7 @@ You can also download and deploy ELSER either from **{{ml-app}}** > **Trained Mo :::{image} ../../../images/machine-learning-ml-nlp-elser-v2-download.png :alt: Downloading ELSER - :class: screenshot + :screenshot: ::: Alternatively, click the **Download model** button under **Actions** in the trained model list. @@ -107,7 +107,7 @@ You can also download and deploy ELSER either from **{{ml-app}}** > **Trained Mo :::{image} ../../../images/machine-learning-ml-nlp-deployment-id-elser-v2.png :alt: Deploying ELSER - :class: screenshot + :screenshot: ::: 5. Click **Start**. @@ -127,14 +127,14 @@ Alternatively, you can download and deploy ELSER to an {{infer}} pipeline using :::{image} ../../../images/machine-learning-ml-nlp-deploy-elser-v2-es.png :alt: Deploying ELSER in Elasticsearch - :class: screenshot + :screenshot: ::: 5. Once the model is downloaded, click the **Start single-threaded** button to start the model with basic configuration or select the **Fine-tune performance** option to navigate to the **Trained Models** page where you can configure the model deployment. :::{image} ../../../images/machine-learning-ml-nlp-start-elser-v2-es.png :alt: Start ELSER in Elasticsearch - :class: screenshot + :screenshot: ::: :::: @@ -271,7 +271,7 @@ The results contain a list of ten random values for the selected field along wit :::{image} ../../../images/machine-learning-ml-nlp-elser-v2-test.png :alt: Testing ELSER -:class: screenshot +:screenshot: ::: ## Performance considerations [performance] @@ -292,7 +292,7 @@ To gain the biggest value out of ELSER trained models, consider to follow this l * If quick response time is important for your use case, keep {{ml}} resources available at all times by setting `min_allocations` to `1`. * Setting `min_allocations` to `0` can save on costs for non-critical use cases or testing environments. -* Enabling [autoscaling](ml-nlp-auto-scale.md) through adaptive allocations or adaptive resources makes it possible for {{es}} to scale up or down the available resources of your ELSER deployment based on the load on the process. +* Enabling [autoscaling](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) through adaptive allocations or adaptive resources makes it possible for {{es}} to scale up or down the available resources of your ELSER deployment based on the load on the process. * Use dedicated, optimized ELSER {{infer}} endpoints for ingest and search use cases. * When deploying a trained model in {{kib}}, you can select for which case you want to optimize your ELSER deployment. * If you use the trained model or {{infer}} APIs and want to optimize your ELSER trained model deployment or {{infer}} endpoint for ingest, set the number of threads to `1` (`"num_threads": 1`). 
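As a sketch of the recommendation to use dedicated, optimized {{infer}} endpoints for ingest and search, you could create one endpoint tuned for each use case; the endpoint names, thread counts, and allocation limits below are assumptions chosen for illustration:

```console
PUT _inference/sparse_embedding/elser-ingest
{
  "service": "elser",
  "service_settings": {
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 8
    }
  }
}

PUT _inference/sparse_embedding/elser-search
{
  "service": "elser",
  "service_settings": {
    "num_threads": 2,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 4
    }
  }
}
```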
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md index 3f76b8d5b..a4a325318 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-inference.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-inference.md @@ -21,7 +21,7 @@ In {{kib}}, you can create and edit pipelines in **{{stack-manage-app}}** > **In :::{image} ../../../images/machine-learning-ml-nlp-pipeline-lang.png :alt: Creating a pipeline in the Stack Management app -:class: screenshot +:screenshot: ::: 1. Click **Create pipeline** or edit an existing pipeline. @@ -173,7 +173,7 @@ Before you can verify the results of the pipelines, you must [create {{data-sour :::{image} ../../../images/machine-learning-ml-nlp-discover-ner.png :alt: A document from the NER pipeline in the Discover app -:class: screenshot +:screenshot: ::: The `ml.inference.predicted_value` field contains the output from the {{infer}} processor. In this NER example, there are two documents that contain the `Elastic` organization entity. @@ -182,7 +182,7 @@ In this {{lang-ident}} example, the `ml.inference.predicted_value` contains the :::{image} ../../../images/machine-learning-ml-nlp-discover-lang.png :alt: A document from the {{lang-ident}} pipeline in the Discover app -:class: screenshot +:screenshot: ::: To learn more about ingest pipelines and all of the other processors that you can add, refer to [Ingest pipelines](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md). diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index af867517b..d3adea9da 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -55,7 +55,7 @@ Deployed models can be evaluated in {{kib}} under **{{ml-app}}** > **Trained Mod :::{image} ../../../images/machine-learning-ml-nlp-ner-test.png :alt: Test trained model UI -:class: screenshot +:screenshot: ::: ::::{dropdown} **Test the model by using the _infer API** @@ -250,5 +250,5 @@ Update and save the visualization. :::{image} ../../../images/machine-learning-ml-nlp-tag-cloud.png :alt: Tag cloud created from Les Misérables -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-rerank.md b/explore-analyze/machine-learning/nlp/ml-nlp-rerank.md index c2e51af5a..70af12766 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-rerank.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-rerank.md @@ -73,7 +73,7 @@ PUT _inference/rerank/my-rerank-model ``` ::::{note} -The API request automatically downloads and deploys the model. This example uses [autoscaling](ml-nlp-auto-scale.md) through adaptive allocation. +The API request automatically downloads and deploys the model. This example uses [autoscaling](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) through adaptive allocation. 
:::: ::::{note} diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-test-inference.md b/explore-analyze/machine-learning/nlp/ml-nlp-test-inference.md index f005326cb..7a5d4daf7 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-test-inference.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-test-inference.md @@ -14,7 +14,7 @@ The simplest method to test your model against new data is to use the **Test mod :::{image} ../../../images/machine-learning-ml-nlp-test-ner.png :alt: Testing a sentence with two named entities against a NER trained model in the *{{ml}}* app -:class: screenshot +:screenshot: ::: Alternatively, you can use the [infer trained model API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-infer-trained-model). For example, to try a named entity recognition task, provide some sample text: diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md index 5e897abb9..ddd366765 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md @@ -59,7 +59,7 @@ Deployed models can be evaluated in {{kib}} under **{{ml-app}}** > **Trained Mod :::{image} ../../../images/machine-learning-ml-nlp-text-emb-test.png :alt: Test trained model UI -:class: screenshot +:screenshot: ::: ::::{dropdown} **Test the model by using the _infer API** @@ -107,7 +107,7 @@ Upload the file by using the [Data Visualizer](../../../manage-data/ingest/uploa :::{image} ../../../images/machine-learning-ml-nlp-text-emb-data.png :alt: Importing the data -:class: screenshot +:screenshot: ::: ## Add the text embedding model to an {{infer}} ingest pipeline [ex-text-emb-ingest] @@ -199,7 +199,7 @@ You can also open the model stat UI to follow the progress. :::{image} ../../../images/machine-learning-ml-nlp-text-emb-reindex.png :alt: Model status UI -:class: screenshot +:screenshot: ::: After the reindexing is finished, the documents in the new index contain the {{infer}} results – the vector embeddings. diff --git a/explore-analyze/machine-learning/setting-up-machine-learning.md b/explore-analyze/machine-learning/setting-up-machine-learning.md index 62819a592..2d607dd70 100644 --- a/explore-analyze/machine-learning/setting-up-machine-learning.md +++ b/explore-analyze/machine-learning/setting-up-machine-learning.md @@ -66,11 +66,11 @@ Granting `All` or `Read` {{kib}} feature privilege for {{ml-app}} will also gran #### Feature visibility in Spaces [kib-visibility-spaces] -In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md#spaces-control-feature-visibility). To manage which features are visible in your space, go to **{{stack-manage-app}}** > **{{kib}}** > **Spaces** or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate **Spaces** directly. +In {{kib}}, the {{ml-features}} must be visible in your [space](../../deploy-manage/manage-spaces.md). To manage which features are visible in your space, go to **{{stack-manage-app}}** > **{{kib}}** > **Spaces** or use the [global search field](../find-and-organize/find-apps-and-objects.md) to locate **Spaces** directly. 
:::{image} ../../images/machine-learning-spaces.jpg :alt: Manage spaces in {{kib}} -:class: screenshot +:screenshot: ::: In addition to index privileges, source {{data-sources}} must also exist in the same space as your {{ml}} jobs. You can configure these under **{{data-sources-caps}}**. To open **{{data-sources-caps}}**, find **{{stack-manage-app}}** > **{{kib}}** in the main menu, or use the [global search field](../find-and-organize/find-apps-and-objects.md). @@ -79,7 +79,7 @@ Each {{ml}} job and trained model can be assigned to all, one, or multiple space :::{image} ../../images/machine-learning-assign-job-spaces.jpg :alt: Assign machine learning jobs to spaces -:class: screenshot +:screenshot: ::: #### {{kib}} user [kib-security-privileges] diff --git a/explore-analyze/query-filter/tools/console.md b/explore-analyze/query-filter/tools/console.md index 143168877..6d8aab964 100644 --- a/explore-analyze/query-filter/tools/console.md +++ b/explore-analyze/query-filter/tools/console.md @@ -30,7 +30,7 @@ $$$import-export-console-requests$$$ :::{image} ../../../images/kibana-console.png :alt: Console -:class: screenshot +:screenshot: ::: To go to **Console**, find **Dev Tools** in the navigation menu or use the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md). @@ -39,7 +39,7 @@ You can also find Console directly on certain Search solution and Elasticsearch :::{image} ../../../images/kibana-persistent-console.png :alt: Console -:class: screenshot +:screenshot: ::: @@ -117,7 +117,7 @@ Click **Variables** to create, edit, and delete variables. :::{image} ../../../images/kibana-variables.png :alt: Variables -:class: screenshot +:screenshot: ::: You can refer to these variables in the paths and bodies of your requests. Each variable can be referenced multiple times. diff --git a/explore-analyze/query-filter/tools/grok-debugger.md b/explore-analyze/query-filter/tools/grok-debugger.md index e15fbe9d1..d44dcd56b 100644 --- a/explore-analyze/query-filter/tools/grok-debugger.md +++ b/explore-analyze/query-filter/tools/grok-debugger.md @@ -47,7 +47,7 @@ If you’re using {{stack-security-features}}, you must have the `manage_pipelin :::{image} ../../../images/kibana-grok-debugger-overview.png :alt: Grok Debugger - :class: screenshot + :screenshot: ::: @@ -89,7 +89,7 @@ Follow this example to define a custom pattern. :::{image} ../../../images/kibana-grok-debugger-custom-pattern.png :alt: Debugging a custom pattern - :class: screenshot + :screenshot: ::: If an error occurs, you can continue iterating over the custom pattern until the output matches the event that you expect. diff --git a/explore-analyze/query-filter/tools/search-profiler.md b/explore-analyze/query-filter/tools/search-profiler.md index d43829ae3..09cda9634 100644 --- a/explore-analyze/query-filter/tools/search-profiler.md +++ b/explore-analyze/query-filter/tools/search-profiler.md @@ -23,7 +23,7 @@ The following example shows the results of profiling the `match_all` query. 
If y :::{image} ../../../images/kibana-overview.png :alt: {{searchprofiler}} visualization -:class: screenshot +:screenshot: ::: ::::{note} @@ -47,7 +47,7 @@ In the following example, the query is executed against the indices `.security-7 :::{image} ../../../images/kibana-filter.png :alt: Filtering by index and type -:class: screenshot +:screenshot: ::: @@ -109,7 +109,7 @@ To understand how the query trees are displayed inside the **{{searchprofiler}}* :::{image} ../../../images/kibana-gs8.png :alt: Profiling the more complicated query - :class: screenshot + :screenshot: ::: * The top `BooleanQuery` component corresponds to the bool in the query. @@ -126,7 +126,7 @@ To understand how the query trees are displayed inside the **{{searchprofiler}}* :::{image} ../../../images/kibana-gs10.png :alt: Drilling into the first shard's details - :class: screenshot + :screenshot: ::: For more information about how the **{{searchprofiler}}** works, how timings are calculated, and how to interpret various results, see [Profiling queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-profile.html#profiling-queries). @@ -285,6 +285,6 @@ Your output should look similar to this: :::{image} ../../../images/kibana-search-profiler-json.png :alt: Rendering pre-captured profiler JSON -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/report-and-share.md b/explore-analyze/report-and-share.md index 20945e0b3..cfdf94c0d 100644 --- a/explore-analyze/report-and-share.md +++ b/explore-analyze/report-and-share.md @@ -87,7 +87,7 @@ In the following dashboard, the shareable container is highlighted: :::{image} ../images/kibana-shareable-container.png :alt: Shareable Container -:class: screenshot +:screenshot: ::: 1. Open the saved Discover session, dashboard, visualization, or workpad you want to share. 
diff --git a/explore-analyze/report-and-share/reporting-troubleshooting-csv.md b/explore-analyze/report-and-share/reporting-troubleshooting-csv.md index 3735744c6..45afdef44 100644 --- a/explore-analyze/report-and-share/reporting-troubleshooting-csv.md +++ b/explore-analyze/report-and-share/reporting-troubleshooting-csv.md @@ -80,7 +80,7 @@ The listing of reports in **Stack Management > Reporting** allows you to inspect :::{image} ../../images/inspect-query-from-csv-export.gif :alt: Inspect the query used for CSV export -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/scripting/painless-lab.md b/explore-analyze/scripting/painless-lab.md index 599fde234..6b608bef9 100644 --- a/explore-analyze/scripting/painless-lab.md +++ b/explore-analyze/scripting/painless-lab.md @@ -18,5 +18,5 @@ Find **Painless Lab** by navigating to the **Developer tools** page using the na :::{image} ../../images/kibana-painless-lab.png :alt: Painless Lab -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/toc.yml b/explore-analyze/toc.yml index 038d3480d..41abc68da 100644 --- a/explore-analyze/toc.yml +++ b/explore-analyze/toc.yml @@ -211,7 +211,6 @@ toc: - file: machine-learning/nlp/ml-nlp-import-model.md - file: machine-learning/nlp/ml-nlp-deploy-model.md - file: machine-learning/nlp/ml-nlp-test-inference.md - - file: machine-learning/nlp/ml-nlp-auto-scale.md - file: machine-learning/nlp/ml-nlp-inference.md - file: machine-learning/nlp/ml-nlp-apis.md - file: machine-learning/nlp/ml-nlp-built-in-models.md diff --git a/explore-analyze/transforms/ecommerce-transforms.md b/explore-analyze/transforms/ecommerce-transforms.md index fb82e8af3..ee0d6e5de 100644 --- a/explore-analyze/transforms/ecommerce-transforms.md +++ b/explore-analyze/transforms/ecommerce-transforms.md @@ -24,14 +24,14 @@ mapped_pages: Go to **Management** > **Stack Management** > **Data** > **Transforms** in {{kib}} and use the wizard to create a {{transform}}: :::{image} ../../images/elasticsearch-reference-ecommerce-pivot1.png :alt: Creating a simple {{transform}} in {{kib}} - :class: screenshot + :screenshot: ::: Group the data by customer ID and add one or more aggregations to learn more about each customer’s orders. For example, let’s calculate the sum of products they purchased, the total price of their purchases, the maximum number of products that they purchased in a single order, and their total number of orders. We’ll accomplish this by using the [`sum` aggregation](elasticsearch://reference/data-analysis/aggregations/search-aggregations-metrics-sum-aggregation.md) on the `total_quantity` and `taxless_total_price` fields, the [`max` aggregation](elasticsearch://reference/data-analysis/aggregations/search-aggregations-metrics-max-aggregation.md) on the `total_quantity` field, and the [`cardinality` aggregation](elasticsearch://reference/data-analysis/aggregations/search-aggregations-metrics-cardinality-aggregation.md) on the `order_id` field: :::{image} ../../images/elasticsearch-reference-ecommerce-pivot2.png :alt: Adding multiple aggregations to a {{transform}} in {{kib}} - :class: screenshot + :screenshot: ::: ::::{tip} @@ -96,13 +96,13 @@ mapped_pages: 3. Optionally, you can configure a retention policy that applies to your {{transform}}. Select a date field that is used to identify old documents in the destination index and provide a maximum age. Documents that are older than the configured value are removed from the destination index. 
:::{image} ../../images/elasticsearch-reference-ecommerce-pivot3.png :alt: Adding transfrom ID and retention policy to a {{transform}} in {{kib}} - :class: screenshot + :screenshot: ::: In {{kib}}, before you finish creating the {{transform}}, you can copy the preview {{transform}} API request to your clipboard. This information is useful later when you’re deciding whether you want to manually create the destination index. :::{image} ../../images/elasticsearch-reference-ecommerce-pivot4.png :alt: Copy the Dev Console statement of the transform preview to the clipboard - :class: screenshot + :screenshot: ::: If you prefer, you can use the [create {{transforms}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform). @@ -296,7 +296,7 @@ mapped_pages: You can start, stop, reset, and manage {{transforms}} in {{kib}}: :::{image} ../../images/elasticsearch-reference-manage-transforms.png :alt: Managing {{transforms}} in {{kib}} - :class: screenshot + :screenshot: ::: Alternatively, you can use the [start {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-start-transform), [stop {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-stop-transform) and [reset {{transforms}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-reset-transform) APIs. @@ -318,14 +318,14 @@ mapped_pages: For example, use the **Discover** application in {{kib}}: :::{image} ../../images/elasticsearch-reference-ecommerce-results.png :alt: Exploring the new index in {{kib}} - :class: screenshot + :screenshot: ::: 8. Optional: Create another {{transform}}, this time using the `latest` method. This method populates the destination index with the latest documents for each unique key value. For example, you might want to find the latest orders (sorted by the `order_date` field) for each customer or for each country and region. :::{image} ../../images/elasticsearch-reference-ecommerce-latest1.png :alt: Creating a latest {{transform}} in {{kib}} - :class: screenshot + :screenshot: ::: ::::{dropdown} API example diff --git a/explore-analyze/transforms/transform-alerts.md b/explore-analyze/transforms/transform-alerts.md index 8bad52c2e..75ce4ff7c 100644 --- a/explore-analyze/transforms/transform-alerts.md +++ b/explore-analyze/transforms/transform-alerts.md @@ -19,7 +19,7 @@ You can create {{transform}} rules under **{{stack-manage-app}} > {{rules-ui}}** 3. Select the {{transform}} or {{transforms}} to include. You can also use a special character (`*`) to apply the rule to all your {{transforms}}. {{transforms-cap}} created after the rule are automatically included. :::{image} ../../images/elasticsearch-reference-transform-check-config.png :alt: Selecting health check - :class: screenshot + :screenshot: ::: 4. The following health checks are available and enabled by default: @@ -49,7 +49,7 @@ After you select a connector, you must set the action frequency. 
You can choose :::{image} ../../images/elasticsearch-reference-transform-alert-summary-actions.png :alt: Setting action frequency to summary of alerts -:class: screenshot +:screenshot: ::: ::::{tip} @@ -64,7 +64,7 @@ There is a set of variables that you can use to customize the notification messa :::{image} ../../images/elasticsearch-reference-transform-alert-actions.png :alt: Selecting action variables -:class: screenshot +:screenshot: ::: After you save the configurations, the rule appears in the **{{rules-ui}}** list where you can check its status and see the overview of its configuration information. diff --git a/explore-analyze/transforms/transform-examples.md b/explore-analyze/transforms/transform-examples.md index 5feffc7ba..959428e21 100644 --- a/explore-analyze/transforms/transform-examples.md +++ b/explore-analyze/transforms/transform-examples.md @@ -24,7 +24,7 @@ This example uses the eCommerce orders sample data set to find the customers who :::{image} ../../images/elasticsearch-reference-transform-ex1-1.jpg :alt: Finding your best customers with {{transforms}} in {{kib}} -:class: screenshot +:screenshot: ::: Alternatively, you can use the [preview {{transform}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-preview-transform) and the [create {{transform}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-put-transform). @@ -292,14 +292,14 @@ Pick the `clientip` field as the unique key; the data is grouped by this field. :::{image} ../../images/elasticsearch-reference-transform-ex4-1.jpg :alt: Finding the last log event for each IP address with {{transforms}} in {{kib}} -:class: screenshot +:screenshot: ::: Let’s assume that we’re interested in retaining documents only for IP addresses that appeared recently in the log. You can define a retention policy and specify a date field that is used to calculate the age of a document. This example uses the same date field that is used to sort the data. Then set the maximum age of a document; documents that are older than the value you set will be removed from the destination index. :::{image} ../../images/elasticsearch-reference-transform-ex4-2.jpg :alt: Defining retention policy for {{transforms}} in {{kib}} -:class: screenshot +:screenshot: ::: This {{transform}} creates the destination index that contains the latest login date for each client IP. As the {{transform}} runs in continuous mode, the destination index will be updated as new data that comes into the source index. Finally, every document that is older than 30 days will be removed from the destination index due to the applied retention policy. 
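A minimal sketch of creating this last example with the create {{transform}} API could look like the following; the `clientip` unique key and the 30 day retention mirror the example above, while the source index, date field, {{transform}} ID, and destination index name are assumptions chosen for illustration:

```console
PUT _transform/last-log-by-clientip
{
  "source": {
    "index": "kibana_sample_data_logs"
  },
  "dest": {
    "index": "last-log-by-clientip"
  },
  "latest": {
    "unique_key": ["clientip"],
    "sort": "timestamp"
  },
  "retention_policy": {
    "time": {
      "field": "timestamp",
      "max_age": "30d"
    }
  },
  "sync": {
    "time": {
      "field": "timestamp"
    }
  }
}
```

The `sync` block keeps the {{transform}} running in continuous mode, and the `retention_policy` removes documents older than 30 days from the destination index.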
diff --git a/explore-analyze/transforms/transform-overview.md b/explore-analyze/transforms/transform-overview.md index 3bc796333..78d15a5a9 100644 --- a/explore-analyze/transforms/transform-overview.md +++ b/explore-analyze/transforms/transform-overview.md @@ -42,7 +42,7 @@ If you want to check the sales in the different categories in your last fiscal y :::{image} ../../images/elasticsearch-reference-pivot-preview.png :alt: Example of a pivot {{transform}} preview in {{kib}} -:class: screenshot +:screenshot: ::: ## Latest {{transforms}} [latest-transform-overview] @@ -51,7 +51,7 @@ You can use the `latest` type of {{transform}} to copy the most recent documents :::{image} ../../images/elasticsearch-reference-latest-preview.png :alt: Example of a latest {{transform}} preview in {{kib}} -:class: screenshot +:screenshot: ::: As in the case of a pivot, a latest {{transform}} can run once or continuously. It performs a composite aggregation on the data in the source index and stores the output in the destination index. If the {{transform}} runs continuously, new unique key values are automatically added to the destination index and the most recent documents for existing key values are automatically updated at each checkpoint. diff --git a/explore-analyze/visualize/canvas/canvas-tutorial.md b/explore-analyze/visualize/canvas/canvas-tutorial.md index a1a302184..27377540d 100644 --- a/explore-analyze/visualize/canvas/canvas-tutorial.md +++ b/explore-analyze/visualize/canvas/canvas-tutorial.md @@ -35,7 +35,7 @@ To customize your workpad to look the way you want, add your own images. :::{image} ../../../images/kibana-canvas_tutorialCustomImage_7.17.0.png :alt: The Analytics logo added to the workpad - :class: screenshot + :screenshot: ::: @@ -78,7 +78,7 @@ Customize your data by connecting it to the Sample eCommerce orders data. :::{image} ../../../images/kibana-canvas_tutorialCustomMetric_7.17.0.png :alt: The total sales metric added to the workpad using Elasticsearch SQL -:class: screenshot +:screenshot: ::: @@ -106,7 +106,7 @@ To show what your data can do, add charts, graphs, progress monitors, and more t :::{image} ../../../images/kibana-canvas_tutorialCustomChart_7.17.0.png :alt: Custom line chart added to the workpad using Elasticsearch SQL -:class: screenshot +:screenshot: ::: @@ -122,7 +122,7 @@ To focus your data on a specific time range, add the time filter. % :::{image} ../../../images/kibana-canvas_tutorialCustomTimeFilter_7.17.0.png % :alt: Custom time filter added to the workpad -% :class: screenshot +% :screenshot: % ::: To see how the data changes, set the time filter to **Last 7 days**. As you change the time filter options, the elements automatically update. diff --git a/explore-analyze/visualize/canvas/edit-workpads.md b/explore-analyze/visualize/canvas/edit-workpads.md index 4401080db..0a6f91a8f 100644 --- a/explore-analyze/visualize/canvas/edit-workpads.md +++ b/explore-analyze/visualize/canvas/edit-workpads.md @@ -34,14 +34,14 @@ For example, to change the {{data-source}} for a set of charts: :::{image} ../../../images/kibana-specify_variable_syntax.png :alt: Variable syntax options - :class: screenshot + :screenshot: ::: 2. Copy the variable, then apply it to each element you want to update in the **Expression editor**. 
:::{image} ../../../images/kibana-copy_variable_syntax.png :alt: Copied variable syntax pasted in the Expression editor - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/visualize/custom-visualizations-with-vega.md b/explore-analyze/visualize/custom-visualizations-with-vega.md index 55567608a..9c4557e6b 100644 --- a/explore-analyze/visualize/custom-visualizations-with-vega.md +++ b/explore-analyze/visualize/custom-visualizations-with-vega.md @@ -26,7 +26,7 @@ These grammars have some limitations: they do not support tables, and can’t ru :::{image} ../../images/kibana-vega.png :alt: Vega UI -:class: screenshot +:screenshot: ::: Both **Vega** and **Vega-Lite** use JSON, but {{kib}} has made this simpler to type by integrating [HJSON](https://hjson.github.io/). HJSON supports the following: @@ -224,14 +224,14 @@ To generate the data, **Vega-Lite** uses the `source_0` and `data_0`. `source_0` :::{image} ../../images/kibana-vega_lite_tutorial_4.png :alt: Table for data_0 with columns key - :class: screenshot + :screenshot: ::: 4. To compare to the visually encoded data, select **data_0** from the dropdown. :::{image} ../../images/kibana-vega_lite_tutorial_5.png :alt: Table for data_0 where the key is NaN instead of a string - :class: screenshot + :screenshot: ::: **key** is unable to convert because the property is category (`Men's Clothing`, `Women's Clothing`, etc.) instead of a timestamp. @@ -262,7 +262,7 @@ In the **Vega-Lite** spec, add the `encoding` block: :::{image} ../../images/kibana-vega_lite_tutorial_6.png :alt: Table for data_0 showing that the column time_buckets.buckets.key is undefined - :class: screenshot + :screenshot: ::: @@ -288,7 +288,7 @@ In the **Vega-Lite** spec, add a `transform` block, then click **Update**: :::{image} ../../images/kibana-vega_lite_tutorial_7.png :alt: Table showing data_0 with multiple pages of results - :class: screenshot + :screenshot: ::: Vega-Lite displays **undefined** values because there are duplicate names. @@ -412,7 +412,7 @@ Move your cursor around the stacked area chart. The points are able to indicate :::{image} ../../images/kibana-vega_lite_tutorial_2.png :alt: Vega-Lite tutorial selection enabled -:class: screenshot +:screenshot: ::: The selection is controlled by a signal. To view the signal, click **Inspect** in the toolbar. @@ -664,7 +664,7 @@ Add the `key` and `doc_count` fields as the X- and Y-axis values, then click **U :::{image} ../../images/kibana-vega_tutorial_3.png :alt: vega tutorial 3 -:class: screenshot +:screenshot: ::: @@ -754,7 +754,7 @@ To allow users to filter based on a time range, add a drag interaction, which re :::{image} ../../images/kibana-vega_tutorial_4.png :alt: vega tutorial 4 -:class: screenshot +:screenshot: ::: In the **Vega** spec, add a `signal` to track the X position of the cursor: @@ -1455,7 +1455,7 @@ Use the contextual **Inspect** tool to gain insights into different elements. :::{image} ../../images/kibana-vega_tutorial_inspect_requests.png :alt: vega tutorial inspect requests -:class: screenshot +:screenshot: ::: @@ -1467,7 +1467,7 @@ The runtime data is read from the [runtime scope](https://vega.github.io/vega/do :::{image} ../../images/kibana-vega_tutorial_inspect_data_sets.png :alt: vega tutorial inspect data sets -:class: screenshot +:screenshot: ::: To debug more complex specs, access to the `view` variable. For more information, refer to the [Vega browser debugging process](#vega-browser-debugging-console). 
@@ -1479,7 +1479,7 @@ Because of the dynamic nature of the data in {{es}}, it is hard to help you with :::{image} ../../images/kibana-vega_tutorial_getting_help.png :alt: vega tutorial getting help -:class: screenshot +:screenshot: ::: To copy the response, click **Copy to clipboard**. Paste the copied data to [gist.github.com](https://gist.github.com/), possibly with a .json extension. Use the [raw] button, and share that when asking for help. diff --git a/explore-analyze/visualize/graph.md b/explore-analyze/visualize/graph.md index f845406bf..70d580703 100644 --- a/explore-analyze/visualize/graph.md +++ b/explore-analyze/visualize/graph.md @@ -23,7 +23,7 @@ The terms you want to include in the graph are called *vertices*. The relationsh :::{image} ../../images/kibana-graph-vertices-connections.jpg :alt: Graph components -:class: screenshot +:screenshot: ::: ::::{note} @@ -56,14 +56,14 @@ Use **Graph** to reveal the relationships in your data. :::{image} ../../images/kibana-graph-url-connections.png :alt: URL connections - :class: screenshot + :screenshot: ::: 3. Add more fields, or click an existing field to edit, disable or deselect it. :::{image} ../../images/kibana-graph-menu.png :alt: menu for editing, disabling, or removing a field from the graph - :class: screenshot + :screenshot: :width: 50% ::: @@ -75,7 +75,7 @@ Use **Graph** to reveal the relationships in your data. :::{image} ../../images/kibana-graph-control-bar.png :alt: Graph toolbar - :class: screenshot + :screenshot: :width: 50% ::: diff --git a/explore-analyze/visualize/graph/graph-configuration.md b/explore-analyze/visualize/graph/graph-configuration.md index 6f7d58e2d..a5dbe2290 100644 --- a/explore-analyze/visualize/graph/graph-configuration.md +++ b/explore-analyze/visualize/graph/graph-configuration.md @@ -47,7 +47,7 @@ You can also use security to grant read only or all access to different roles. W :::{image} ../../../images/kibana-graph-read-only-badge.png :alt: Example of Graph's read only access indicator in Kibana's header -:class: screenshot +:screenshot: :width: 50% ::: diff --git a/explore-analyze/visualize/image-panels.md b/explore-analyze/visualize/image-panels.md index 90f70c7da..963ac42b4 100644 --- a/explore-analyze/visualize/image-panels.md +++ b/explore-analyze/visualize/image-panels.md @@ -24,7 +24,7 @@ You can upload images from your computer, select previously uploaded images, or :::{image} ../../images/kibana-dashboard_addImageEditor_8.7.0.png :alt: Add image editor -:class: screenshot +:screenshot: ::: To manage your uploaded image files, go to the **Files** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md). 
diff --git a/explore-analyze/visualize/legacy-editors/timelion.md b/explore-analyze/visualize/legacy-editors/timelion.md index abaa0e958..482e6d131 100644 --- a/explore-analyze/visualize/legacy-editors/timelion.md +++ b/explore-analyze/visualize/legacy-editors/timelion.md @@ -221,7 +221,7 @@ Move the legend to the north west position with two columns, then click **Updat :::{image} ../../../images/kibana-timelion-customize04.png :alt: Final time series visualization -:class: screenshot +:screenshot: :::   @@ -344,7 +344,7 @@ Customize and format the visualization using the following functions, then click :::{image} ../../../images/kibana-timelion-math05.png :alt: Final visualization that displays inbound and outbound network traffic -:class: screenshot +:screenshot: :::   @@ -529,7 +529,7 @@ Customize and format the visualization using the following functions, then click :::{image} ../../../images/kibana-timelion-conditional04.png :alt: Final visualization that displays outliers and patterns over time -:class: screenshot +:screenshot: :::   diff --git a/explore-analyze/visualize/legacy-editors/tsvb.md b/explore-analyze/visualize/legacy-editors/tsvb.md index 39e046abe..770251a13 100644 --- a/explore-analyze/visualize/legacy-editors/tsvb.md +++ b/explore-analyze/visualize/legacy-editors/tsvb.md @@ -19,7 +19,7 @@ With **TSVB**, you can: :::{image} ../../../images/kibana-tsvb-screenshot.png :alt: TSVB overview -:class: screenshot +:screenshot: ::: @@ -234,7 +234,7 @@ Performing math across data series is unsupported in **TSVB**. To calculate the :::{image} ../../../images/kibana-tsvb_clone_series.png :alt: Clone Series action - :class: screenshot + :screenshot: ::: 2. Click **Options**, then enter the offset value in the **Offset series time by** field. diff --git a/explore-analyze/visualize/lens.md b/explore-analyze/visualize/lens.md index 54b48e07f..6a35ee8d7 100644 --- a/explore-analyze/visualize/lens.md +++ b/explore-analyze/visualize/lens.md @@ -121,7 +121,7 @@ To use a keyboard instead of a mouse, use the **Lens** fully accessible and cont :::{image} ../../images/kibana-lens_drag_drop_2.png :alt: Lens drag and drop focus state - :class: screenshot + :screenshot: ::: 2. Complete the following actions: @@ -133,7 +133,7 @@ To use a keyboard instead of a mouse, use the **Lens** fully accessible and cont :::{image} ../../images/kibana-lens_drag_drop_3.gif :alt: Using drag and drop to reorder - :class: screenshot + :screenshot: ::: 3. To confirm the action, press Space bar. To cancel, press Esc. @@ -225,7 +225,7 @@ Annotations allow you to call out specific points in your visualizations that ar :::{image} ../../images/kibana-lens_annotations_8.2.0.png :alt: Lens annotations -:class: screenshot +:screenshot: ::: Annotations support two placement types: @@ -287,7 +287,7 @@ For example, to track the number of bytes in the 75th percentile, add a shaded * :::{image} ../../images/kibana-lens_referenceLine_7.16.png :alt: Lens drag and drop focus state -:class: screenshot +:screenshot: ::: 1. In the layer pane, click **Add layer > Reference lines**. 
diff --git a/explore-analyze/visualize/link-panels.md b/explore-analyze/visualize/link-panels.md index a0424a296..c4f49fc8c 100644 --- a/explore-analyze/visualize/link-panels.md +++ b/explore-analyze/visualize/link-panels.md @@ -12,7 +12,7 @@ You can use **Links** panels to create links to other dashboards or external web :::{image} ../../images/kibana-dashboard_links_panel.png :alt: A screenshot displaying the new links panel -:class: screenshot +:screenshot: ::: * [Add a links panel](#add-links-panel) @@ -56,7 +56,7 @@ To edit links panels: :::{image} ../../images/kibana-edit-links-panel.png :alt: A screenshot displaying the Edit icon next to the link - :class: screenshot + :screenshot: ::: 3. Edit the link as needed and then click **Update link**. diff --git a/explore-analyze/visualize/maps.md b/explore-analyze/visualize/maps.md index cdc4dd771..ddf24ad84 100644 --- a/explore-analyze/visualize/maps.md +++ b/explore-analyze/visualize/maps.md @@ -35,7 +35,7 @@ Use multiple layers and indices to show all your data in a single map. Show how :::{image} ../../images/kibana-sample_data_ecommerce.png :alt: sample data ecommerce -:class: screenshot +:screenshot: ::: To learn about specific types of layers, check out [Heat map layer](../../explore-analyze/visualize/maps/heatmap-layer.md), [Tile layer](../../explore-analyze/visualize/maps/tile-layer.md), and [Vector layer](../../explore-analyze/visualize/maps/vector-layer.md). @@ -49,7 +49,7 @@ This animated map uses the time slider to show Portland buses over a period of 1 :::{image} ../../images/kibana-timeslider.gif :alt: timeslider -:class: screenshot +:screenshot: ::: To create this type of map, check out [Track, visualize, and alert assets in real time](../../explore-analyze/visualize/maps/asset-tracking-tutorial.md). 
@@ -67,7 +67,7 @@ This choropleth map shows the density of non-emergency service requests in San D :::{image} ../../images/kibana-embed_in_dashboard.jpeg :alt: embed in dashboard -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/asset-tracking-tutorial.md b/explore-analyze/visualize/maps/asset-tracking-tutorial.md index 79efb3623..72a46d307 100644 --- a/explore-analyze/visualize/maps/asset-tracking-tutorial.md +++ b/explore-analyze/visualize/maps/asset-tracking-tutorial.md @@ -24,7 +24,7 @@ When you complete this tutorial, you’ll have a map that looks like this: :::{image} ../../../images/kibana-construction_zones.png :alt: construction zones -:class: screenshot +:screenshot: ::: @@ -369,12 +369,12 @@ If you already have an agent policy, get its identifier from the `View policy` a :::{image} ../../../images/kibana-agent-policy-id.png :alt: agent policy id -:class: screenshot +:screenshot: ::: :::{image} ../../../images/kibana-policy_id.png :alt: policy id -:class: screenshot +:screenshot: ::: :::::: @@ -470,7 +470,7 @@ POST kbn:/api/data_views/data_view :::{image} ../../../images/kibana-data_view.png :alt: data view -:class: screenshot +:screenshot: ::: ::::{tip} @@ -488,7 +488,7 @@ You may want to tweak this Data View to adjust the field names and number or dat :::{image} ../../../images/kibana-discover.png :alt: discover -:class: screenshot +:screenshot: ::: @@ -534,7 +534,7 @@ At this point, you have a map with lines that represent the routes of the TriMet :::{image} ../../../images/kibana-tracks_layer.png :alt: tracks layer -:class: screenshot +:screenshot: ::: @@ -570,7 +570,7 @@ Add a layer that uses attributes in the data to set the style and orientation of :::{image} ../../../images/kibana-top_hits_layer_style.png :alt: top hits layer style - :class: screenshot + :screenshot: ::: 7. Click **Keep changes**. @@ -580,7 +580,7 @@ Your map should automatically refresh every 10 seconds to show the latest vehicl :::{image} ../../../images/kibana-tracks_and_top_hits.png :alt: tracks and top hits -:class: screenshot +:screenshot: ::: @@ -620,7 +620,7 @@ Your map is now complete for now, congratulations! :::{image} ../../../images/kibana-construction_zones.png :alt: construction zones -:class: screenshot +:screenshot: ::: @@ -694,7 +694,7 @@ For this example, you will set the rule to check every minute. However, when run :::{image} ../../../images/kibana-rule_configuration.png :alt: rule configuration - :class: screenshot + :screenshot: ::: 11. Under **Actions**, select the **Index** connector type. @@ -720,7 +720,7 @@ For this example, you will set the rule to check every minute. However, when run :::{image} ../../../images/kibana-alert_connector.png :alt: alert connector - :class: screenshot + :screenshot: ::: 16. Click **Save**. @@ -743,7 +743,7 @@ With the alert configured and running, in a few minutes your `trimet_alerts` ind :::{image} ../../../images/kibana-vehicle_alerts.png :alt: vehicle alerts - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/visualize/maps/heatmap-layer.md b/explore-analyze/visualize/maps/heatmap-layer.md index 405533e5a..d470a9c71 100644 --- a/explore-analyze/visualize/maps/heatmap-layer.md +++ b/explore-analyze/visualize/maps/heatmap-layer.md @@ -12,7 +12,7 @@ Heat map layers cluster point data to show locations with higher densities. 
:::{image} ../../../images/kibana-heatmap_layer.png :alt: heatmap layer -:class: screenshot +:screenshot: ::: To add a heat map layer to your map, click **Add layer**, then select **Heat map**. The index must contain at least one field mapped as [geo_point](elasticsearch://reference/elasticsearch/mapping-reference/geo-point.md) or [geo_shape](elasticsearch://reference/elasticsearch/mapping-reference/geo-shape.md). diff --git a/explore-analyze/visualize/maps/import-geospatial-data.md b/explore-analyze/visualize/maps/import-geospatial-data.md index 84e119d6c..5864cee08 100644 --- a/explore-analyze/visualize/maps/import-geospatial-data.md +++ b/explore-analyze/visualize/maps/import-geospatial-data.md @@ -98,7 +98,7 @@ When feature editing is open, a feature editing toolbox is displayed on the left :::{image} ../../../images/kibana-drawing_layer.png :alt: drawing layer -:class: screenshot +:screenshot: ::: To draw features: diff --git a/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md b/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md index b7a7187da..3f0cacd4b 100644 --- a/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md +++ b/explore-analyze/visualize/maps/indexing-geojson-data-tutorial.md @@ -37,7 +37,7 @@ The data represents two real airports, two fictitious flight routes, and fictiti :::{image} ../../../images/kibana-fu_gs_new_england_map.png :alt: fu gs new england map - :class: screenshot + :screenshot: ::: @@ -65,7 +65,7 @@ For each GeoJSON file you downloaded, complete the following steps: :::{image} ../../../images/kibana-fu_gs_flight_paths.png :alt: fu gs flight paths - :class: screenshot + :screenshot: ::: @@ -94,7 +94,7 @@ Looking at the `Lightning detected` layer, it’s clear where lightning has stru :::{image} ../../../images/kibana-fu_gs_lightning_intensity.png :alt: fu gs lightning intensity - :class: screenshot + :screenshot: ::: @@ -114,6 +114,6 @@ Your final map might look like this: :::{image} ../../../images/kibana-fu_gs_final_map.png :alt: fu gs final map -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-aggregations.md b/explore-analyze/visualize/maps/maps-aggregations.md index a3fc1117e..955c26b7b 100644 --- a/explore-analyze/visualize/maps/maps-aggregations.md +++ b/explore-analyze/visualize/maps/maps-aggregations.md @@ -33,7 +33,7 @@ In the following example, the Grid aggregation layer is only visible when the ma :::{image} ../../../images/kibana-grid_to_docs.gif :alt: grid to docs -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-connect-to-ems.md b/explore-analyze/visualize/maps/maps-connect-to-ems.md index 5b224d2d0..45c111802 100644 --- a/explore-analyze/visualize/maps/maps-connect-to-ems.md +++ b/explore-analyze/visualize/maps/maps-connect-to-ems.md @@ -521,7 +521,7 @@ If you cannot connect to Elastic Maps Service from the {{kib}} server or browser :::{image} ../../../images/kibana-elastic-maps-server-instructions.png :alt: Set-up instructions - :class: screenshot + :screenshot: ::: @@ -603,7 +603,7 @@ services: :::{image} ../../../images/kibana-elastic-maps-server-basemaps.png :alt: Basemaps download options -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/explore-analyze/visualize/maps/maps-create-filter-from-map.md b/explore-analyze/visualize/maps/maps-create-filter-from-map.md index 3b2ea3dc9..b8f6703a5 100644 --- a/explore-analyze/visualize/maps/maps-create-filter-from-map.md +++ 
b/explore-analyze/visualize/maps/maps-create-filter-from-map.md @@ -50,7 +50,7 @@ You can create spatial filters in two ways: :::{image} ../../../images/kibana-create_spatial_filter.png :alt: create spatial filter -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-getting-started.md b/explore-analyze/visualize/maps/maps-getting-started.md index 2df0bb655..a67d03f04 100644 --- a/explore-analyze/visualize/maps/maps-getting-started.md +++ b/explore-analyze/visualize/maps/maps-getting-started.md @@ -21,7 +21,7 @@ When you complete this tutorial, you’ll have a map that looks like this: :::{image} ../../../images/kibana-sample_data_web_logs.png :alt: sample data web logs -:class: screenshot +:screenshot: ::: @@ -75,7 +75,7 @@ The first layer you’ll add is a choropleth layer to shade world countries by w :::{image} ../../../images/kibana-gs_add_cloropeth_layer.png :alt: Map showing the Total Requests by Destination layer - :class: screenshot + :screenshot: ::: @@ -107,7 +107,7 @@ This layer displays web log documents as points. The layer is only visible when :::{image} ../../../images/kibana-gs_add_es_document_layer.png :alt: Map showing what zoom level looks like a level 9 - :class: screenshot + :screenshot: ::: @@ -142,7 +142,7 @@ You’ll create a layer for [aggregated data](../../query-filter/aggregations.md :::{image} ../../../images/kibana-sample_data_web_logs.png :alt: Map showing what zoom level 3 looks like - :class: screenshot + :screenshot: ::: @@ -163,7 +163,7 @@ View your geospatial data alongside a heat map and pie chart, and then filter th :::{image} ../../../images/kibana-gs_dashboard_with_map.png :alt: Map in a dashboard with 2 other panels - :class: screenshot + :screenshot: ::: 3. To filter for documents with unusually high byte values, click and drag in the **Bytes distribution** chart. @@ -175,14 +175,14 @@ View your geospatial data alongside a heat map and pie chart, and then filter th :::{image} ../../../images/kibana-gs_tooltip_filter.png :alt: Tooltip on map - :class: screenshot + :screenshot: ::: Your filtered map should look similar to this: :::{image} ../../../images/kibana-gs_map_filtered.png :alt: Map showing filtered data - :class: screenshot + :screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-layer-based-filtering.md b/explore-analyze/visualize/maps/maps-layer-based-filtering.md index 7063c854b..282f1fa5d 100644 --- a/explore-analyze/visualize/maps/maps-layer-based-filtering.md +++ b/explore-analyze/visualize/maps/maps-layer-based-filtering.md @@ -17,6 +17,6 @@ Layer filters are not applied to the right side of **term joins**. You can apply :::{image} ../../../images/kibana-layer_search.png :alt: layer search -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-search-across-multiple-indices.md b/explore-analyze/visualize/maps/maps-search-across-multiple-indices.md index 3fbf51134..6b181616c 100644 --- a/explore-analyze/visualize/maps/maps-search-across-multiple-indices.md +++ b/explore-analyze/visualize/maps/maps-search-across-multiple-indices.md @@ -34,7 +34,7 @@ the `kibana_sample_data_flights` layer is empty because the index `kibana_sample :::{image} ../../../images/kibana-global_search_multiple_indices_query1.png :alt: global search multiple indices query1 -:class: screenshot +:screenshot: ::: If you instead query for @@ -47,6 +47,6 @@ the `kibana_sample_data_flights` layer includes data. 
:::{image} ../../../images/kibana-global_search_multiple_indices_query2.png :alt: global search multiple indices query2 -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-search.md b/explore-analyze/visualize/maps/maps-search.md index 45e0ad41a..84effd104 100644 --- a/explore-analyze/visualize/maps/maps-search.md +++ b/explore-analyze/visualize/maps/maps-search.md @@ -14,7 +14,7 @@ This image shows an example of global search and global time narrowing results. :::{image} ../../../images/kibana-global_search_bar.png :alt: global search and global time narrowing results -:class: screenshot +:screenshot: ::: Only layers requesting data from {{es}} are narrowed by global search and global time. To add a layer that requests data from {{es}} to your map, click **Add layer**, then select one of the following: diff --git a/explore-analyze/visualize/maps/maps-top-hits-aggregation.md b/explore-analyze/visualize/maps/maps-top-hits-aggregation.md index 9abeaebe9..506b38541 100644 --- a/explore-analyze/visualize/maps/maps-top-hits-aggregation.md +++ b/explore-analyze/visualize/maps/maps-top-hits-aggregation.md @@ -19,6 +19,6 @@ To enable top hits: :::{image} ../../../images/kibana-top_hits.png :alt: top hits -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-troubleshooting.md b/explore-analyze/visualize/maps/maps-troubleshooting.md index 5711c7973..9c88eb025 100644 --- a/explore-analyze/visualize/maps/maps-troubleshooting.md +++ b/explore-analyze/visualize/maps/maps-troubleshooting.md @@ -21,12 +21,12 @@ Maps uses the [{{es}} vector tile search API](https://www.elastic.co/docs/api/do :::{image} ../../../images/kibana-vector_tile_inspector.png :alt: vector tile inspector -:class: screenshot +:screenshot: ::: :::{image} ../../../images/kibana-requests_inspector.png :alt: requests inspector -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/maps-vector-style-properties.md b/explore-analyze/visualize/maps/maps-vector-style-properties.md index 8dcb60053..4fc226901 100644 --- a/explore-analyze/visualize/maps/maps-vector-style-properties.md +++ b/explore-analyze/visualize/maps/maps-vector-style-properties.md @@ -50,7 +50,7 @@ Available icons :::{image} ../../../images/kibana-maki-icons.png :alt: maki icons -:class: screenshot +:screenshot: ::: Custom Icons diff --git a/explore-analyze/visualize/maps/reverse-geocoding-tutorial.md b/explore-analyze/visualize/maps/reverse-geocoding-tutorial.md index 71ad146f2..1085d6b73 100644 --- a/explore-analyze/visualize/maps/reverse-geocoding-tutorial.md +++ b/explore-analyze/visualize/maps/reverse-geocoding-tutorial.md @@ -24,7 +24,7 @@ When you complete this tutorial, you’ll have a map that looks like this: :::{image} ../../../images/kibana-csa_regions_by_web_traffic.png :alt: Map showing custom regions -:class: screenshot +:screenshot: ::: @@ -69,7 +69,7 @@ Looking at the map, you get a sense of what constitutes a metro area in the eyes :::{image} ../../../images/kibana-csa_regions.png :alt: Map showing metro area -:class: screenshot +:screenshot: ::: @@ -154,7 +154,7 @@ Your web log data now contains `csa.GEOID` and `csa.NAME` fields from the matchi :::{image} ../../../images/kibana-discover_enriched_web_log.png :alt: View of data in Discover -:class: screenshot +:screenshot: ::: @@ -189,7 +189,7 @@ Now that our web traffic contains CSA region identifiers, you’ll visualize CSA :::{image} ../../../images/kibana-csa_regions_by_web_traffic.png :alt: Final map showing 
custom regions -:class: screenshot +:screenshot: ::: Congratulations! You have completed the tutorial and have the recipe for visualizing custom regions. You can now try replicating this same analysis with your own data. diff --git a/explore-analyze/visualize/maps/terms-join.md b/explore-analyze/visualize/maps/terms-join.md index 77fea4082..369d64c9a 100644 --- a/explore-analyze/visualize/maps/terms-join.md +++ b/explore-analyze/visualize/maps/terms-join.md @@ -22,7 +22,7 @@ The [choropleth layer example](maps-getting-started.md#maps-add-choropleth-layer :::{image} ../../../images/kibana-gs_add_cloropeth_layer.png :alt: gs add cloropeth layer -:class: screenshot +:screenshot: ::: ### How a term join works [_how_a_term_join_works] @@ -33,7 +33,7 @@ The cloropeth example uses the shared key, [ISO 3166-1 alpha-2 code](https://wik :::{image} ../../../images/kibana-terms_join_shared_key_config.png :alt: terms join shared key config -:class: screenshot +:screenshot: ::: @@ -71,7 +71,7 @@ The METRICS configuration defines two metric aggregations: :::{image} ../../../images/kibana-terms_join_metric_config.png :alt: terms join metric config -:class: screenshot +:screenshot: ::: The right source does not provide individual documents, but instead provides the metrics from a terms aggregation. The metrics are calculated from the following sample web logs documents. diff --git a/explore-analyze/visualize/maps/tile-layer.md b/explore-analyze/visualize/maps/tile-layer.md index fcd5094b3..2bfb11b42 100644 --- a/explore-analyze/visualize/maps/tile-layer.md +++ b/explore-analyze/visualize/maps/tile-layer.md @@ -12,7 +12,7 @@ Tile layers display image tiles served from a tile server. :::{image} ../../../images/kibana-tile_layer.png :alt: tile layer -:class: screenshot +:screenshot: ::: To add a tile layer to your map, click **Add layer**, then select one of the following: diff --git a/explore-analyze/visualize/maps/vector-layer.md b/explore-analyze/visualize/maps/vector-layer.md index bc3b94eeb..0e43a8758 100644 --- a/explore-analyze/visualize/maps/vector-layer.md +++ b/explore-analyze/visualize/maps/vector-layer.md @@ -12,7 +12,7 @@ Vector layers display points, lines, and polygons. 
:::{image} ../../../images/kibana-vector_layer.png :alt: vector layer -:class: screenshot +:screenshot: ::: To add a vector layer to your map, click **Add layer**, then select one of the following: diff --git a/explore-analyze/visualize/maps/vector-style.md b/explore-analyze/visualize/maps/vector-style.md index 7433326be..0d2daf51c 100644 --- a/explore-analyze/visualize/maps/vector-style.md +++ b/explore-analyze/visualize/maps/vector-style.md @@ -19,7 +19,7 @@ This image shows an example of static styling using the [Kibana sample web logs] :::{image} ../../../images/kibana-vector_style_static.png :alt: vector style static -:class: screenshot +:screenshot: ::: @@ -34,7 +34,7 @@ This image shows an example of data driven styling using the [Kibana sample web :::{image} ../../../images/kibana-vector_style_dynamic.png :alt: vector style dynamic -:class: screenshot +:screenshot: ::: @@ -78,7 +78,7 @@ This image shows an example of quantitative data driven styling using the [Kiban :::{image} ../../../images/kibana-quantitative_data_driven_styling.png :alt: quantitative data driven styling -:class: screenshot +:screenshot: ::: @@ -93,6 +93,6 @@ This image shows an example of class styling using the [Kibana sample web logs]( :::{image} ../../../images/kibana-vector_style_class.png :alt: vector style class -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/maps/vector-tooltip.md b/explore-analyze/visualize/maps/vector-tooltip.md index 0bf3f38c3..e0426e657 100644 --- a/explore-analyze/visualize/maps/vector-tooltip.md +++ b/explore-analyze/visualize/maps/vector-tooltip.md @@ -14,7 +14,7 @@ If more than one feature exists at a location, the tooltip displays the attribut :::{image} ../../../images/kibana-multifeature_tooltip.png :alt: multifeature tooltip -:class: screenshot +:screenshot: ::: @@ -35,6 +35,6 @@ This image shows a locked tooltip with features from three layers. The tooltip d :::{image} ../../../images/kibana-locked_tooltip.png :alt: locked tooltip -:class: screenshot +:screenshot: ::: diff --git a/explore-analyze/visualize/text-panels.md b/explore-analyze/visualize/text-panels.md index b4e59e520..21dc0d0fa 100644 --- a/explore-analyze/visualize/text-panels.md +++ b/explore-analyze/visualize/text-panels.md @@ -18,28 +18,28 @@ For example, when you enter: :::{image} ../../images/kibana-markdown_example_1.png :alt: Markdown text with links -:class: screenshot +:screenshot: ::: The following instructions are displayed: :::{image} ../../images/kibana-markdown_example_2.png :alt: Panel with markdown link text -:class: screenshot +:screenshot: ::: Or when you enter: :::{image} ../../images/kibana-markdown_example_3.png :alt: Markdown text with image file -:class: screenshot +:screenshot: ::: The following image is displayed: :::{image} ../../images/kibana-markdown_example_4.png :alt: Panel with markdown image -:class: screenshot +:screenshot: ::: For detailed information about writing on GitHub, click **Help** on the top-right of the Markdown editor. 
diff --git a/get-started/the-stack.md b/get-started/the-stack.md index 9116a666e..e9445a02e 100644 --- a/get-started/the-stack.md +++ b/get-started/the-stack.md @@ -4,6 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/kibana/current/introduction.html - https://www.elastic.co/guide/en/kibana/current/index.html - https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html + - https://www.elastic.co/guide/en/elastic-stack/current/overview.html --- # The {{stack}} @@ -16,7 +17,13 @@ $$$kibana-navigation-search$$$ What is the {{stack}}? It’s a fast and highly scalable set of components — {{es}}, {{kib}}, {{beats}}, {{ls}}, and others — that together enable you to securely take data from any source, in any format, and then search, analyze, and visualize it. -You have many options for [deploying the {{stack}}](./deployment-options.md) to suit your needs. You can deploy it on your own hardware, in the cloud, or use a managed service on {{ecloud}}. +The products in the {{es}} are designed to be used together and releases are synchronized to simplify the installation and upgrade process. + +You have many options for deploying the {{stack}} to suit your needs. You can deploy it on your own hardware, in the cloud, or use a managed service on {{ecloud}}. + +:::{tip} +To learn how to deploy {{es}}, {{kib}}, and supporting orchestration technologies, refer to [](/deploy-manage/index.md). To learn how to deploy additional ingest and consume components, refer to the documentation for the component. +::: ![Components of the Elastic Stack](../images/stack-components-diagram.svg) diff --git a/index.md b/index.md index 7412976c3..8d6d6d4a9 100644 --- a/index.md +++ b/index.md @@ -1 +1,6 @@ -# Elastic documentation!!!! +--- +navigation_title: Elastic documentation +layout: landing-page +--- + +# Elastic documentation diff --git a/manage-data/data-store/data-streams/manage-data-stream.md b/manage-data/data-store/data-streams/manage-data-stream.md index 1d3f8d729..e948dff3a 100644 --- a/manage-data/data-store/data-streams/manage-data-stream.md +++ b/manage-data/data-store/data-streams/manage-data-stream.md @@ -16,7 +16,7 @@ In {{es-serverless}}, indices matching the `logs-*-*` pattern use the logsDB ind :::{image} ../../../images/serverless-management-data-stream.png :alt: Data stream details -:class: screenshot +:screenshot: ::: * To view more information about a data stream, such as its generation or its current index lifecycle policy, click the stream’s name. From this view, you can navigate to **Discover** to further explore data within the data stream. diff --git a/manage-data/data-store/index-basics.md b/manage-data/data-store/index-basics.md index 2dbeefb86..7eaa763dc 100644 --- a/manage-data/data-store/index-basics.md +++ b/manage-data/data-store/index-basics.md @@ -77,7 +77,7 @@ Investigate your indices and perform operations from the **Indices** view. :::{image} /images/serverless-index-management-indices.png :alt: Index Management indices -:class: screenshot +:screenshot: ::: * To show details and perform operations, click the index name. To perform operations on multiple indices, select their checkboxes and then open the **Manage** menu. For more information on managing indices, refer to [Index APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices). 
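The same details and operations are also available through the API; a minimal sketch, assuming an index named `my-index-000001` (the name is illustrative):

```console
# Inspect settings and stats for a single index
GET my-index-000001/_settings
GET my-index-000001/_stats

# Operations that are otherwise run from the Manage menu
POST my-index-000001/_flush
POST my-index-000001/_forcemerge?max_num_segments=1
```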
@@ -93,7 +93,7 @@ Investigate your data streams and address lifecycle management needs in the **Da :::{image} /images/serverless-management-data-stream.png :alt: Data stream details -:class: screenshot +:screenshot: ::: In {{es-serverless}}, indices matching the `logs-*-*` pattern use the logsDB index mode by default. The logsDB index mode creates a [logs data stream](https://www.elastic.co/guide/en/elasticsearch/reference/master/logs-data-stream.html). @@ -111,7 +111,7 @@ Create, edit, clone, and delete your index templates in the **Index Templates** :::{image} /images/serverless-index-management-index-templates.png :alt: Index templates -:class: screenshot +:screenshot: ::: * To show details and perform operations, click the template name. @@ -127,7 +127,7 @@ Create, edit, clone, and delete your component templates in the **Component Temp :::{image} /images/serverless-management-component-templates.png :alt: Component templates -:class: screenshot +:screenshot: ::: * To show details and perform operations, click the template name. @@ -141,7 +141,7 @@ Add data from your existing indices to incoming documents using the **Enrich Pol :::{image} /images/serverless-management-enrich-policies.png :alt: Enrich policies -:class: screenshot +:screenshot: ::: * To show details click the policy name. diff --git a/manage-data/data-store/templates/index-template-management.md b/manage-data/data-store/templates/index-template-management.md index 666239ed4..4cfb74039 100644 --- a/manage-data/data-store/templates/index-template-management.md +++ b/manage-data/data-store/templates/index-template-management.md @@ -13,7 +13,7 @@ Create, edit, clone, and delete your index templates in the **Index Templates** :::{image} ../../../images/elasticsearch-reference-management-index-templates.png :alt: Index templates -:class: screenshot +:screenshot: ::: In {{serverless-full}}, the default **logs** template uses the logsDB index mode to create a [logs data stream](../data-streams/logs-data-stream.md). @@ -30,7 +30,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management_index_create_wizard.png :alt: Create wizard - :class: screenshot + :screenshot: ::: 2. In the **Name** field, enter `my-index-template`. @@ -47,7 +47,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management_index_component_template.png :alt: Component templates page - :class: screenshot + :screenshot: ::: 2. Define index settings. These are optional. For this tutorial, leave this section blank. 
@@ -55,7 +55,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management-index-templates-mappings.png :alt: Mapped fields page - :class: screenshot + :screenshot: ::: Alternatively, you can click the **Load JSON** link and define the mapping as JSON: diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md index 052c39501..c1d846580 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md @@ -136,7 +136,7 @@ In this step, you’ll create a Python script that generates logs in JSON format This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event. - For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.md). + For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html). Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. diff --git a/manage-data/ingest/sample-data.md b/manage-data/ingest/sample-data.md index 0d4a0c854..690f4f536 100644 --- a/manage-data/ingest/sample-data.md +++ b/manage-data/ingest/sample-data.md @@ -21,7 +21,7 @@ You can also access and install them from the **Integrations** page. Go to **Int
:::{image} /images/sample-data-sets.png :alt: Sample data sets -:class: screenshot +:screenshot: ::: ## Run the makelogs script @@ -46,5 +46,5 @@ Go to **Integrations** and search for **Upload a file**. On the **Upload file**
:::{image} /images/sample-upload-a-file.png :alt: Upload a sample data file -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/manage-data/ingest/transform-enrich/data-enrichment.md b/manage-data/ingest/transform-enrich/data-enrichment.md index 5c277f11d..adcc892e6 100644 --- a/manage-data/ingest/transform-enrich/data-enrichment.md +++ b/manage-data/ingest/transform-enrich/data-enrichment.md @@ -79,7 +79,7 @@ Use the **Enrich Policies** view to add data from your existing indices to incom :::{image} ../../../images/elasticsearch-reference-management-enrich-policies.png :alt: Enrich policies -:class: screenshot +:screenshot: ::: When creating an enrich policy, the UI walks you through the configuration setup and selecting the fields. Before you can use the policy with an enrich processor or {{esql}} query, you must execute the policy. diff --git a/manage-data/ingest/transform-enrich/example-parse-logs.md b/manage-data/ingest/transform-enrich/example-parse-logs.md index d38317bce..d53d0e87b 100644 --- a/manage-data/ingest/transform-enrich/example-parse-logs.md +++ b/manage-data/ingest/transform-enrich/example-parse-logs.md @@ -26,7 +26,7 @@ These logs contain a timestamp, IP address, and user agent. You want to give the :::{image} ../../../images/elasticsearch-reference-ingest-pipeline-list.png :alt: Kibana's Ingest Pipelines list view - :class: screenshot + :screenshot: ::: 2. Click **Create pipeline > New pipeline**. @@ -55,7 +55,7 @@ These logs contain a timestamp, IP address, and user agent. You want to give the :::{image} ../../../images/elasticsearch-reference-ingest-pipeline-processor.png :alt: Processors for Ingest Pipelines - :class: screenshot + :screenshot: ::: The four processors will run sequentially:
Grok > Date > GeoIP > User agent
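An equivalent pipeline can also be created with the create pipeline API instead of the UI. The following is a minimal sketch for a simplified log line of the form `<ip> [<timestamp>] <user agent>`; the pipeline name, grok pattern, and field names are illustrative rather than the tutorial's exact values.

```console
PUT _ingest/pipeline/my-pipeline
{
  "description": "Sketch: grok, then date, geoip, and user_agent",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IPORHOST:source.ip} \\[%{HTTPDATE:timestamp}\\] %{GREEDYDATA:user_agent}"]
      }
    },
    {
      "date": {
        "field": "timestamp",
        "formats": ["dd/MMM/yyyy:HH:mm:ss Z"]
      }
    },
    {
      "geoip": {
        "field": "source.ip"
      }
    },
    {
      "user_agent": {
        "field": "user_agent"
      }
    }
  ]
}
```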
You can reorder processors using the arrow icons. diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md index 001008faa..552e3e795 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md @@ -28,7 +28,7 @@ In **{{project-settings}} → {{manage-app}} → {{ingest-pipelines-app}}**, you :::{image} ../../../images/serverless-ingest-pipelines-management.png :alt: {{ingest-pipelines-app}} -:class: screenshot +:screenshot: ::: To create a pipeline, click **Create pipeline → New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md). @@ -42,5 +42,5 @@ Before you use a pipeline in production, you should test it using sample documen :::{image} ../../../images/serverless-ingest-pipelines-test.png :alt: Test a pipeline in {{ingest-pipelines-app}} -:class: screenshot +:screenshot: ::: diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines.md b/manage-data/ingest/transform-enrich/ingest-pipelines.md index ed94272ef..c1e739850 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines.md @@ -39,7 +39,7 @@ In {{kib}}, open the main menu and click **Stack Management > Ingest Pipelines** :::{image} ../../../images/elasticsearch-reference-ingest-pipeline-list.png :alt: Kibana's Ingest Pipelines list view -:class: screenshot +:screenshot: ::: To create a pipeline, click **Create pipeline > New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md). @@ -101,7 +101,7 @@ Before using a pipeline in production, we recommend you test it using sample doc :::{image} ../../../images/elasticsearch-reference-test-a-pipeline.png :alt: Test a pipeline in Kibana -:class: screenshot +:screenshot: ::: You can also test pipelines using the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate). You can specify a configured pipeline in the request path. For example, the following request tests `my-pipeline`. @@ -305,7 +305,7 @@ $$$pipeline-custom-logs-index-template$$$ :::{image} ../../../images/elasticsearch-reference-custom-logs.png :alt: Set up custom log integration in Fleet - :class: screenshot + :screenshot: ::: 5. Use the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover) to roll over your data stream. This ensures {{es}} applies the index template and its pipeline settings to any new data for the integration. 
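That rollover can be issued directly from Dev Tools; a minimal sketch, assuming the integration writes to a data stream named `logs-my_app-default` (the name is illustrative):

```console
# Roll over the data stream so new backing indices pick up the updated template and pipeline
POST logs-my_app-default/_rollover
```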
@@ -339,7 +339,7 @@ $$$pipeline-custom-logs-configuration$$$ :::{image} ../../../images/elasticsearch-reference-custom-logs-pipeline.png :alt: Custom pipeline configuration for custom log integration - :class: screenshot + :screenshot: ::: diff --git a/manage-data/ingest/transform-enrich/logstash-pipelines.md b/manage-data/ingest/transform-enrich/logstash-pipelines.md index 8d295472a..003c219e0 100644 --- a/manage-data/ingest/transform-enrich/logstash-pipelines.md +++ b/manage-data/ingest/transform-enrich/logstash-pipelines.md @@ -14,7 +14,7 @@ In **{{project-settings}} → {{manage-app}} → {{ls-pipelines-app}}**, you can :::{image} ../../../images/serverless-logstash-pipelines-management.png :alt: {{ls-pipelines-app}}" -:class: screenshot +:screenshot: ::: On the {{ls}} side, you must enable configuration management and register {{ls}} to use the centrally managed pipeline configurations. diff --git a/manage-data/ingest/upload-data-files.md b/manage-data/ingest/upload-data-files.md index 82b0efa9c..324b200f1 100644 --- a/manage-data/ingest/upload-data-files.md +++ b/manage-data/ingest/upload-data-files.md @@ -24,7 +24,7 @@ To use the Data Visualizer, click **Upload a file** on the {{es}} **Getting Star :::{image} /images/serverless-file-uploader-UI.png :alt: File upload UI -:class: screenshot +:screenshot: ::: Drag a file into the upload area or click **Select or drag and drop a file** to choose a file from your computer. diff --git a/manage-data/lifecycle/data-tiers.md b/manage-data/lifecycle/data-tiers.md index 470789937..4631cf585 100644 --- a/manage-data/lifecycle/data-tiers.md +++ b/manage-data/lifecycle/data-tiers.md @@ -98,7 +98,7 @@ To add a warm, cold, or frozen tier when you create a deployment: :::{image} ../../images/elasticsearch-reference-ess-advanced-config-data-tiers.png :alt: {{ecloud}}'s deployment Advanced configuration page -:class: screenshot +:screenshot: ::: To add a data tier to an existing deployment: diff --git a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md index bce55cb0d..1ca8927b1 100644 --- a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md +++ b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md @@ -30,7 +30,7 @@ Investigate your indices and perform operations from the **Indices** view. :::{image} ../../../images/elasticsearch-reference-management_index_labels.png :alt: Index Management UI -:class: screenshot +:screenshot: ::: * To show details and perform operations such as close, forcemerge, and flush, click the index name. To perform operations on multiple indices, select their checkboxes and then open the **Manage** menu. For more information on managing indices, refer to [Index APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices). @@ -39,7 +39,7 @@ Investigate your indices and perform operations from the **Indices** view. :::{image} ../../../images/elasticsearch-reference-management_index_details.png :alt: Index Management UI - :class: screenshot + :screenshot: ::: @@ -54,7 +54,7 @@ A value in the data retention column indicates that the data stream is managed b :::{image} ../../../images/elasticsearch-reference-management-data-stream-fields.png :alt: Data stream details -:class: screenshot +:screenshot: ::: * To view more information about a data stream, such as its generation or its current index lifecycle policy, click the stream’s name. 
From this view, you can navigate to **Discover** to further explore data within the data stream. @@ -67,7 +67,7 @@ Create, edit, clone, and delete your index templates in the **Index Templates** :::{image} ../../../images/elasticsearch-reference-management-index-templates.png :alt: Index templates -:class: screenshot +:screenshot: ::: @@ -81,7 +81,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management_index_create_wizard.png :alt: Create wizard - :class: screenshot + :screenshot: ::: 2. In the **Name** field, enter `my-index-template`. @@ -98,7 +98,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management_index_component_template.png :alt: Component templates page - :class: screenshot + :screenshot: ::: 2. Define index settings. These are optional. For this tutorial, leave this section blank. @@ -106,7 +106,7 @@ In this tutorial, you’ll create an index template and use it to configure two :::{image} ../../../images/elasticsearch-reference-management-index-templates-mappings.png :alt: Mapped fields page - :class: screenshot + :screenshot: ::: Alternatively, you can click the **Load JSON** link and define the mapping as JSON: @@ -196,7 +196,7 @@ Use the **Enrich Policies** view to add data from your existing indices to incom :::{image} ../../../images/elasticsearch-reference-management-enrich-policies.png :alt: Enrich policies -:class: screenshot +:screenshot: ::: When creating an enrich policy, the UI walks you through the configuration setup and selecting the fields. Before you can use the policy with an enrich processor or {{esql}} query, you must execute the policy. diff --git a/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md b/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md index 625837f68..15d76ac59 100644 --- a/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md +++ b/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md @@ -47,7 +47,7 @@ You can create the policy through {{kib}} or with the [create or update policy]( :::{image} ../../../images/elasticsearch-reference-create-policy.png :alt: Create policy page -:class: screenshot +:screenshot: ::: ::::{dropdown} API example diff --git a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md index 47db1c63f..7b2f1b4a2 100644 --- a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md +++ b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md @@ -42,7 +42,7 @@ To complete this tutorial, you’ll need: :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-ess-add-warm-data-tier.png :alt: Add a warm data tier to your deployment - :class: screenshot + :screenshot: ::: * Self-managed cluster: Assign `data_hot` and `data_warm` roles to nodes as described in [*Data tiers*](../data-tiers.md). 
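For the self-managed path, a minimal `elasticsearch.yml` sketch of assigning those roles, assuming one dedicated hot node and one dedicated warm node; the exact role lists depend on your cluster layout:

```yaml
# elasticsearch.yml on a hot-tier node
node.roles: [ data_hot, data_content ]

# elasticsearch.yml on a warm-tier node
node.roles: [ data_warm ]
```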
@@ -80,7 +80,7 @@ To view or change the rollover settings, click **Advanced settings** for the hot :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-hotphaserollover-default.png :alt: View rollover defaults -:class: screenshot +:screenshot: ::: @@ -96,21 +96,21 @@ The default `logs@lifecycle` policy is designed to prevent the creation of many :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-modify-default-warm-phase-rollover.png :alt: Add a warm phase with custom settings - :class: screenshot + :screenshot: ::: 2. In the warm phase, click the trash icon to enable the delete phase. :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-enable-delete-phase.png :alt: Enable the delete phase - :class: screenshot + :screenshot: ::: In the delete phase, set **Move data into phase when** to **90 days old**. This deletes indices 90 days after rollover. :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-delete-rollover.png :alt: Add a delete phase - :class: screenshot + :screenshot: ::: 3. Click **Save as new policy**. diff --git a/manage-data/lifecycle/rollup/getting-started-kibana.md b/manage-data/lifecycle/rollup/getting-started-kibana.md index 099e1ac41..32d23f074 100644 --- a/manage-data/lifecycle/rollup/getting-started-kibana.md +++ b/manage-data/lifecycle/rollup/getting-started-kibana.md @@ -21,7 +21,7 @@ You can go to the **Rollup Jobs** page using the navigation menu or the [global :::{image} ../../../images/kibana-management_rollup_list.png :alt: List of currently active rollup jobs -:class: screenshot +:screenshot: ::: ## Required permissions [_required_permissions_4] @@ -38,7 +38,7 @@ When defining the index pattern, you must enter a name that is different than th :::{image} ../../../images/kibana-management_create_rollup_job.png :alt: Wizard that walks you through creation of a rollup job -:class: screenshot +:screenshot: ::: ## Start, stop, and delete rollup jobs [manage-rollup-job] @@ -47,7 +47,7 @@ Once you’ve saved a rollup job, you’ll see it the **Rollup Jobs** overview p :::{image} ../../../images/kibana-management_rollup_job_details.png :alt: Rollup job details -:class: screenshot +:screenshot: ::: You can’t change a rollup job after you’ve created it. To select additional fields or redefine terms, you must delete the existing job, and then create a new one with the updated specifications. Be sure to use a different name for the new rollup job—reusing the same name can lead to problems with mismatched job configurations. Refer to [rollup job configuration](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-put-job). @@ -106,7 +106,7 @@ Your next step is to visualize your rolled up data in a vertical bar chart. Most :::{image} ../../../images/kibana-management-create-rollup-bar-chart.png :alt: Create visualization of rolled up data - :class: screenshot + :screenshot: ::: 8. Select **Bar** in the chart type dropdown. @@ -117,5 +117,5 @@ Your next step is to visualize your rolled up data in a vertical bar chart. 
Most :::{image} ../../../images/kibana-management_rollup_job_dashboard.png :alt: Dashboard with rolled up data - :class: screenshot + :screenshot: ::: diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md index 6ab3f1e40..dcfe91be5 100644 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md +++ b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md @@ -13,7 +13,7 @@ This section covers the following topics: NodeSets are used to specify the topology of the Elasticsearch cluster. Each NodeSet represents a group of Elasticsearch nodes that share the same Elasticsearch configuration and Kubernetes Pod configuration. ::::{tip} -You can use [YAML anchors](https://yaml.org/spec/1.2/spec.md#id2765878) to declare the configuration change once and reuse it across all the node sets. +You can use [YAML anchors](https://yaml.org/spec/1.2/spec.html#id2765878) to declare the configuration change once and reuse it across all the node sets. :::: diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md b/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md deleted file mode 100644 index 87742db84..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-autoscaling.md +++ /dev/null @@ -1,125 +0,0 @@ -# Deployment autoscaling [ece-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. Check the following sections to learn more: - -* [Overview](../../../deploy-manage/autoscaling.md#ece-autoscaling-intro) -* [When does autoscaling occur?](../../../deploy-manage/autoscaling.md#ece-autoscaling-factors) -* [Notifications](../../../deploy-manage/autoscaling.md#ece-autoscaling-notifications) -* [Restrictions and limitations](../../../deploy-manage/autoscaling.md#ece-autoscaling-restrictions) -* [Enable or disable autoscaling](../../../deploy-manage/autoscaling.md#ece-autoscaling-enable) -* [Update your autoscaling settings](../../../deploy-manage/autoscaling.md#ece-autoscaling-update) - -You can also have a look at our [autoscaling example](../../../deploy-manage/autoscaling/ece-autoscaling-example.md), as well as a sample request to [create an autoscaled deployment through the API](../../../deploy-manage/autoscaling/ece-autoscaling-api-example.md). - - -## Overview [ece-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. 
When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html). - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. - -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. - * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. - - -::::{note} -For any Elastic Cloud Enterprise Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ece-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) and [Proactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. 
If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. - - -## Notifications [ece-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, a notice appears on the deployment overview page prompting you to adjust your autoscaling settings in order to ensure optimal performance. - -A warning is also issued in the ECE `service-constructor` logs with the field `labels.autoscaling_notification_type` and a value of `data-tier-at-limit` (for a fully scaled data tier) or `ml-tier-at-limit` (for a fully scaled machine learning node). The warning is indexed in the `logging-and-metrics` deployment, so you can use that event to [configure an email notification](../../../explore-analyze/alerts-cases/watcher/actions-email.md). - - -## Restrictions and limitations [ece-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. -* In the event that an override is set for the instance size or disk quota multiplier for an instance by means of the [Instance Overrides API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-all-instances-settings-overrides), autoscaling will be effectively disabled. It’s recommended to avoid adjusting the instance size or disk quota multiplier for an instance that uses autoscaling, since the setting prevents autoscaling. - - -## Enable or disable autoscaling [ece-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../../../deploy-manage/autoscaling.md#ece-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ece-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. 
[Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](../../../deploy-manage/autoscaling/ece-autoscaling-example.md) of how the autoscaling settings work. - -::::{note} -On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings. -:::: diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md index 47296125a..5072efa78 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md @@ -95,7 +95,7 @@ In this step, you’ll create a Python script that generates logs in JSON format This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event. - For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.md). + For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html). 
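The script itself appears earlier in that guide; purely as an illustrative sketch of the pattern this paragraph describes (the file name and the first message come from the tutorial, while the other events, the weights, and the formatter are placeholders rather than the guide's actual `elvis.py`), an ECS-style JSON logger can be as small as:

```python
import json
import logging
import random
import time


class JsonFormatter(logging.Formatter):
    """Emit one ECS-style JSON object per line (illustrative, not the guide's actual formatter)."""

    def format(self, record):
        return json.dumps({
            "@timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S%z"),
            "log.level": record.levelname.lower(),
            "message": record.getMessage(),
        })


logger = logging.getLogger("elvis")
handler = logging.FileHandler("elvis.json")  # single log file in the working directory, as in the guide
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A few placeholder events; the real script cycles through twelve messages,
# with the info-level "Elvis has left the building" the most likely.
events = [
    (logging.INFO, "Elvis has left the building"),
    (logging.WARNING, "Elvis is wandering the halls"),
    (logging.ERROR, "Elvis cannot find his microphone"),
]

while True:
    level, message = random.choices(events, weights=[6, 2, 2])[0]
    logger.log(level, message)
    time.sleep(random.randint(1, 10))  # random 1-10 second interval between events
```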
Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md b/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md index 0f5cb7449..8da403a0e 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md @@ -151,7 +151,7 @@ link_id: "" # no value :::{image} ../../../images/cloud-enterprise-ce-traffic-filter-ip-rejected-request.png :alt: Show rejected request in the proxy logs -:class: screenshot +:screenshot: ::: To allow such a request to come through the traffic filter, you would register an IP traffic filter with the source IP address `192.168.255.6`, or a matching CIDR mask, e.g. `192.168.255.0/24`. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md b/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md deleted file mode 100644 index b42cda57c..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-autoscaling.md +++ /dev/null @@ -1,118 +0,0 @@ -# Deployment autoscaling [ech-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. Check the following sections to learn more: - -* [Overview](../../../deploy-manage/autoscaling.md#ech-autoscaling-intro) -* [When does autoscaling occur?](../../../deploy-manage/autoscaling.md#ech-autoscaling-factors) -* [Notifications](../../../deploy-manage/autoscaling.md#ech-autoscaling-notifications) -* [Restrictions and limitations](../../../deploy-manage/autoscaling.md#ech-autoscaling-restrictions) -* [Enable or disable autoscaling](../../../deploy-manage/autoscaling.md#ech-autoscaling-enable) -* [Update your autoscaling settings](../../../deploy-manage/autoscaling.md#ech-autoscaling-update) - -You can also have a look at our [autoscaling example](../../../deploy-manage/autoscaling/ech-autoscaling-example.md). - - -## Overview [ech-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. 
When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your index lifecycle management policies. - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. - -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. - * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. - - -::::{note} -For any Elasticsearch Add-On for Heroku Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ech-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) and [Proactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. 
Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. - - -## Notifications [ech-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance. - - -## Restrictions and limitations [ech-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. - - -## Enable or disable autoscaling [ech-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../../../deploy-manage/autoscaling.md#ech-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ech-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. 
This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](../../../deploy-manage/autoscaling/ech-autoscaling-example.md) of how the autoscaling settings work. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vnet.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vnet.md index 45495968b..42d5297f9 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vnet.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vnet.md @@ -104,12 +104,12 @@ Follow these high-level steps to add Private Link rules to your deployments. :::{image} ../../../images/cloud-heroku-ec-private-link-azure-json-view.png :alt: Private endpoint JSON View -:class: screenshot +:screenshot: ::: :::{image} ../../../images/cloud-heroku-ec-private-link-azure-properties.png :alt: Private endpoint Properties -:class: screenshot +:screenshot: ::: @@ -273,7 +273,7 @@ This means your deployment on Elastic Cloud can be in a different region than th :::{image} ../../../images/cloud-heroku-ce-azure-inter-region-pl.png :alt: Inter-region Private Link -:class: screenshot +:screenshot: ::: 1. Set up Private Link Endpoint in region 1 for a deployment hosted in region 2. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md index eb6f03094..ade532b0f 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md @@ -5,7 +5,7 @@ Traffic filtering, to only AWS PrivateLink connections, is one of the security l Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Add-On for Heroku. ::::{note} -PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-availability-zones) for more details. +PrivateLink filtering is supported only for AWS regions. 
AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-availability-zones) for more details. :::: @@ -96,13 +96,13 @@ The mapping will be different for your region. Our production VPC Service for `u 1. Create a VPC endpoint in your VPC using the service name for your region. - Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. + Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. Use [the service name for your region](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ech-private-link-service-names-aliases). :::{image} ../../../images/cloud-heroku-ec-private-link-service.png :alt: PrivateLink - :class: screenshot + :screenshot: ::: The security group for the endpoint should at minimum allow for inbound connectivity from your instances' CIDR range on ports 443 and 9243. Security groups for the instances should allow for outbound connectivity to the endpoint on ports 443 and 9243. @@ -113,16 +113,16 @@ The mapping will be different for your region. Our production VPC Service for `u :::{image} ../../../images/cloud-heroku-ec-private-link-private-hosted-zone-example.png :alt: Private hosted zone example - :class: screenshot + :screenshot: ::: 2. Then create a DNS CNAME alias pointing to the PrivateLink Endpoint. Add the record to a private DNS zone in your VPC. Use `*` as the record name, and the VPC endpoint DNS name as a value. - Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.md) for details on creating a CNAME record which points to your VPC endpoint DNS name. + Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) for details on creating a CNAME record which points to your VPC endpoint DNS name. :::{image} ../../../images/cloud-heroku-ec-private-link-cname.png :alt: PrivateLink CNAME - :class: screenshot + :screenshot: ::: 3. Test the connection. @@ -179,7 +179,7 @@ Having trouble finding your VPC endpoint ID? You can find it in the AWS console.
:::{image} ../../../images/cloud-heroku-ec-private-link-endpoint-id.png :alt: VPC Endpoint ID -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/cloud/cloud/ec-autoscaling.md b/raw-migrated-files/cloud/cloud/ec-autoscaling.md deleted file mode 100644 index a1399649a..000000000 --- a/raw-migrated-files/cloud/cloud/ec-autoscaling.md +++ /dev/null @@ -1,120 +0,0 @@ -# Deployment autoscaling [ec-autoscaling] - -Autoscaling helps you to more easily manage your deployments by adjusting their available resources automatically, and currently supports scaling for both data and machine learning nodes, or machine learning nodes only. Check the following sections to learn more: - -* [Overview](../../../deploy-manage/autoscaling.md#ec-autoscaling-intro) -* [When does autoscaling occur?](../../../deploy-manage/autoscaling.md#ec-autoscaling-factors) -* [Notifications](../../../deploy-manage/autoscaling.md#ec-autoscaling-notifications) -* [Restrictions and limitations](../../../deploy-manage/autoscaling.md#ec-autoscaling-restrictions) -* [Enable or disable autoscaling](../../../deploy-manage/autoscaling.md#ec-autoscaling-enable) -* [Update your autoscaling settings](../../../deploy-manage/autoscaling.md#ec-autoscaling-update) - -You can also have a look at our [autoscaling example](../../../deploy-manage/autoscaling/ec-autoscaling-example.md), as well as a sample request to [create an autoscaled deployment through the API](../../../deploy-manage/autoscaling/ec-autoscaling-api-example.md). - - -## Overview [ec-autoscaling-intro] - -When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention. - -::::{note} -Autoscaling is enabled for the Machine Learning tier by default for new deployments. -:::: - - -Currently, autoscaling behavior is as follows: - -* **Data tiers** - - * Each Elasticsearch [data tier](../../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. - * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](../../../manage-data/lifecycle/index-lifecycle-management.md). - * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field. - * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. 
- -* **Machine learning nodes** - - * Machine learning nodes can scale upward and downward based on the configured machine learning jobs. - * When a machine learning job is opened, or a machine learning trained model is deployed, if there are no machine learning nodes in your deployment, the autoscaling mechanism will automatically add machine learning nodes. Similarly, after a period of no active machine learning jobs, any enabled machine learning nodes are disabled automatically. - * To control the maximum size of your machine learning nodes and ensure they will not scale above a certain size, you can use the maximum size per zone field. - * To control the minimum size of your machine learning nodes and ensure the autoscaling mechanism will not scale machine learning below a certain size, you can use the minimum size per zone field. - * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. - - -::::{note} -The number of availability zones for each component of your {{ech}} deployments is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. -:::: - - - -## When does autoscaling occur? [ec-autoscaling-factors] - -Several factors determine when data tiers or machine learning nodes are scaled. - -For a data tier, an autoscaling event can be triggered in the following cases: - -* Based on an assessment of how shards are currently allocated, and the amount of storage and buffer space currently available. - -When past behavior on a hot tier indicates that the influx of data can increase significantly in the near future. Refer to [Reactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) and [Proactive storage decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. - -* Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. - -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](../../../deploy-manage/autoscaling/autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-create-job). - -On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. 
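For context, the {{es}}-level building block behind these decisions is an autoscaling policy that pairs the `ml` role with the machine learning decider. On {{ech}} such policies are managed for you (the autoscaling API is intended for indirect use by the platform), so the following is only a sketch of what that configuration looks like, with illustrative values:

```console
PUT _autoscaling/policy/ml-policy
{
  "roles": [ "ml" ],
  "deciders": {
    "ml": {
      "num_anomaly_jobs_in_queue": 5,
      "num_analytics_jobs_in_queue": 5,
      "down_scale_delay": "1h"
    }
  }
}
```

A subsequent `GET _autoscaling/capacity` call reports whether the currently configured jobs fit the existing machine learning capacity, which is the signal the orchestrator uses to scale the tier up or down.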
- - -## Notifications [ec-autoscaling-notifications] - -In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance. - - -## Restrictions and limitations [ec-autoscaling-restrictions] - -The following are known limitations and restrictions with autoscaling: - -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. -* Trial deployments cannot be configured to autoscale beyond the normal Trial deployment size limits. The maximum size per zone is increased automatically from the Trial limit when you convert to a paid subscription. -* ELSER deployments do not scale automatically. For more information, refer to [ELSER](../../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) and [Trained model autoscaling](../../../explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md). - - -## Enable or disable autoscaling [ec-autoscaling-enable] - -To enable or disable autoscaling on a deployment: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. -5. Select **Confirm** to have the autoscaling change and any other settings take effect. All plan changes are shown on the Deployment **Activity** page. - -When autoscaling has been enabled, the autoscaled nodes resize according to the [autoscaling settings](../../../deploy-manage/autoscaling.md#ec-autoscaling-update). Current sizes are shown on the deployment overview page. - -When autoscaling has been disabled, you need to adjust the size of data tiers and machine learning nodes manually. - - -## Update your autoscaling settings [ec-autoscaling-update] - -Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In your deployment menu, select **Edit**. -4. To update a data tier: - - 1. Use the dropdown box to set the **Maximum size per zone** to the largest amount of resources that should be allocated to the data tier automatically. The resources will not scale above this value. - 2. You can also update the **Current size per zone**. If you update this setting to match the **Maximum size per zone**, the data tier will remain fixed at that size. - 3. For a hot data tier you can also adjust the **Forecast window**. This is the duration of time, up to the present, for which past storage usage is assessed in order to predict when additional storage is needed. - 4. Select **Save** to apply the changes to your deployment. - -5. To update machine learning nodes: - - 1. 
Use the dropdown box to set the **Minimum size per zone** and **Maximum size per zone** to the smallest and largest amount of resources, respectively, that should be allocated to the nodes automatically. The resources allocated to machine learning will not exceed these values. If you set these two settings to the same value, the machine learning node will remain fixed at that size. - 2. Select **Save** to apply the changes to your deployment. - - -You can also view our [example](../../../deploy-manage/autoscaling/ec-autoscaling-example.md) of how the autoscaling settings work. diff --git a/raw-migrated-files/cloud/cloud/ec-faq-technical.md b/raw-migrated-files/cloud/cloud/ec-faq-technical.md index 6685499dc..501b1f8b1 100644 --- a/raw-migrated-files/cloud/cloud/ec-faq-technical.md +++ b/raw-migrated-files/cloud/cloud/ec-faq-technical.md @@ -2,14 +2,6 @@ This frequently-asked-questions list answers some of your more common questions about configuring {{ech}}. -* [Can I implement a Hot-Warm architecture?](../../../deploy-manage/index.md#faq-hw-architecture) -* [What about dedicated master nodes?](../../../deploy-manage/index.md#faq-master-nodes) -* [Can I use a Custom SSL certificate?](../../../deploy-manage/index.md#faq-ssl) -* [Can {{ech}} autoscale?](../../../deploy-manage/index.md#faq-autoscale) -* [Do you support IP sniffing?](../../../deploy-manage/index.md#faq-ip-sniffing) -* [Does {{ech}} support encryption at rest?](../../../deploy-manage/index.md#faq-encryption-at-rest) -* [Can I find the static IP addresses for my endpoints on {{ech}}?](../../../deploy-manage/index.md#faq-static-ip-elastic-cloud) - $$$faq-hw-architecture$$$Can I implement a hot-warm architecture? : [*hot-warm architecture*](https://www.elastic.co/blog/hot-warm-architecture) refers to an Elasticsearch setup for larger time-data analytics use cases with two different types of nodes, hot and warm. {{ech}} supports hot-warm architectures in all of the solutions provided by allowing you to add warm nodes to any of your deployments. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md index 48028eabf..46314e77a 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md @@ -98,7 +98,7 @@ In this step, you’ll create a Python script that generates logs in JSON format This Python script randomly generates one of twelve log messages, continuously, at a random interval of between 1 and 10 seconds. The log messages are written to file `elvis.json`, each with a timestamp, a log level of *info*, *warning*, *error*, or *critical*, and other data. Just to add some variance to the log data, the *info* message *Elvis has left the building* is set to be the most probable log event. - For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.md). + For simplicity, there is just one log file and it is written to the local directory where `elvis.py` is located. 
In a production environment you may have multiple log files, associated with different modules and loggers, and likely stored in `/var/log` or similar. To learn more about configuring logging in Python, check [Logging facility for Python](https://docs.python.org/3/library/logging.html). Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md index b2c967468..8640740b7 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md @@ -104,12 +104,12 @@ Follow these high-level steps to add Private Link rules to your deployments. :::{image} ../../../images/cloud-ec-private-link-azure-json-view.png :alt: Private endpoint JSON View -:class: screenshot +:screenshot: ::: :::{image} ../../../images/cloud-ec-private-link-azure-properties.png :alt: Private endpoint Properties -:class: screenshot +:screenshot: ::: @@ -274,7 +274,7 @@ This means your deployment on Elastic Cloud can be in a different region than th :::{image} ../../../images/cloud-ce-azure-inter-region-pl.png :alt: Inter-region Private Link -:class: screenshot +:screenshot: ::: 1. Set up Private Link Endpoint in region 1 for a deployment hosted in region 2. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md index 417eb32ca..d337ccb85 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md @@ -5,7 +5,7 @@ Traffic filtering, to only AWS PrivateLink connections, is one of the security l Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. ::::{note} -PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-availability-zones) for more details. +PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). 
In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-availability-zones) for more details. :::: @@ -96,13 +96,13 @@ The mapping will be different for your region. Our production VPC Service for `u 1. Create a VPC endpoint in your VPC using the service name for your region. - Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. + Follow the [AWS instructions](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) for details on creating a VPC interface endpoint to an endpoint service. Use [the service name for your region](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ec-private-link-service-names-aliases). :::{image} ../../../images/cloud-ec-private-link-service.png :alt: PrivateLink - :class: screenshot + :screenshot: ::: The security group for the endpoint should at minimum allow for inbound connectivity from your instances' CIDR range on ports 443 and 9243. Security groups for the instances should allow for outbound connectivity to the endpoint on ports 443 and 9243. @@ -113,16 +113,16 @@ The mapping will be different for your region. Our production VPC Service for `u :::{image} ../../../images/cloud-ec-private-link-private-hosted-zone-example.png :alt: Private hosted zone example - :class: screenshot + :screenshot: ::: 2. Then create a DNS CNAME alias pointing to the PrivateLink Endpoint. Add the record to a private DNS zone in your VPC. Use `*` as the record name, and the VPC endpoint DNS name as a value. - Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.md) for details on creating a CNAME record which points to your VPC endpoint DNS name. + Follow the [AWS instructions](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-creating.html) for details on creating a CNAME record which points to your VPC endpoint DNS name. :::{image} ../../../images/cloud-ec-private-link-cname.png :alt: PrivateLink CNAME - :class: screenshot + :screenshot: ::: 3. Test the connection. @@ -179,7 +179,7 @@ Having trouble finding your VPC endpoint ID? You can find it in the AWS console.
:::{image} ../../../images/cloud-ec-private-link-endpoint-id.png :alt: VPC Endpoint ID -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md b/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md index 0076e08e5..699b9dfdb 100644 --- a/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md +++ b/raw-migrated-files/docs-content/serverless/_cloud_native_vulnerability_management_dashboard.md @@ -9,7 +9,7 @@ The Cloud Native Vulnerability Management (CNVM) dashboard gives you an overview :::{image} ../../../images/serverless--cloud-native-security-vuln-management-dashboard.png :alt: The CNVM dashboard -:class: screenshot +:screenshot: ::: ::::{admonition} Requirements diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md b/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md index 58ef5dc03..12c6eca8e 100644 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md +++ b/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md @@ -26,14 +26,14 @@ You’ll find a link to the Data Visualizer on the {{es}} **Getting Started** pa :::{image} ../../../images/serverless-file-data-visualizer-homepage-link.png :alt: data visualizer link -:class: screenshot +:screenshot: ::: Clicking **Upload a file** opens the Data Visualizer UI. :::{image} ../../../images/serverless-file-uploader-UI.png :alt: File upload UI -:class: screenshot +:screenshot: ::: Drag a file into the upload area or click **Select or drag and drop a file** to choose a file from your computer. diff --git a/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md b/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md deleted file mode 100644 index 466bed89d..000000000 --- a/raw-migrated-files/docs-content/serverless/general-ml-nlp-auto-scale.md +++ /dev/null @@ -1,117 +0,0 @@ -# Trained model autoscaling [general-ml-nlp-auto-scale] - -This content applies to: [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md) [![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md) [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) - -You can enable autoscaling for each of your trained model deployments. Autoscaling allows {{es}} to automatically adjust the resources the model deployment can use based on the workload demand. - -There are two ways to enable autoscaling: - -* through APIs by enabling adaptive allocations -* in Kibana by enabling adaptive resources - -Trained model autoscaling is available for both serverless and Cloud deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits. - -Security and Observability projects are only charged for data ingestion and retention. They are not charged for processing power (VCU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results. 
- - -## Enabling autoscaling through APIs - adaptive allocations [enabling-autoscaling-through-apis-adaptive-allocations] - -Model allocations are independent units of work for NLP tasks. If you set a static number of allocations, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.) - -When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, additional model allocations are automatically created as needed. When the load is low, a model allocation is automatically removed. You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits. - -::::{note} -If you set the minimum number of allocations to 1, you will be charged even if the system is not using those resources. - -:::: - - -You can enable adaptive allocations by using: - -* the create inference endpoint API for [ELSER](../../../explore-analyze/elastic-inference/inference-api/elser-inference-integration.md ), [E5 and models uploaded through Eland](../../../explore-analyze/elastic-inference/inference-api/elasticsearch-inference-integration.md) that are used as inference services. -* the [start trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-trained-model-deployment) or [update trained model deployment](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-update-trained-model-deployment) APIs for trained models that are deployed on machine learning nodes. - -If the new allocations fit on the current machine learning nodes, they are immediately started. If more resource capacity is needed for creating new model allocations, then your machine learning node will be scaled up if machine learning autoscaling is enabled to provide enough resources for the new allocation. The number of model allocations can be scaled down to 0. They cannot be scaled up to more than 32 allocations, unless you explicitly set the maximum number of allocations to more. Adaptive allocations must be set up independently for each deployment and [inference endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put). - -When you create inference endpoints on Serverless using Kibana, adaptive allocations are automatically turned on, and there is no option to disable them. - - -### Optimizing for typical use cases [optimizing-for-typical-use-cases] - -You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of inference requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes. - -* If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`). -* If you want to optimize for search, set the number of threads to greater than `1`. Increasing the number of threads will make the search processes more performant. 
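As a concrete sketch of the API route described above (the deployment ID and allocation bounds below are placeholders, and exact parameter placement can differ between {{es}} versions), adaptive allocations can be requested when starting a trained model deployment:

```console
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser-ingest&threads_per_allocation=1
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 8
  }
}
```

A similar `adaptive_allocations` object is accepted in the `service_settings` of the create inference endpoint API and by the update trained model deployment API, although the supported fields can vary by version.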
- - -## Enabling autoscaling in {{kib}} - adaptive resources [enabling-autoscaling-in-kibana-adaptive-resources] - -You can enable adaptive resources for your models when starting or updating the model deployment. Adaptive resources make it possible for {{es}} to scale up or down the available resources based on the load on the process. This can help you to manage performance and cost more easily. When adaptive resources are enabled, the number of VCUs that the model deployment uses is set automatically based on the current load. When the load is high, the number of VCUs that the process can use is automatically increased. When the load is low, the number of VCUs that the process can use is automatically decreased. - -You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level’s range. - -Refer to the tables in the auto-scaling-matrix section to find out the settings for the level you selected. - -:::{image} ../../../images/serverless-ml-nlp-deployment.png -:alt: ML model deployment with adaptive resources enabled. -::: - -Search projects are given access to more processing resources, while Security and Observability projects have lower limits. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations. - -On Serverless, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in Kibana for Observability and Security projects. - - -## Model deployment resource matrix [model-deployment-resource-matrix] - -The used resources for trained model deployments depend on three factors: - -* your cluster environment (Serverless, Cloud, or on-premises) -* the use case you optimize the model deployment for (ingest or search) -* whether model autoscaling is enabled with adaptive allocations/resources to have dynamic resources, or disabled for static resources - -The following tables show you the number of allocations, threads, and VCUs available on Serverless when adaptive resources are enabled or disabled. - - -### Deployments on serverless optimized for ingest [deployments-on-serverless-optimized-for-ingest] - -In case of ingest-optimized deployments, we maximize the number of model allocations. - - -#### Adaptive resources enabled [adaptive-resources-enabled] - -| Level | Allocations | Threads | VCUs | -| --- | --- | --- | --- | -| Low | 0 to 2 dynamically | 1 | 0 to 16 dynamically | -| Medium | 1 to 32 dynamically | 1 | 8 to 256 dynamically | -| High | 1 to 512 for Search
1 to 128 for Security and Observability
| 1 | 8 to 4096 for Search
8 to 1024 for Security and Observability
| - - -#### Adaptive resources disabled (Search only) [adaptive-resources-disabled-search-only] - -| Level | Allocations | Threads | VCUs | -| --- | --- | --- | --- | -| Low | Exactly 2 | 1 | 16 | -| Medium | Exactly 32 | 1 | 256 | -| High | 512 for Search
No static allocations for Security and Observability
| 1 | 4096 for Search
No static allocations for Security and Observability
| - - -### Deployments on serverless optimized for Search [deployments-on-serverless-optimized-for-search] - - -#### Adaptive resources enabled [adaptive-resources-enabled-for-search] - -| Level | Allocations | Threads | VCUs | -| --- | --- | --- | --- | -| Low | 0 to 1 dynamically | Always 2 | 0 to 16 dynamically | -| Medium | 1 to 2 (if threads=16), dynamically | Maximum (for example, 16) | 8 to 256 dynamically | -| High | 1 to 32 (if threads=16), dynamically
1 to 128 for Security and Observability
| Maximum (for example, 16) | 8 to 4096 for Search
8 to 1024 for Security and Observability
| - - -#### Adaptive resources disabled [adaptive-resources-disabled-for-search] - -| Level | Allocations | Threads | VCUs | -| --- | --- | --- | --- | -| Low | 1 statically | Always 2 | 16 | -| Medium | 2 statically (if threads=16) | Maximum (for example, 16) | 256 | -| High | 32 statically (if threads=16) for Search
No static allocations for Security and Observability
| Maximum (for example, 16) | 4096 for Search
No static allocations for Security and Observability
| - diff --git a/raw-migrated-files/docs-content/serverless/index-management.md b/raw-migrated-files/docs-content/serverless/index-management.md index 1e850ce92..2aed223d3 100644 --- a/raw-migrated-files/docs-content/serverless/index-management.md +++ b/raw-migrated-files/docs-content/serverless/index-management.md @@ -11,7 +11,7 @@ Go to **{{project-settings}} → {{manage-app}} → {{index-manage-app}}**: :::{image} ../../../images/serverless-index-management-indices.png :alt: {{index-manage-app}} UI -:class: screenshot +:screenshot: ::: The **{{index-manage-app}}** page contains an overview of your indices. @@ -35,7 +35,7 @@ This value is the time period for which your data is guaranteed to be stored. Da :::{image} ../../../images/serverless-management-data-stream.png :alt: Data stream details -:class: screenshot +:screenshot: ::: To view information about the stream’s backing indices, click the number in the **Indices** column. @@ -50,7 +50,7 @@ Create, edit, clone, and delete your index templates in the **Index Templates** :::{image} ../../../images/serverless-index-management-index-templates.png :alt: Index templates -:class: screenshot +:screenshot: ::: The default **logs** template uses the logsDB index mode to create a [logs data stream](../../../manage-data/data-store/data-streams/logs-data-stream.md). @@ -70,7 +70,7 @@ Use the **Enrich Policies** view to add data from your existing indices to incom :::{image} ../../../images/serverless-management-enrich-policies.png :alt: Enrich policies -:class: screenshot +:screenshot: ::: When creating an enrich policy, the UI walks you through the configuration setup and selecting the fields. Before you can use the policy with an enrich processor, you must execute the policy. diff --git a/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md b/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md index 6f6b59d86..4fa495579 100644 --- a/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md +++ b/raw-migrated-files/docs-content/serverless/observability-ai-assistant.md @@ -7,7 +7,7 @@ The AI Assistant uses generative AI to provide: :::{image} ../../../images/serverless-ai-assistant-overview.gif :alt: Observability AI assistant preview -:class: screenshot +:screenshot: ::: The AI Assistant integrates with your large language model (LLM) provider through our supported Elastic connectors: @@ -61,7 +61,7 @@ To set up the AI Assistant: * [OpenAI API keys](https://platform.openai.com/docs/api-reference) * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) - * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.md) + * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get) 2. 
From **Project settings** → **Management** → **Connectors**, create a connector for your AI provider: @@ -180,7 +180,7 @@ This opens the AI Assistant flyout, where you can ask the assistant questions ab :::{image} ../../../images/serverless-ai-assistant-chat.png :alt: Observability AI assistant chat -:class: screenshot +:screenshot: ::: ::::{important} @@ -235,14 +235,14 @@ For example, in the log details, you’ll see prompts for **What’s this messag :::{image} ../../../images/serverless-ai-assistant-logs-prompts.png :alt: Observability AI assistant example prompts for logs -:class: screenshot +:screenshot: ::: Clicking a prompt generates a message specific to that log entry. You can continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat. :::{image} ../../../images/serverless-ai-assistant-logs.png :alt: Observability AI assistant example -:class: screenshot +:screenshot: ::: @@ -257,7 +257,7 @@ You can use the [Observability AI Assistant connector](kibana://reference/connec :::{image} ../../../images/serverless-obs-ai-assistant-action-high-cpu.png :alt: Add an Observability AI assistant action while creating a rule in the Observability UI -:class: screenshot +:screenshot: ::: You can ask the assistant to generate a report of the alert that fired, recall any information or potential resolutions of past occurrences stored in the knowledge base, provide troubleshooting guidance and resolution steps, and also include other active alerts that may be related. As a last step, you can ask the assistant to trigger an action, such as sending the report (or any other message) to a Slack webhook. @@ -274,7 +274,7 @@ When the alert fires, contextual details about the event—such as when the aler :::{image} ../../../images/serverless-obs-ai-assistant-output.png :alt: AI Assistant conversation created in response to an alert -:class: screenshot +:screenshot: ::: ::::{important} @@ -291,7 +291,7 @@ When asked to send a message to another connector, such as Slack, the AI Assista :::{image} ../../../images/serverless-obs-ai-assistant-slack-message.png :alt: Message sent by Slack by the AI Assistant includes a link to the conversation -:class: screenshot +:screenshot: ::: The Observability AI Assistant connector is called when the alert fires and when it recovers. diff --git a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md index c933294f7..52938950e 100644 --- a/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md +++ b/raw-migrated-files/docs-content/serverless/observability-stream-log-files.md @@ -11,12 +11,12 @@ The **Admin** role or higher is required to onboard log data. To learn more, ref
:::{image} ../../../images/serverless-logs-stream-logs-api-key-beats.png :alt: logs stream logs api key beats -:class: screenshot +:screenshot: ::: :::{image} ../../../images/serverless-log-copy-es-endpoint.png :alt: Copy a project's Elasticsearch endpoint -:class: screenshot +:screenshot: :::
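The API key and {{es}} endpoint shown in the screenshots above are the two values that go into the Beats output configuration. As a rough sketch only (the host URL and key below are placeholders, not values from a real project), the relevant part of `filebeat.yml` would look like:

```yaml
output.elasticsearch:
  # Elasticsearch endpoint copied from the project (placeholder value)
  hosts: ["https://my-project.es.example.elastic.cloud:443"]
  # API key in "id:api_key" format, created in the previous step (placeholder value)
  api_key: "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA"
```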
@@ -257,7 +257,7 @@ You need to set the values for the following fields: :::{image} ../../../images/serverless-log-copy-es-endpoint.png :alt: Copy a project's Elasticsearch endpoint - :class: screenshot + :screenshot: ::: ::::: @@ -297,7 +297,7 @@ You need to set the values for the following fields: :::{image} ../../../images/serverless-logs-stream-logs-api-key-beats.png :alt: logs stream logs api key beats - :class: screenshot + :screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-about-rules.md b/raw-migrated-files/docs-content/serverless/security-about-rules.md index fbe959915..f062e5c86 100644 --- a/raw-migrated-files/docs-content/serverless/security-about-rules.md +++ b/raw-migrated-files/docs-content/serverless/security-about-rules.md @@ -37,7 +37,7 @@ You can create the following types of rules: :::{image} ../../../images/serverless--detections-all-rules.png :alt: Shows the Rules page -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-ai-assistant.md b/raw-migrated-files/docs-content/serverless/security-ai-assistant.md index 04b7b0346..505664a01 100644 --- a/raw-migrated-files/docs-content/serverless/security-ai-assistant.md +++ b/raw-migrated-files/docs-content/serverless/security-ai-assistant.md @@ -4,7 +4,7 @@ The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity op :::{image} ../../../images/serverless-assistant-basic-view.png :alt: Image of AI Assistant chat window -:class: screenshot +:screenshot: ::: ::::{important} @@ -55,7 +55,7 @@ To open AI Assistant, select the **AI Assistant** button in the top toolbar from :::{image} ../../../images/serverless-ai-assistant-button.png :alt: AI Assistant button -:class: screenshot +:screenshot: ::: This opens the **Welcome** chat interface, where you can ask general questions about {{elastic-sec}}. @@ -82,14 +82,14 @@ Use these features to adjust and act on your conversations with AI Assistant: :::{image} ../../../images/serverless-quick-prompts.png :alt: Quick Prompts highlighted below a conversation - :class: screenshot + :screenshot: ::: * System Prompts and Quick Prompts can also be configured from the corresponding tabs on the **Security AI settings** page. :::{image} ../../../images/serverless-assistant-settings-system-prompts.png :alt: The Security AI settings menu's System Prompts tab - :class: screenshot + :screenshot: ::: * Quick Prompt availability varies based on context—for example, the **Alert summarization** Quick Prompt appears when you open AI Assistant while viewing an alert. To customize existing Quick Prompts and create new ones, click **Add Quick Prompt**. @@ -141,7 +141,7 @@ You can access anonymization settings directly from the **Attack Discovery** pag :::{image} ../../../images/serverless-assistant-anonymization-menu.png :alt: AI Assistant's settings menu -:class: screenshot +:screenshot: ::: The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. It doesn’t control what gets obfuscated — that’s determined by the anonymization settings. It also doesn’t affect how event fields appear *before* being sent to AI Assistant. Instead, it controls how fields that were already sent and obfuscated appear to you. 
diff --git a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md b/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md index 0ab37ac9d..7d51d50b8 100644 --- a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md +++ b/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md @@ -16,7 +16,7 @@ Attack Discovery can detect a wide range of threats by finding relationships amo :::{image} ../../../images/serverless-attck-disc-11-alerts-disc.png :alt: An Attack discovery card showing an attack with 11 related alerts -:class: screenshot +:screenshot: ::: In the example above, Attack discovery found connections between thirteen alerts, and used them to identify and describe an attack chain. @@ -30,14 +30,14 @@ From a discovery on the Attack discovery page, click **View in AI Assistant** to :::{image} ../../../images/serverless-attck-disc-remediate-threat.gif :alt: A dialogue with AI Assistant that has the attack discovery as context -:class: screenshot +:screenshot: ::: AI Assistant can quickly compile essential data and provide suggestions to help you generate an incident report and plan an effective response. You can ask it to provide relevant data or answer questions, such as “How can I remediate this threat?” or “What {{esql}} query would isolate actions taken by this user?” :::{image} ../../../images/serverless-attck-disc-esql-query-gen-example.png :alt: An AI Assistant dialogue in which the user asks for a purpose-built ES|QL query -:class: screenshot +:screenshot: ::: The image above shows an {{esql}} query generated by AI Assistant in response to a user prompt. Learn more about [using AI Assistant for ES|QL](../../../solutions/security/ai/generate-customize-learn-about-esorql-queries.md). @@ -56,7 +56,7 @@ If you add a message that contains a discovery to a case, AI Assistant automatic :::{image} ../../../images/serverless-attck-disc-translate-japanese.png :alt: An AI Assistant dialogue in which the assistant translates from English to Japanese -:class: screenshot +:screenshot: ::: AI Assistant can translate its findings into other human languages, helping to enable collaboration among global security teams, and making it easier to operate within multilingual organizations. diff --git a/raw-migrated-files/docs-content/serverless/security-alerts-manage.md b/raw-migrated-files/docs-content/serverless/security-alerts-manage.md index 239387c92..403c006cc 100644 --- a/raw-migrated-files/docs-content/serverless/security-alerts-manage.md +++ b/raw-migrated-files/docs-content/serverless/security-alerts-manage.md @@ -9,7 +9,7 @@ The Alerts page displays all detection alerts. :::{image} ../../../images/serverless--detections-alert-page.png :alt: Alerts page overview -:class: screenshot +:screenshot: ::: @@ -35,7 +35,7 @@ The Alerts page offers various ways for you to organize and triage detection ale :::{image} ../../../images/serverless--detections-additional-filters.png :alt: Alerts table with Additional filters menu highlighted - :class: screenshot + :screenshot: ::: ::::{note} @@ -52,7 +52,7 @@ By default, the drop-down controls on the Alerts page filter alerts by **Status* :::{image} ../../../images/serverless--detections-alert-page-dropdown-controls.png :alt: Alerts page with drop-down controls highlighted -:class: screenshot +:screenshot: ::: ::::{note} @@ -89,7 +89,7 @@ Select up to three fields for grouping alerts. 
The groups will nest in the order :::{image} ../../../images/serverless--detections-group-alerts.png :alt: Alerts table with Group alerts by drop-down -:class: screenshot +:screenshot: ::: Each group displays information such as the alerts' severity and how many users, hosts, and alerts are in the group. The information displayed varies depending on the selected fields. @@ -101,7 +101,7 @@ To interact with grouped alerts: :::{image} ../../../images/serverless--detections-group-alerts-expand.png :alt: Expanded alert group with alerts table - :class: screenshot + :screenshot: ::: @@ -118,7 +118,7 @@ Click the **Full screen** button in the upper-right to view the table in full-sc :::{image} ../../../images/serverless--detections-alert-table-toolbar-buttons.png :alt: Alerts table with toolbar buttons highlighted -:class: screenshot +:screenshot: ::: Use the view options drop-down in the upper-right of the Alerts table to control how alerts are displayed: @@ -128,7 +128,7 @@ Use the view options drop-down in the upper-right of the Alerts table to control :::{image} ../../../images/serverless--detections-event-rendered-view.png :alt: Alerts table with the Event rendered view enabled -:class: screenshot +:screenshot: ::: ::::{tip} @@ -197,7 +197,7 @@ To apply or remove alert tags on multiple alerts, select the alerts you want to :::{image} ../../../images/serverless--detections-bulk-apply-alert-tag.png :alt: Bulk action menu with multiple alerts selected -:class: screenshot +:screenshot: ::: @@ -230,14 +230,14 @@ Show users that have been assigned to alerts by adding the **Assignees** column :::{image} ../../../images/serverless--detections-alert-assigned-alerts.png :alt: Alert assignees in the Alerts table -:class: screenshot +:screenshot: ::: Assigned users are automatically displayed in the alert details flyout. Up to two assigned users can be shown in the flyout. If an alert is assigned to three or more users, a numbered badge displays instead. :::{image} ../../../images/serverless--detections-alert-flyout-assignees.png :alt: Alert assignees in the alert details flyout -:class: screenshot +:screenshot: ::: @@ -247,7 +247,7 @@ Click the **Assignees** filter above the Alerts table, then select the users you :::{image} ../../../images/serverless--detections-alert-filter-assigned-alerts.png :alt: Filtering assigned alerts -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-building-block-rules.md b/raw-migrated-files/docs-content/serverless/security-building-block-rules.md index bef6002fe..30e94b21a 100644 --- a/raw-migrated-files/docs-content/serverless/security-building-block-rules.md +++ b/raw-migrated-files/docs-content/serverless/security-building-block-rules.md @@ -12,7 +12,7 @@ To create a rule that searches alert indices, select **Index Patterns** as the r :::{image} ../../../images/serverless--detections-alert-indices-ui.png :alt: detections alert indices ui -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md index c75e24ecc..8f3b15d1a 100644 --- a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md +++ b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md @@ -131,7 +131,7 @@ Finally, configure the connector in {{kib}}: Your LLM connector is now configured. 
For more information on using Elastic AI Assistant, refer to [AI Assistant](https://docs.elastic.co/security/ai-assistant). ::::{important} -If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.md), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. +If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. :::: diff --git a/raw-migrated-files/docs-content/serverless/security-detection-engine-overview.md b/raw-migrated-files/docs-content/serverless/security-detection-engine-overview.md index 54f4b170f..215712e7b 100644 --- a/raw-migrated-files/docs-content/serverless/security-detection-engine-overview.md +++ b/raw-migrated-files/docs-content/serverless/security-detection-engine-overview.md @@ -4,7 +4,7 @@ Use the detection engine to create and manage rules and view the alerts these ru :::{image} ../../../images/serverless--detections-alert-page.png :alt: Alerts page -:class: screenshot +:screenshot: ::: In addition to creating [your own rules](../../../solutions/security/detect-and-alert/create-detection-rule.md), enable [Elastic prebuilt rules](../../../solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#load-prebuilt-rules) to immediately start detecting suspicious activity. For detailed information on all the prebuilt rules, see the [Prebuilt rules reference](security-docs://reference/prebuilt-rules/index.md). Once the prebuilt rules are loaded and running, [Tune detection rules](../../../solutions/security/detect-and-alert/tune-detection-rules.md) and [Add and manage exceptions](../../../solutions/security/detect-and-alert/add-manage-exceptions.md) explain how to modify the rules to reduce false positives and get a better set of actionable alerts. You can also use exceptions and value lists when creating or modifying your own rules. diff --git a/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md b/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md index 717ec62c7..8265d28e0 100644 --- a/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md +++ b/raw-migrated-files/docs-content/serverless/security-prebuilt-rules-management.md @@ -27,7 +27,7 @@ Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](s :::{image} ../../../images/serverless--detections-prebuilt-rules-add-badge.png :alt: The Add Elastic Rules page - :class: screenshot + :screenshot: ::: 2. Click **Add Elastic rules**. @@ -51,7 +51,7 @@ Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](s :::{image} ../../../images/serverless--detections-prebuilt-rules-add.png :alt: The Add Elastic Rules page - :class: screenshot + :screenshot: ::: 4. 
For any rules you haven’t already enabled, go back to the **Rules** page, search or filter for the rules you want to run, and do either of the following: @@ -112,7 +112,7 @@ Elastic regularly updates prebuilt rules to optimize their performance and ensur :::{image} ../../../images/serverless--detections-prebuilt-rules-update.png :alt: The Rule Updates tab on the Rules page - :class: screenshot + :screenshot: ::: 2. (Optional) To examine the details of a rule’s latest version before you update it, select the rule name. This opens the rule details flyout. @@ -123,7 +123,7 @@ Elastic regularly updates prebuilt rules to optimize their performance and ensur :::{image} ../../../images/serverless-prebuilt-rules-update-diff.png :alt: Prebuilt rule comparison - :class: screenshot + :screenshot: ::: 3. Do one of the following to update prebuilt rules on the **Rules** page: diff --git a/raw-migrated-files/docs-content/serverless/security-rules-coverage.md b/raw-migrated-files/docs-content/serverless/security-rules-coverage.md index 1eb7a0df2..6ed97c1dd 100644 --- a/raw-migrated-files/docs-content/serverless/security-rules-coverage.md +++ b/raw-migrated-files/docs-content/serverless/security-rules-coverage.md @@ -14,7 +14,7 @@ You can map custom rules to tactics in **Advanced settings** when creating or ed :::{image} ../../../images/serverless--detections-rules-coverage.png :alt: MITRE ATT&CK® coverage page -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-rules-create.md b/raw-migrated-files/docs-content/serverless/security-rules-create.md index 2db0ebb2f..8a802820b 100644 --- a/raw-migrated-files/docs-content/serverless/security-rules-create.md +++ b/raw-migrated-files/docs-content/serverless/security-rules-create.md @@ -45,7 +45,7 @@ At any step, you can [preview the rule](../../../solutions/security/detect-and-a :::{image} ../../../images/serverless--detections-rule-query-example.png :alt: Rule query example - :class: screenshot + :screenshot: ::: 3. You can use saved queries and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. @@ -180,7 +180,7 @@ To create or edit {{ml}} rules, you need an appropriate user role. Additionally, :::{image} ../../../images/serverless--detections-eql-rule-query-example.png :alt: detections eql rule query example - :class: screenshot + :screenshot: ::: ::::{note} @@ -260,7 +260,7 @@ To create or edit {{ml}} rules, you need an appropriate user role. 
Additionally, :::{image} ../../../images/serverless--detections-indicator-rule-example.png :alt: Indicator match rule settings - :class: screenshot + :screenshot: ::: ::::{tip} @@ -308,7 +308,7 @@ You uploaded a value list of known ransomware domains, and you want to be notifi :::{image} ../../../images/serverless--detections-indicator_value_list.png :alt: detections indicator value list -:class: screenshot +:screenshot: ::: @@ -515,7 +515,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](../../../sol :::{image} ../../../images/serverless--detections-severity-mapping-ui.png :alt: detections severity mapping ui - :class: screenshot + :screenshot: ::: ::::{note} @@ -534,7 +534,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](../../../sol :::{image} ../../../images/serverless--detections-risk-source-field-ui.png :alt: detections risk source field ui - :class: screenshot + :screenshot: ::: ::::{note} @@ -602,7 +602,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](../../../sol :::{image} ../../../images/serverless--detections-schedule-rule.png :alt: detections schedule rule - :class: screenshot + :screenshot: ::: 3. Continue with [setting the rule’s schedule](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule). @@ -669,7 +669,7 @@ To use actions for alert notifications, you need the appropriate user role. For :::{image} ../../../images/serverless--detections-selected-action-type.png :alt: detections selected action type - :class: screenshot + :screenshot: ::: 5. Use the default notification message or customize it. You can add more context to the message by clicking the icon above the message text box and selecting from a list of available [alert notification variables](../../../solutions/security/detect-and-alert/create-detection-rule.md#rule-action-variables). @@ -811,7 +811,7 @@ Click the **Rule preview** button while creating or editing a rule. The preview :::{image} ../../../images/serverless--detections-preview-rule.png :alt: Rule preview -:class: screenshot +:screenshot: ::: The preview also includes the effects of rule exceptions and override fields. In the histogram, alerts are stacked by `event.category` (or `host.name` for machine learning rules), and alerts with multiple values are counted more than once. 
diff --git a/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md b/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md index 36d1a53df..47a1aaeec 100644 --- a/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md +++ b/raw-migrated-files/docs-content/serverless/security-rules-ui-management.md @@ -4,7 +4,7 @@ The Rules page allows you to view and manage all prebuilt and custom detection r :::{image} ../../../images/serverless--detections-all-rules.png :alt: The Rules page -:class: screenshot +:screenshot: ::: On the Rules page, you can: @@ -163,7 +163,7 @@ You can snooze rule notifications from the **Installed Rules** tab, the rule det :::{image} ../../../images/serverless--detections-rule-snoozing.png :alt: Rules snooze options -:class: screenshot +:screenshot: ::: @@ -234,14 +234,14 @@ Additionally, the **Setup guide** section provides guidance on setting up the ru :::{image} ../../../images/serverless--detections-rule-details-prerequisites.png :alt: Rule details page with Related integrations -:class: screenshot +:screenshot: ::: You can also check rules' related integrations in the **Installed Rules** and **Rule Monitoring** tables. Click the **integrations** badge to display the related integrations in a popup. :::{image} ../../../images/serverless--detections-rules-table-related-integrations.png :alt: Rules table with related integrations popup -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md b/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md index d04caf58a..292e7fcca 100644 --- a/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md +++ b/raw-migrated-files/docs-content/serverless/security-signals-to-cases.md @@ -16,7 +16,7 @@ From the Alerts table, you can attach one or more alerts to a [new case](../../. 
:::{image} ../../../images/serverless--detections-add-alert-to-case.gif :alt: Animation of adding an alert to a case -:class: screenshot +:screenshot: ::: @@ -64,5 +64,5 @@ To add alerts to an existing case: :::{image} ../../../images/serverless--detections-add-alert-to-existing-case.png :alt: Select case dialog listing existing cases - :class: screenshot + :screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md b/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md index 16789a336..9041a6d1e 100644 --- a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md +++ b/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md @@ -45,5 +45,5 @@ After you review the report, click **Add to existing case** at the top of AI Ass :::{image} ../../../images/serverless-ai-triage-add-to-case.png :alt: An AI Assistant dialogue with the add to existing case button highlighted -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md b/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md index 447d83607..f93ed327e 100644 --- a/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md +++ b/raw-migrated-files/docs-content/serverless/security-tune-detection-signals.md @@ -35,7 +35,7 @@ For example, to prevent the **Unusual Process Execution Path - Alternate Data St :::{image} ../../../images/serverless--detections-prebuilt-rules-rule-details-page.png :alt: Rule details page - :class: screenshot + :screenshot: ::: 3. Select the **Rule exceptions** tab, then click **Add rule exception**. @@ -47,7 +47,7 @@ For example, to prevent the **Unusual Process Execution Path - Alternate Data St :::{image} ../../../images/serverless--detections-prebuilt-rules-process-exception.png :alt: Add Rule Exception UI - :class: screenshot + :screenshot: ::: 5. Click **Add rule exception**. @@ -80,7 +80,7 @@ Another useful technique is to assign lower risk scores to rules triggered by au :::{image} ../../../images/serverless--detections-prebuilt-rules-process-specific-exception.png :alt: Example of is not exception in the Add Rule Exception UI - :class: screenshot + :screenshot: ::: 4. Click **Add rule exception**. diff --git a/raw-migrated-files/docs-content/serverless/security-view-alert-details.md b/raw-migrated-files/docs-content/serverless/security-view-alert-details.md index 51e2c2027..8d6fadd92 100644 --- a/raw-migrated-files/docs-content/serverless/security-view-alert-details.md +++ b/raw-migrated-files/docs-content/serverless/security-view-alert-details.md @@ -9,7 +9,7 @@ To learn more about an alert, click the **View details** button from the Alerts :::{image} ../../../images/serverless--detections-open-alert-details-flyout.gif :alt: Expandable flyout -:class: screenshot +:screenshot: ::: Use the alert details flyout to begin an investigation, open a case, or plan a response. Click **Take action** at the bottom of the flyout to find more options for interacting with the alert. @@ -26,7 +26,7 @@ The right panel provides an overview of the alert. 
Expand any of the collapsed s :::{image} ../../../images/serverless--detections-alert-details-flyout-right-panel.png :alt: Right panel of the alert details flyout -:class: screenshot +:screenshot: ::: From the right panel, you can also: @@ -65,7 +65,7 @@ Some areas in the flyout provide previews when you click on them. For example, c :::{image} ../../../images/serverless--detections-alert-details-flyout-preview-panel.gif :alt: Preview panel of the alert details flyout -:class: screenshot +:screenshot: ::: @@ -89,7 +89,7 @@ The About section is located on the **Overview** tab in the right panel. It prov :::{image} ../../../images/serverless--detections-about-section-rp.png :alt: About section of the Overview tab -:class: screenshot +:screenshot: ::: The About section has the following information: @@ -111,7 +111,7 @@ The Investigation section is located on the **Overview** tab in the right panel. :::{image} ../../../images/serverless--detections-investigation-section-rp.png :alt: Investigation section of the Overview tab -:class: screenshot +:screenshot: ::: The Investigation section provides the following information: @@ -132,7 +132,7 @@ The Visualizations section is located on the **Overview** tab in the right panel :::{image} ../../../images/serverless--detections-visualizations-section-rp.png :alt: Visualizations section of the Overview tab -:class: screenshot +:screenshot: ::: Click **Visualizations** to display the following previews: @@ -160,14 +160,14 @@ The **Visualize** tab allows you to maintain the context of the Alerts table, wh :::{image} ../../../images/serverless--detections-visualize-tab-lp.png :alt: Expanded view of visualization details -:class: screenshot +:screenshot: ::: As you examine the alert’s related processes, you can also preview the alerts and events which are associated with those processes. Then, if you want to learn more about a particular alert or event, you can click **Show full alert details** to open the full details flyout. :::{image} ../../../images/serverless--detections-visualize-tab-lp-alert-details.gif :alt: Examine alert details from event analyzer -:class: screenshot +:screenshot: ::: @@ -177,7 +177,7 @@ The Insights section is located on the **Overview** tab in the right panel. 
It o :::{image} ../../../images/serverless--detections-insights-section-rp.png :alt: Insights section of the Overview tab -:class: screenshot +:screenshot: ::: @@ -187,7 +187,7 @@ The Entities overview provides high-level details about the user and host that a :::{image} ../../../images/serverless--detections-entities-overview.png :alt: Overview of the entity details section in the right panel -:class: screenshot +:screenshot: ::: @@ -197,7 +197,7 @@ From the right panel, click **Entities** to open a detailed view of the host and :::{image} ../../../images/serverless--detections-expanded-entities-view.png :alt: Expanded view of entity details -:class: screenshot +:screenshot: ::: @@ -207,7 +207,7 @@ The Threat intelligence overview shows matched indicators, which provide threat :::{image} ../../../images/serverless--detections-threat-intelligence-overview.png :alt: Overview of threat intelligence on the alert -:class: screenshot +:screenshot: ::: The Threat intelligence overview provides the following information: @@ -228,7 +228,7 @@ The expanded threat intelligence view queries indices specified in the `security :::{image} ../../../images/serverless--detections-expanded-threat-intelligence-view.png :alt: Expanded view of threat intelligence on the alert -:class: screenshot +:screenshot: ::: The expanded Threat intelligence view shows individual indicators within the alert document. You can expand and collapse indicator details by clicking the arrow button at the end of the indicator label. Each indicator is labeled with values from the `matched.field` and `matched.atomic` fields and displays the threat intelligence provider. @@ -269,7 +269,7 @@ The Correlations overview shows how an alert is related to other alerts and offe :::{image} ../../../images/serverless--detections-correlations-overview.png :alt: Overview of available correlation data -:class: screenshot +:screenshot: ::: The Correlations overview provides the following information: @@ -287,7 +287,7 @@ From the right panel, click **Correlations** to open the expanded Correlations v :::{image} ../../../images/serverless--detections-expanded-correlations-view.png :alt: Expanded view of correlation data -:class: screenshot +:screenshot: ::: In the expanded view, corelation data is organized into several tables: @@ -316,7 +316,7 @@ Update the date time picker for the table to show data from a different time ran :::{image} ../../../images/serverless--detections-expanded-prevalence-view.png :alt: Expanded view of prevalence data -:class: screenshot +:screenshot: ::: The expanded Prevalence view provides the following details: @@ -335,7 +335,7 @@ The **Response** section is located on the **Overview** tab in the right panel. 
:::{image} ../../../images/serverless--detections-response-action-rp.png :alt: Response section of the Overview tab -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md b/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md index 417f431b5..a6759a388 100644 --- a/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md +++ b/raw-migrated-files/docs-content/serverless/security-visualize-alerts.md @@ -9,7 +9,7 @@ Visualize and group detection alerts by specific parameters in the visualization :::{image} ../../../images/serverless--detections-alert-page-visualizations.png :alt: Alerts page with visualizations section highlighted -:class: screenshot +:screenshot: ::: Use the left buttons to select a view type (**Summary**, **Trend***, ***Counts**, or **Treemap**), and use the right menus to select the ECS fields to use for grouping: @@ -37,7 +37,7 @@ Click the collapse icon (![Markdown](../../../images/serverless-arrowDown.svg "" :::{image} ../../../images/serverless--detections-alert-page-viz-collapsed.png :alt: Alerts page with visualizations section collapsed -:class: screenshot +:screenshot: ::: @@ -53,7 +53,7 @@ You can hover and click on elements within the summary — such as severity leve :::{image} ../../../images/serverless--detections-alerts-viz-summary.png :alt: Summary visualization for alerts -:class: screenshot +:screenshot: ::: @@ -69,7 +69,7 @@ The **Group by top** menu is unavailable for the trend view. :::{image} ../../../images/serverless--detections-alerts-viz-trend.png :alt: Trend visualization for alerts -:class: screenshot +:screenshot: ::: @@ -79,7 +79,7 @@ The counts view shows the count of alerts in each group. By default, it groups a :::{image} ../../../images/serverless--detections-alerts-viz-counts.png :alt: Counts visualization for alerts -:class: screenshot +:screenshot: ::: @@ -89,7 +89,7 @@ The treemap view shows the distribution of alerts as nested, proportionally-size :::{image} ../../../images/serverless--detections-alerts-viz-treemap.png :alt: Treemap visualization for alerts -:class: screenshot +:screenshot: ::: Larger tiles represent more frequent alerts, and each tile’s color is based on the alerts' risk score: @@ -111,5 +111,5 @@ You can click on the treemap to narrow down the alerts displayed in both the tre :::{image} ../../../images/serverless--detections-treemap-click.gif :alt: Animation of clicking the treemap -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/docs-content/serverless/spaces.md b/raw-migrated-files/docs-content/serverless/spaces.md index 6224a3a62..c02dbc4d9 100644 --- a/raw-migrated-files/docs-content/serverless/spaces.md +++ b/raw-migrated-files/docs-content/serverless/spaces.md @@ -10,7 +10,7 @@ You can identify the space you’re in or switch to a different space from the h :::{image} ../../../images/serverless-space-breadcrumb.png :alt: Space breadcrumb -:class: screenshot +:screenshot: ::: You can view and manage the spaces of a project from the **Spaces** page in **Management**. 
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md index cf3a71e4f..58ab529f1 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/fips-140-compliance.md @@ -35,7 +35,7 @@ The following is a high-level overview of the required configuration: ### Java security provider [java-security-provider] -Detailed instructions for installation and configuration of a FIPS certified Java security provider is beyond the scope of this document. Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.md) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.md) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms. +Detailed instructions for installation and configuration of a FIPS certified Java security provider is beyond the scope of this document. Specifically, a FIPS certified [JCA](https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.html) and [JSSE](https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.html) implementation is required so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms. Elasticsearch has been tested with Bouncy Castle’s [bc-fips 1.0.2.5](https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.5/bc-fips-1.0.2.5.jar) and [bctls-fips 1.0.19](https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.19/bctls-fips-1.0.19.jar). Please refer to the {{es}} [JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm) for details on which combinations of JVM and security provider are supported in FIPS mode. Elasticsearch does not ship with a FIPS certified provider. It is the responsibility of the user to install and configure the security provider to ensure compliance with FIPS 140-2. Using a FIPS certified provider will ensure that only approved cryptographic algorithms are used. @@ -131,7 +131,7 @@ To verify that the security provider is installed and in use, you can use any of ## Upgrade considerations [fips-upgrade-considerations] -{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.md)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note - {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration. +{{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.html)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note - {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration. Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp). 
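For self-managed clusters, installing the security provider described above is typically paired with switching {{es}} into FIPS mode. A minimal sketch of the corresponding `elasticsearch.yml` setting, assuming the Bouncy Castle FIPS jars and a FIPS-enabled JVM security configuration are already in place:

```yaml
# elasticsearch.yml — requires a FIPS certified security provider to be
# installed and configured in the JVM before startup
xpack.security.fips_mode.enabled: true
```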
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md index 138b35bf6..a260ea552 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/install-elasticsearch.md @@ -78,7 +78,7 @@ The bundled JVM is treated the same as any other dependency of {{es}} in terms o :::: -If you decide to run {{es}} using a version of Java that is different from the bundled one, prefer to use the latest release of a [LTS version of Java](https://www.oracle.com/technetwork/java/eol-135779.md) which is [listed in the support matrix](https://elastic.co/support/matrix). Although such a configuration is supported, if you encounter a security issue or other bug in your chosen JVM then Elastic may not be able to help unless the issue is also present in the bundled JVM. Instead, you must seek assistance directly from the supplier of your chosen JVM. You must also take responsibility for reacting to security and bug announcements from the supplier of your chosen JVM. {{es}} may not perform optimally if using a JVM other than the bundled one. {{es}} is closely coupled to certain OpenJDK-specific features, so it may not work correctly with JVMs that are not OpenJDK. {{es}} will refuse to start if you attempt to use a known-bad JVM version. +If you decide to run {{es}} using a version of Java that is different from the bundled one, prefer to use the latest release of a [LTS version of Java](https://www.oracle.com/technetwork/java/eol-135779.html) which is [listed in the support matrix](https://elastic.co/support/matrix). Although such a configuration is supported, if you encounter a security issue or other bug in your chosen JVM then Elastic may not be able to help unless the issue is also present in the bundled JVM. Instead, you must seek assistance directly from the supplier of your chosen JVM. You must also take responsibility for reacting to security and bug announcements from the supplier of your chosen JVM. {{es}} may not perform optimally if using a JVM other than the bundled one. {{es}} is closely coupled to certain OpenJDK-specific features, so it may not work correctly with JVMs that are not OpenJDK. {{es}} will refuse to start if you attempt to use a known-bad JVM version. To use your own version of Java, set the `ES_JAVA_HOME` environment variable to the path to your own JVM installation. The bundled JVM is located within the `jdk` subdirectory of the {{es}} home directory. You may remove this directory if using your own JVM. 
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md index 14e4941be..c6c33b574 100644 --- a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md +++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md @@ -20,7 +20,7 @@ The following examples use the: * models available through [Azure AI Studio](https://ai.azure.com/explore/models?selectedTask=embeddings) or [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) * `text-embedding-004` model for [Google Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api) * `mistral-embed` model for [Mistral](https://docs.mistral.ai/getting-started/models/) -* `amazon.titan-embed-text-v1` model for [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.md) +* `amazon.titan-embed-text-v1` model for [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html) * `ops-text-embedding-zh-001` model for [AlibabaCloud AI](https://help.aliyun.com/zh/open-search/search-platform/developer-reference/text-embedding-api-details) You can use any Cohere and OpenAI models, they are all supported by the {{infer}} API. For a list of recommended models available on HuggingFace, refer to [the supported model list](../../../explore-analyze/elastic-inference/inference-api/huggingface-inference-integration.md). @@ -556,7 +556,7 @@ PUT amazon-bedrock-embeddings 1. The name of the field to contain the generated tokens. It must be referenced in the {{infer}} pipeline configuration in the next step. 2. The field to contain the tokens is a `dense_vector` field. -3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [Amazon Titan model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.md) or the [Cohere Embeddings model](https://docs.cohere.com/reference/embed) documentation. +3. The output dimensions of the model. This value may be different depending on the underlying model used. See the [Amazon Titan model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.html) or the [Cohere Embeddings model](https://docs.cohere.com/reference/embed) documentation. 4. For Amazon Bedrock embeddings, the `dot_product` function should be used to calculate similarity for Amazon titan models, or `cosine` for Cohere models. 5. The name of the field from which to create the dense vector representation. In this example, the name of the field is `content`. It must be referenced in the {{infer}} pipeline configuration in the next step. 6. The field type which is text in this example. 
diff --git a/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md b/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md index 2a9d8664b..3726d6e5d 100644 --- a/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md +++ b/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md @@ -118,21 +118,21 @@ The following image shows the **Agent monitoring** settings for the default agen :::{image} ../../../images/fleet-agent-monitoring-settings.png :alt: Screen capture of agent monitoring settings in the default agent policy -:class: screenshot +:screenshot: ::: There are also pre-built dashboards for agent metrics that you can access under **Assets** in the {{agent}} integration: :::{image} ../../../images/fleet-agent-monitoring-assets.png :alt: Screen capture of {{agent}} monitoring assets -:class: screenshot +:screenshot: ::: The **[{{agent}}] Agent metrics** dashboard shows an aggregated view of agent metrics: :::{image} ../../../images/fleet-agent-metrics-dashboard.png :alt: Screen capture showing {{agent}} metrics -:class: screenshot +:screenshot: ::: For more information, refer to [Monitor {{agent}}s](/reference/ingestion-tools/fleet/monitor-elastic-agent.md). diff --git a/raw-migrated-files/kibana/kibana/apm-settings-kb.md b/raw-migrated-files/kibana/kibana/apm-settings-kb.md index 31d2aaba5..1cb435e86 100644 --- a/raw-migrated-files/kibana/kibana/apm-settings-kb.md +++ b/raw-migrated-files/kibana/kibana/apm-settings-kb.md @@ -16,7 +16,7 @@ Starting in version 8.2.0, APM indices are {{kib}} Spaces-aware; Changes to APM :::{image} ../../../images/kibana-apm-settings.png :alt: APM app settings in Kibana -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md b/raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md index 8bcb91a19..a03ce29f3 100644 --- a/raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md +++ b/raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md @@ -6,7 +6,7 @@ All integrations are available in a single view on the **Integrations** page. :::{image} ../../../images/kibana-add-integration.png :alt: Integrations page from which you can choose integrations to start collecting and analyzing data -:class: screenshot +:screenshot: ::: ::::{note} @@ -41,7 +41,7 @@ Sample data sets come with sample visualizations, dashboards, and more to help y :::{image} ../../../images/kibana-add-sample-data.png :alt: eCommerce -:class: screenshot +:screenshot: ::: @@ -73,7 +73,7 @@ The upload feature is not intended for use as part of a repeated production proc :::{image} ../../../images/kibana-add-data-fv.png :alt: Uploading a file in {{kib}} -:class: screenshot +:screenshot: ::: The {{stack-security-features}} provide roles and privileges that control which users can upload files. To upload a file in {{kib}} and import it into an {{es}} index, you’ll need: diff --git a/raw-migrated-files/kibana/kibana/console-kibana.md b/raw-migrated-files/kibana/kibana/console-kibana.md index a3e003bdd..1f2346df9 100644 --- a/raw-migrated-files/kibana/kibana/console-kibana.md +++ b/raw-migrated-files/kibana/kibana/console-kibana.md @@ -4,7 +4,7 @@ :::{image} ../../../images/kibana-console.png :alt: Console -:class: screenshot +:screenshot: ::: To go to **Console***, find ***Dev Tools** in the navigation menu or use the [global search bar](/explore-analyze/find-and-organize/find-apps-and-objects.md). 
@@ -13,7 +13,7 @@ You can also find Console directly on certain Search solution and Elasticsearch :::{image} ../../../images/kibana-persistent-console.png :alt: Console -:class: screenshot +:screenshot: ::: @@ -91,7 +91,7 @@ Click **Variables** to create, edit, and delete variables. :::{image} ../../../images/kibana-variables.png :alt: Variables -:class: screenshot +:screenshot: ::: You can refer to these variables in the paths and bodies of your requests. Each variable can be referenced multiple times. diff --git a/raw-migrated-files/kibana/kibana/management.md b/raw-migrated-files/kibana/kibana/management.md deleted file mode 100644 index c4d72e942..000000000 --- a/raw-migrated-files/kibana/kibana/management.md +++ /dev/null @@ -1,69 +0,0 @@ -# Stack Management [management] - -**Stack Management** is home to UIs for managing all things Elastic Stack— indices, clusters, licenses, UI settings, data views, spaces, and more. - -Access to individual features is governed by {{es}} and {{kib}} privileges. Consult your administrator if you do not have the appropriate access. - - -## Ingest [manage-ingest] - -| | | -| --- | --- | -| [Ingest Pipelines](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) | Create and manage ingest pipelines that let you perform common transformationsand enrichments on your data. | -| [Logstash Pipelines](logstash://reference/logstash-centralized-pipeline-management.md) | Create, edit, and delete your Logstash pipeline configurations. | - - -## Data [manage-data] - -| | | -| --- | --- | -| [Index Management](../../../manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md) | View index settings, mappings, and statistics and perform operations, such as refreshing,flushing, and clearing the cache. Practicing good index management ensuresthat your data is stored cost effectively. | -| [Index Lifecycle Policies](../../../manage-data/lifecycle/index-lifecycle-management.md) | Create a policy for defining the lifecycle of an index as it agesthrough the hot, warm, cold, and delete phases.Such policies help you control operation costsbecause you can put data in different resource tiers. | -| [Snapshot and Restore](../../../deploy-manage/tools/snapshot-and-restore.md) | Define a policy that creates, schedules, and automatically deletes snapshots to ensure that youhave backups of your cluster in case something goes wrong. | -| [Rollup Jobs](../../../manage-data/lifecycle/rollup.md) | [8.11.0] Create a job that periodically aggregates data from one or more indices, and thenrolls it into a new, compact index. Rollup indices are a good way to store months oryears of historical data in combination with your raw data. | -| [Transforms](../../../explore-analyze/transforms.md) | Use transforms to pivot existing {{es}} indices into summarized or entity-centric indices. | -| [Cross-Cluster Replication](/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md) | Replicate indices on a remote cluster and copy them to a follower index on a local cluster.This is important fordisaster recovery. It also keeps data local for faster queries. | -| [Remote Clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) | Manage your remote clusters for use with cross-cluster search and cross-cluster replication.You can add and remove remote clusters, and check their connectivity. 
| - - -## Alerts and Insights [manage-alerts-insights] - -| | | -| --- | --- | -| [{{rules-ui}}](../../../explore-analyze/alerts-cases.md) | Centrally [manage your rules](../../../explore-analyze/alerts-cases/alerts/create-manage-rules.md) across {{kib}}. | -| [Cases](../../../explore-analyze/alerts-cases/cases.md) | Create and manage cases to investigate issues. | -| [{{connectors-ui}}](../../../deploy-manage/manage-connectors.md) | Create and [manage reusable connectors](../../../deploy-manage/manage-connectors.md) for triggering actions. | -| [Reporting](../../../explore-analyze/report-and-share.md) | Monitor the generation of reports—PDF, PNG, and CSV—and download reports that you previously generated.A report can contain a dashboard, visualization, table with Discover search results, or Canvas workpad. | -| Machine Learning Jobs | View, export, and import your [{{anomaly-detect}}](../../../explore-analyze/machine-learning/anomaly-detection.md) and[{{dfanalytics}}](../../../explore-analyze/machine-learning/data-frame-analytics.md) jobs. Open the Single MetricViewer or Anomaly Explorer to see your {{anomaly-detect}} results. | -| [Watcher](../../../explore-analyze/alerts-cases/watcher.md) | Detect changes in your data by creating, managing, and monitoring alerts.For example, you might create an alert when the maximum total CPU usage on a machine goesabove a certain percentage. | -| [Maintenance windows](../../../explore-analyze/alerts-cases/alerts/maintenance-windows.md) | Suppress rule notifications for scheduled periods of time. | - - -## Security [manage-security] - -| | | -| --- | --- | -| [Users](../../../deploy-manage/security.md) | View the users that have been defined on your cluster.Add or delete users and assign roles that give usersspecific privileges. | -| [Roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) | View the roles that exist on your cluster. Customizethe actions that a user with the role can perform, on a cluster, index, and space level. | -| [API Keys](../../../deploy-manage/api-keys/elasticsearch-api-keys.md) | Create secondary credentials so that you can send requests on behalf of the user.Secondary credentials have the same or lower access rights. | -| [Role Mappings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) | Assign roles to your users using a set of rules. Role mappings are requiredwhen authenticating via an external identity provider, such as Active Directory,Kerberos, PKI, OIDC, and SAML. | - - -## {{kib}} [manage-kibana] - -| | | -| --- | --- | -| [Data Views](../../../explore-analyze/find-and-organize/data-views.md) | Manage the fields in the data views that retrieve your data from {{es}}. | -| [Saved Objects](/explore-analyze/find-and-organize/saved-objects.md) | Copy, edit, delete, import, and export your saved objects.These include dashboards, visualizations, maps, data views, Canvas workpads, and more. | -| [Tags](../../../explore-analyze/find-and-organize/tags.md) | Create, manage, and assign tags to your saved objects. | -| [Search Sessions](../../../explore-analyze/discover/search-sessions.md) | Manage your saved search sessions, groups of queries that run in the background.Search sessions are useful when your queries take longer than usual to process,for example, when you have a large volume of data or when the performance of your storage location is slow. 
| -| [Spaces](../../../deploy-manage/manage-spaces.md) | Create spaces to organize your dashboards and other saved objects into categories.A space is isolated from all other spaces,so you can tailor it to your needs without impacting others. | -| [Advanced Settings](kibana://reference/advanced-settings.md) | Customize {{kib}} to suit your needs. Change the format for displaying dates, turn on dark mode,set the timespan for notification messages, and much more. | - - -## Stack [manage-stack] - -| | | -| --- | --- | -| [License Management](../../../deploy-manage/license/manage-your-license-in-self-managed-cluster.md) | View the status of your license, start a trial, or install a new license. Forthe full list of features that are included in your license,see the [subscription page](https://www.elastic.co/subscriptions). | - diff --git a/raw-migrated-files/kibana/kibana/search-ai-assistant.md b/raw-migrated-files/kibana/kibana/search-ai-assistant.md index 12dcb41c5..78bee857f 100644 --- a/raw-migrated-files/kibana/kibana/search-ai-assistant.md +++ b/raw-migrated-files/kibana/kibana/search-ai-assistant.md @@ -52,14 +52,14 @@ To open AI Assistant, select the **AI Assistant** button in the top toolbar in t :::{image} ../../../images/kibana-ai-assistant-button.png :alt: AI Assistant button -:class: screenshot +:screenshot: ::: This opens the AI Assistant chat interface flyout. :::{image} ../../../images/kibana-ai-assistant-welcome-chat.png :alt: AI Assistant Welcome chat -:class: screenshot +:screenshot: ::: You can get started by selecting **✨ Suggest** to get some example prompts, or by typing into the chat field. diff --git a/raw-migrated-files/kibana/kibana/secure-reporting.md b/raw-migrated-files/kibana/kibana/secure-reporting.md index 93446ae51..764f7d7bb 100644 --- a/raw-migrated-files/kibana/kibana/secure-reporting.md +++ b/raw-migrated-files/kibana/kibana/secure-reporting.md @@ -67,7 +67,7 @@ When security is enabled, you grant users access to {{report-features}} with [{{ :::{image} ../../../images/kibana-kibana-privileges-with-reporting.png :alt: Kibana privileges with Reporting options, Gold or higher license - :class: screenshot + :screenshot: ::: ::::{note} @@ -133,7 +133,7 @@ With a Basic license, you can grant users access with custom roles to {{report-f :::{image} ../../../images/kibana-kibana-privileges-with-reporting-basic.png :alt: Kibana privileges with Reporting options, Basic license -:class: screenshot +:screenshot: ::: With a Basic license, sub-feature application privileges are unavailable, but you can use the [role API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role) to grant access to CSV {{report-features}}: diff --git a/raw-migrated-files/kibana/kibana/set-time-filter.md b/raw-migrated-files/kibana/kibana/set-time-filter.md index 38ce2095c..eef82661f 100644 --- a/raw-migrated-files/kibana/kibana/set-time-filter.md +++ b/raw-migrated-files/kibana/kibana/set-time-filter.md @@ -12,14 +12,14 @@ Display data within a specified time range when your index contains time-based e :::{image} ../../../images/kibana-time-filter.png :alt: Time filter menu - :class: screenshot + :screenshot: ::: 3. To set start and end times, click the bar next to the time filter. In the popup, select **Absolute**, **Relative** or **Now**, then specify the required options. 
:::{image} ../../../images/kibana-time-relative.png :alt: Time filter showing relative time - :class: screenshot + :screenshot: ::: diff --git a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md index dd8abaf4c..02e041db2 100644 --- a/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md +++ b/raw-migrated-files/observability-docs/observability/obs-ai-assistant.md @@ -12,7 +12,7 @@ The AI Assistant uses generative AI to provide: :::{image} ../../../images/observability-obs-assistant2.gif :alt: Observability AI assistant preview -:class: screenshot +:screenshot: ::: The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors: @@ -76,7 +76,7 @@ To set up the AI Assistant: * [OpenAI API keys](https://platform.openai.com/docs/api-reference) * [Azure OpenAI Service API keys](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference) - * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.md) + * [Amazon Bedrock authentication keys and secrets](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) * [Google Gemini service account keys](https://cloud.google.com/iam/docs/keys-list-get) 2. Create a connector for your AI provider. Refer to the connector documentation to learn how: @@ -236,7 +236,7 @@ This opens the AI Assistant flyout, where you can ask the assistant questions ab :::{image} ../../../images/observability-obs-ai-chat.png :alt: Observability AI assistant chat -:class: screenshot +:screenshot: ::: ::::{important} @@ -309,14 +309,14 @@ For example, in the log details, you’ll see prompts for **What’s this messag :::{image} ../../../images/observability-obs-ai-logs-prompts.png :alt: Observability AI assistant logs prompts -:class: screenshot +:screenshot: ::: Clicking a prompt generates a message specific to that log entry: :::{image} ../../../images/observability-obs-ai-logs.gif :alt: Observability AI assistant example -:class: screenshot +:screenshot: ::: Continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat. @@ -333,7 +333,7 @@ Use the [Observability AI Assistant connector](kibana://reference/connectors-kib :::{image} ../../../images/observability-obs-ai-assistant-action-high-cpu.png :alt: Add an Observability AI assistant action while creating a rule in the Observability UI - :class: screenshot + :screenshot: ::: @@ -348,7 +348,7 @@ When the alert fires, contextual details about the event—such as when the aler :::{image} ../../../images/observability-obs-ai-assistant-output.png :alt: AI Assistant conversation created in response to an alert -:class: screenshot +:screenshot: ::: ::::{important} @@ -369,7 +369,7 @@ The `server.publicBaseUrl` setting must be correctly specified under {{kib}} set :::{image} ../../../images/observability-obs-ai-assistant-slack-message.png :alt: Message sent by Slack by the AI Assistant includes a link to the conversation -:class: screenshot +:screenshot: ::: The Observability AI Assistant connector is called when the alert fires and when it recovers. 
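For the conversation links in those Slack messages to resolve, the `server.publicBaseUrl` setting mentioned above has to point at the address users actually reach {{kib}} on. A minimal sketch of that setting, using an illustrative hostname:

```yaml
# kibana.yml — the URL below is an example; use the address your users browse to,
# including the protocol and port, with no trailing slash
server.publicBaseUrl: "https://kibana.example.com:5601"
```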
diff --git a/raw-migrated-files/stack-docs/elastic-stack/air-gapped-install.md b/raw-migrated-files/stack-docs/elastic-stack/air-gapped-install.md index 7b7490dbf..c7540fe97 100644 --- a/raw-migrated-files/stack-docs/elastic-stack/air-gapped-install.md +++ b/raw-migrated-files/stack-docs/elastic-stack/air-gapped-install.md @@ -486,7 +486,7 @@ Agent policies and integration settings can be managed using the {{kib}} UI. For :::{image} ../../../images/elastic-stack-air-gapped-configure-logging.png :alt: Configuration of a logging integration in an agent policy -:class: screenshot +:screenshot: ::: diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index 963189d26..383d1c2ee 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -28,7 +28,6 @@ toc: - file: cloud/cloud-enterprise/ece-add-user-settings.md - file: cloud/cloud-enterprise/ece-administering-deployments.md - file: cloud/cloud-enterprise/ece-api-console.md - - file: cloud/cloud-enterprise/ece-autoscaling.md - file: cloud/cloud-enterprise/ece-change-deployment.md - file: cloud/cloud-enterprise/ece-configuring-keystore.md - file: cloud/cloud-enterprise/ece-create-deployment.md @@ -70,7 +69,6 @@ toc: - file: cloud/cloud-heroku/ech-access-kibana.md - file: cloud/cloud-heroku/ech-activity-page.md - file: cloud/cloud-heroku/ech-add-user-settings.md - - file: cloud/cloud-heroku/ech-autoscaling.md - file: cloud/cloud-heroku/ech-configuring-keystore.md - file: cloud/cloud-heroku/ech-custom-repository.md - file: cloud/cloud-heroku/ech-delete-deployment.md @@ -104,7 +102,6 @@ toc: - file: cloud/cloud/ec-access-kibana.md - file: cloud/cloud/ec-activity-page.md - file: cloud/cloud/ec-add-user-settings.md - - file: cloud/cloud/ec-autoscaling.md - file: cloud/cloud/ec-billing-stop.md - file: cloud/cloud/ec-cloud-ingest-data.md - file: cloud/cloud/ec-configuring-keystore.md @@ -163,7 +160,6 @@ toc: - file: docs-content/serverless/elasticsearch-ingest-data-file-upload.md - file: docs-content/serverless/elasticsearch-ingest-data-through-api.md - file: docs-content/serverless/general-billing-stop-project.md - - file: docs-content/serverless/general-ml-nlp-auto-scale.md - file: docs-content/serverless/general-sign-up-trial.md - file: docs-content/serverless/index-management.md - file: docs-content/serverless/intro.md @@ -268,7 +264,6 @@ toc: - file: kibana/kibana/esql.md - file: kibana/kibana/install.md - file: kibana/kibana/logging-settings.md - - file: kibana/kibana/management.md - file: kibana/kibana/reporting-production-considerations.md - file: kibana/kibana/search-ai-assistant.md - file: kibana/kibana/secure-reporting.md diff --git a/redirects.yml b/redirects.yml index c81e1c8ef..9f65b8250 100644 --- a/redirects.yml +++ b/redirects.yml @@ -6,14 +6,23 @@ redirects: 'solutions/search/search-approaches/near-real-time-search.md': '!manage-data/data-store/near-real-time-search.md' ## deploy-manage + 'deploy-manage/autoscaling/ec-autoscaling-api-example.md': '!deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md' + 'deploy-manage/autoscaling/ece-autoscaling-api-example.md': '!deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md' 'deploy-manage/deploy/elastic-cloud/ec-configure-deployment-settings.md': '!deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md' 'deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md': anchors: 'anonymous-authentication': 'basic-authentication': 'http-authentication': + 'deploy-manage/manage-spaces.md': + anchors: + 
'spaces-control-feature-visibility': 'deploy-manage/deploy/cloud-enterprise/deploy-large-installation-cloud.md': '!deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md' +## explore-analyze + 'explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md': '!deploy-manage/autoscaling/trained-model-autoscaling.md' + + ## reference 'reference/security/elastic-defend/index.md': 'solutions/security/configure-elastic-defend.md' 'reference/security/elastic-defend/elastic-endpoint-deploy-reqs.md': 'solutions/security/configure-elastic-defend/elastic-defend-requirements.md' diff --git a/reference/glossary/index.md b/reference/glossary/index.md index 1fbd12c4c..678adc788 100644 --- a/reference/glossary/index.md +++ b/reference/glossary/index.md @@ -286,7 +286,7 @@ $$$glossary-external-alert$$$ external alert ## F [f-glos] $$$glossary-feature-controls$$$ Feature Controls -: Enables administrators to customize which features are available in each [space](/reference/glossary/index.md#glossary-space). See [Feature Controls](/deploy-manage/manage-spaces.md#spaces-control-feature-visibility). +: Enables administrators to customize which features are available in each [space](/reference/glossary/index.md#glossary-space). See [](/deploy-manage/manage-spaces.md). $$$glossary-feature-importance$$$ feature importance : In supervised {{ml}} methods such as {{regression}} and {{classification}}, feature importance indicates the degree to which a specific feature affects a prediction. See [{{regression-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-regression.md#dfa-regression-feature-importance) and [{{classification-cap}} feature importance](/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-classification.md#dfa-classification-feature-importance). @@ -302,7 +302,7 @@ $$$glossary-field-reference$$$ field reference $$$glossary-field$$$ field : 1. Key-value pair in a [document](/reference/glossary/index.md#glossary-document). See [Mapping](/manage-data/data-store/mapping.md). -2. In {{ls}}, this term refers to an [event](/reference/glossary/index.md#glossary-event) property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. {{ls}} uses the term "fields" to refer to these properties. +1. In {{ls}}, this term refers to an [event](/reference/glossary/index.md#glossary-event) property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. {{ls}} uses the term "fields" to refer to these properties. $$$glossary-filter-plugin$$$ filter plugin diff --git a/reference/ingestion-tools/fleet/add-fleet-server-cloud.md b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md index ec01dde2d..dafe974b7 100644 --- a/reference/ingestion-tools/fleet/add-fleet-server-cloud.md +++ b/reference/ingestion-tools/fleet/add-fleet-server-cloud.md @@ -71,7 +71,7 @@ Don’t see the agent? 
Make sure your deployment includes an {{integrations-serv :::{image} images/integrations-server-hosted-container.png :alt: Hosted {integrations-server} -:class: screenshot +:screenshot: ::: ::::: diff --git a/reference/ingestion-tools/fleet/add-fleet-server-mixed.md b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md index 572a45324..eaa2b972f 100644 --- a/reference/ingestion-tools/fleet/add-fleet-server-mixed.md +++ b/reference/ingestion-tools/fleet/add-fleet-server-mixed.md @@ -117,7 +117,7 @@ To add a {{fleet-server}}: :::{image} images/add-fleet-server-advanced.png :alt: In-product instructions for adding a {{fleet-server}} in advanced mode - :class: screenshot + :screenshot: ::: 4. Follow the in-product instructions to add a {{fleet-server}}. diff --git a/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md index ffbec8cf8..51add6fb4 100644 --- a/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md +++ b/reference/ingestion-tools/fleet/add-fleet-server-on-prem.md @@ -101,7 +101,7 @@ To add a {{fleet-server}}: :::{image} images/add-fleet-server.png :alt: In-product instructions for adding a {{fleet-server}} in quick start mode - :class: screenshot + :screenshot: ::: * Use **Advanced** if you want to either: @@ -120,7 +120,7 @@ To add a {{fleet-server}}: :::{image} images/add-fleet-server-advanced.png :alt: In-product instructions for adding a {{fleet-server}} in advanced mode - :class: screenshot + :screenshot: ::: 4. Step through the in-product instructions to configure and install {{fleet-server}}. diff --git a/reference/ingestion-tools/fleet/agent-health-status.md b/reference/ingestion-tools/fleet/agent-health-status.md index ae3654316..c65fe21c7 100644 --- a/reference/ingestion-tools/fleet/agent-health-status.md +++ b/reference/ingestion-tools/fleet/agent-health-status.md @@ -26,7 +26,7 @@ The frequency of check-ins can be configured to a new value with the condition t :::{image} images/agent-health-status.png :alt: Diagram of connectivity between agents -:class: screenshot +:screenshot: ::: diff --git a/reference/ingestion-tools/fleet/agent-policy.md b/reference/ingestion-tools/fleet/agent-policy.md index 190a4f9ad..bdf2f2ed4 100644 --- a/reference/ingestion-tools/fleet/agent-policy.md +++ b/reference/ingestion-tools/fleet/agent-policy.md @@ -123,7 +123,7 @@ You can apply policies to one or more {{agent}}s. To apply a policy: :::{image} images/apply-agent-policy.png :alt: Assign to new policy dropdown - :class: screenshot + :screenshot: ::: Unable to select multiple agents? Confirm that your subscription level supports selective agent policy reassignment in {{fleet}}. For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions). @@ -183,7 +183,7 @@ To add a custom field: :::{image} images/agent-policy-custom-field.png :alt: Sceen capture showing the UI to add a custom field and value - :class: screenshot + :screenshot: ::: 5. Click **Add another field** for additional fields. Click **Save changes** when you’re done. @@ -269,7 +269,7 @@ Assuming your [{{stack}} subscription level](https://www.elastic.co/subscription :::{image} images/agent-output-settings.png :alt: Screen capture showing the {{ls}} output policy selected in an agent policy - :class: screenshot + :screenshot: ::: 4. Save your changes. 
@@ -283,7 +283,7 @@ If you want to connect multiple agents to a specific on-premises {{fleet-server} :::{image} images/add-fleet-server-to-policy.png :alt: Screen capture showing how to add a {{fleet-server}} to a policy when creating or updating the policy. -:class: screenshot +:screenshot: ::: When the policy is saved, all agents assigned to the policy are configured to use the new {{fleet-server}} as the controller. @@ -310,7 +310,7 @@ Though secret values stored in {{fleet}} are hidden, they can be updated. To upd :::{image} images/fleet-policy-hidden-secret.png :alt: Screen capture showing a hidden secret value as part of an integration policy - :class: screenshot + :screenshot: ::: 4. Click **Save integration**. The original secret value is overwritten in the policy. diff --git a/reference/ingestion-tools/fleet/agent-processors.md b/reference/ingestion-tools/fleet/agent-processors.md index 873a56d81..13626e568 100644 --- a/reference/ingestion-tools/fleet/agent-processors.md +++ b/reference/ingestion-tools/fleet/agent-processors.md @@ -32,7 +32,7 @@ The processors described in this section are valid: :::{image} images/add-processor.png :alt: Screen showing how to add a processor to an integration policy - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/reference/ingestion-tools/fleet/certificates-rotation.md b/reference/ingestion-tools/fleet/certificates-rotation.md index 313e5f58f..62968310c 100644 --- a/reference/ingestion-tools/fleet/certificates-rotation.md +++ b/reference/ingestion-tools/fleet/certificates-rotation.md @@ -188,5 +188,5 @@ To rotate a CA certificate on {{es}} for connections from {{agent}}: :::{image} images/certificate-rotation-agent-es.png :alt: Screen capture of the Edit Output UI: Elasticsearch CA trusted fingerprint - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/create-standalone-agent-policy.md b/reference/ingestion-tools/fleet/create-standalone-agent-policy.md index 59fc48688..5f262cfcb 100644 --- a/reference/ingestion-tools/fleet/create-standalone-agent-policy.md +++ b/reference/ingestion-tools/fleet/create-standalone-agent-policy.md @@ -21,7 +21,7 @@ You don’t need {{fleet}} to perform the following steps, but on self-managed c :::{image} images/add-integration-standalone.png :alt: Add Nginx integration screen with agent policy selected - :class: screenshot + :screenshot: ::: ::::{note} @@ -37,7 +37,7 @@ You don’t need {{fleet}} to perform the following steps, but on self-managed c :::{image} images/add-agent-to-hosts.png :alt: Popup window showing the option to add {{agent}} to your hosts - :class: screenshot + :screenshot: ::: 7. (Optional) To add more integrations to the agent policy, click **Add {{agent}} later** and go back to the **Integrations** page. Repeat the previous steps for each integration. 
@@ -47,7 +47,7 @@ You don’t need {{fleet}} to perform the following steps, but on self-managed c :::{image} images/download-agent-policy.png :alt: Add data screen with option to download the default agent policy - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/data-streams-scenario1.md b/reference/ingestion-tools/fleet/data-streams-scenario1.md index 38bfa66a8..9018035f8 100644 --- a/reference/ingestion-tools/fleet/data-streams-scenario1.md +++ b/reference/ingestion-tools/fleet/data-streams-scenario1.md @@ -36,7 +36,7 @@ The **Index Templates** view in {{kib}} shows you all of the index templates ava :::{image} images/component-templates-list.png :alt: List of component templates available for the index template - :class: screenshot + :screenshot: ::: 5. Select `logs@custom` in the list to view the component template properties. @@ -60,7 +60,7 @@ The **Index Templates** view in {{kib}} shows you all of the index templates ava :::{image} images/review-component-template01.png :alt: Review details for the new component template - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/data-streams-scenario2.md b/reference/ingestion-tools/fleet/data-streams-scenario2.md index 91250430e..c7841f4e6 100644 --- a/reference/ingestion-tools/fleet/data-streams-scenario2.md +++ b/reference/ingestion-tools/fleet/data-streams-scenario2.md @@ -31,7 +31,7 @@ The **Index Templates** view in {{kib}} shows you all of the index templates ava :::{image} images/index-template-system-auth.png :alt: List of component templates available for the logs-system.auth index template - :class: screenshot + :screenshot: ::: 5. In the **Summary**, select `logs-system.auth@custom` from the list to view the component template properties. 
@@ -56,7 +56,7 @@ The **Index Templates** view in {{kib}} shows you all of the index templates ava :::{image} images/review-component-template02.png :alt: Review details for the new component template - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/data-streams-scenario3.md b/reference/ingestion-tools/fleet/data-streams-scenario3.md index f0d5ed9ea..692c7912f 100644 --- a/reference/ingestion-tools/fleet/data-streams-scenario3.md +++ b/reference/ingestion-tools/fleet/data-streams-scenario3.md @@ -29,7 +29,7 @@ The **Data Streams** view in {{kib}} shows you the data streams, index templates :::{image} images/data-stream-info.png :alt: Data streams info - :class: screenshot + :screenshot: ::: @@ -65,7 +65,7 @@ metrics-system.network-production@custom :::{image} images/create-component-template.png :alt: Create component template - :class: screenshot + :screenshot: ::: @@ -93,7 +93,7 @@ Please note the following: * When duplicating the index template, do not change :::{image} images/create-index-template.png :alt: Create index template -:class: screenshot +:screenshot: ::: diff --git a/reference/ingestion-tools/fleet/data-streams.md b/reference/ingestion-tools/fleet/data-streams.md index d3f73b6d8..ba3732961 100644 --- a/reference/ingestion-tools/fleet/data-streams.md +++ b/reference/ingestion-tools/fleet/data-streams.md @@ -57,7 +57,7 @@ All data streams, and the pre-built dashboards that they ship with, are viewable :::{image} images/kibana-fleet-datastreams.png :alt: Data streams page -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/reference/ingestion-tools/fleet/elastic-agent-container.md b/reference/ingestion-tools/fleet/elastic-agent-container.md index 4a01cbab9..73f28fcf0 100644 --- a/reference/ingestion-tools/fleet/elastic-agent-container.md +++ b/reference/ingestion-tools/fleet/elastic-agent-container.md @@ -199,7 +199,7 @@ You can also add `type=tmpfs` to the mount parameter (`--mount type=tmpfs,destin :::{image} images/kibana-fleet-agents.png :alt: {{agent}}s {{fleet}} page - :class: screenshot + :screenshot: ::: 3. To view data flowing in, go to **Analytics → Discover** and select the index `metrics-*`, or even more specific, `metrics-kubernetes.*`. If you can’t see these indexes, [create a data view](/explore-analyze/find-and-organize/data-views.md) for them. diff --git a/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md b/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md index ca4854a47..8f50804f8 100644 --- a/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md +++ b/reference/ingestion-tools/fleet/elastic-agent-unprivileged.md @@ -88,14 +88,14 @@ Most Elastic integrations support running {{agent}} in unprivileged mode. For th :::{image} images/integration-root-requirement.png :alt: Elastic Defend integration page showing root requirement -:class: screenshot +:screenshot: ::: As well, a warning is displayed in {{kib}} if you try to add an integration that requires root privileges to an {{agent}} policy that has agents enrolled in unprivileged mode. 
:::{image} images/unprivileged-agent-warning.png :alt: Warning indicating that root privileged agent is required for an integration -:class: screenshot +:screenshot: ::: Examples of integrations that require {{agent}} to have administrative privileges are: @@ -120,7 +120,7 @@ To view the status of an {{agent}}: :::{image} images/agent-privilege-mode.png :alt: Agent details tab showing the agent is running as non-root - :class: screenshot + :screenshot: ::: @@ -133,14 +133,14 @@ The number of agents enrolled with the policy is shown. Hover over the link to v :::{image} images/privileged-and-unprivileged-agents.png :alt: Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents -:class: screenshot +:screenshot: ::: In the event that the {{agent}} policy has integrations installed that require root privileges, but there are agents running without root privileges, this is shown in the tooltip. :::{image} images/root-integration-and-unprivileged-agents.png :alt: Agent policy tab showing 1 unprivileged agent and 0 privileged enrolled agents -:class: screenshot +:screenshot: ::: diff --git a/reference/ingestion-tools/fleet/epr-proxy-setting.md b/reference/ingestion-tools/fleet/epr-proxy-setting.md index 8731c8252..3170c0738 100644 --- a/reference/ingestion-tools/fleet/epr-proxy-setting.md +++ b/reference/ingestion-tools/fleet/epr-proxy-setting.md @@ -13,6 +13,8 @@ Also your organization might have network traffic restrictions that prevent {{ki xpack.fleet.registryProxyUrl: your-nat-gateway.corp.net ``` +If your HTTP proxy requires authentication, you can include the credentials in the URI, such as `https://username:password@your-nat-gateway.corp.net`, only when using HTTPS. + ## What information is sent to the {{package-registry}}? [_what_information_is_sent_to_the_package_registry] In production environments, {{kib}}, through the {{fleet}} plugin, is the only service interacting with the {{package-registry}}. Communication happens when interacting with the Integrations UI, and when upgrading {{kib}}. The shared information is about discovery of Elastic packages and their available versions. In general, the only deployment-specific data that is shared is the {{kib}} version. diff --git a/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md b/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md index ce8448e82..8775efc4b 100644 --- a/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md +++ b/reference/ingestion-tools/fleet/example-kubernetes-fleet-managed-agent-helm.md @@ -100,7 +100,7 @@ To get started, you need: :::{image} images/helm-example-nodes-enrollment-confirmation.png :alt: Screen capture of Add Agent UI showing that the agent has enrolled in Fleet - :class: screenshot + :screenshot: ::: 12. In {{fleet}}, open the **Agents** tab and see that an **Agent-pernode-demo-#** agent is running. 
@@ -109,7 +109,7 @@ To get started, you need: :::{image} images/helm-example-nodes-logs-and-metrics.png :alt: Screen capture of the Logs and Metrics view on the Integrations pane - :class: screenshot + :screenshot: ::: @@ -132,7 +132,7 @@ Now that you’ve {{agent}} and data is flowing, you can set up the {{k8s}} inte :::{image} images/helm-example-fleet-metrics-dashboard.png :alt: Screen capture of the Metrics Kubernetes pods dashboard - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md b/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md index 0941ef411..2e1a8c990 100644 --- a/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md +++ b/reference/ingestion-tools/fleet/example-kubernetes-standalone-agent-helm.md @@ -111,14 +111,14 @@ To get started, you need: :::{image} images/helm-example-nodes-metrics-dashboard.png :alt: Screen capture of the Metrics Kubernetes nodes dashboard - :class: screenshot + :screenshot: ::: 12. On the {{k8s}} integration page, open the **Assets** tab and select the **[Metrics Kubernetes] Pods** dashboard. As with the nodes dashboard, on this dashboard you can view the status of your {{k8s}} pods, including various metrics on memory, CPU, and network throughput. :::{image} images/helm-example-pods-metrics-dashboard.png :alt: Screen capture of the Metrics Kubernetes pods dashboard - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md index b3a67b2df..ef2e3ac90 100644 --- a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md +++ b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx-serverless.md @@ -42,7 +42,7 @@ To start, we’ll set up a basic [nginx web server](https://docs.nginx.com/nginx :::{image} images/guide-nginx-welcome.png :alt: Browser window showing Welcome to nginx! - :class: screenshot + :screenshot: ::: @@ -61,7 +61,7 @@ Now that your web server is running, let’s get set up to monitor it in {{eclou :::{image} images/guide-sign-up-trial.png :alt: Start your free Elastic Cloud trial - :class: screenshot + :screenshot: ::: 3. After you’ve [logged in](https://cloud.elastic.co/login), select **Create project**. diff --git a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md index afbc73544..c1e3cfc2e 100644 --- a/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md +++ b/reference/ingestion-tools/fleet/example-standalone-monitor-nginx.md @@ -42,7 +42,7 @@ To start, we’ll set up a basic [nginx web server](https://docs.nginx.com/nginx :::{image} images/guide-nginx-welcome.png :alt: Browser window showing Welcome to nginx! - :class: screenshot + :screenshot: ::: @@ -61,7 +61,7 @@ Now that your web server is running, let’s get set up to monitor it in {{eclou :::{image} images/guide-sign-up-trial.png :alt: Start your free Elastic Cloud trial - :class: screenshot + :screenshot: ::: 3. After you’ve [logged in](https://cloud.elastic.co/login), select **Create deployment** and give your deployment a name. You can leave the default options or select a different cloud provider, region, hardware profile, or version. 
diff --git a/reference/ingestion-tools/fleet/filesource-provider.md b/reference/ingestion-tools/fleet/filesource-provider.md
new file mode 100644
index 000000000..f0bae3128
--- /dev/null
+++ b/reference/ingestion-tools/fleet/filesource-provider.md
@@ -0,0 +1,21 @@
+# Filesource provider [filesource-provider]
+
+Watches for changes to the specified files and updates the values of the corresponding variables when the file contents change.
+
+This allows information from the filesystem to be used as variables in the {{agent}} configuration. The provider reads only the files it has been explicitly configured to read; the policy cannot read arbitrary files from disk.
+
+For example, the following configuration watches for changes to `file1`:
+
+```yaml
+providers:
+  filesource:
+    sources:
+      file1:
+        path: ./file1
+
+inputs:
+  - id: filestream
+    type: filestream
+    paths:
+      - ${filesource.file1}
+```
\ No newline at end of file
diff --git a/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md b/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md
index 593078c45..4c0623eee 100644
--- a/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md
+++ b/reference/ingestion-tools/fleet/filter-agent-list-by-tags.md
@@ -13,7 +13,7 @@ To filter the Agents list by tag, in {{kib}}, go to **{{fleet}} > Agents** and c
 
 :::{image} images/agent-tags.png
 :alt: Agents list filtered to show agents with the staging tag
-:class: screenshot
+:screenshot:
 :::
 
 If you haven’t added tags to any {{agent}}s yet, the list will be empty.
@@ -32,7 +32,7 @@ To manage tags in {{fleet}}:
 
     :::{image} images/add-remove-tags.png
    :alt: Screenshot of add / remove tags menu
-    :class: screenshot
+    :screenshot:
    :::
 
 ::::{tip}
diff --git a/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md b/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md
index 8360fbb43..d45ad2e64 100644
--- a/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md
+++ b/reference/ingestion-tools/fleet/fleet-enrollment-tokens.md
@@ -37,7 +37,7 @@ To create an enrollment token:
 
    :::{image} images/create-token.png
    :alt: Enrollment tokens tab in {fleet}
-    :class: screenshot
+    :screenshot:
    :::

3. Click **Create enrollment token**.
@@ -45,7 +45,7 @@ To create an enrollment token:

    :::{image} images/show-token.png
    :alt: Enrollment tokens tab with Show token icon highlighted
-    :class: screenshot
+    :screenshot:
    :::


@@ -70,7 +70,7 @@ To revoke an enrollment token:

    :::{image} images/revoke-token.png
    :alt: Enrollment tokens tab with Revoke token highlighted
-    :class: screenshot
+    :screenshot:
    :::

3. Click **Revoke enrollment token**. You can no longer use this token to enroll {{agent}}s. However, the currently enrolled agents will continue to function.
diff --git a/reference/ingestion-tools/fleet/fleet-roles-privileges.md b/reference/ingestion-tools/fleet/fleet-roles-privileges.md
index 628304293..43c1029cf 100644
--- a/reference/ingestion-tools/fleet/fleet-roles-privileges.md
+++ b/reference/ingestion-tools/fleet/fleet-roles-privileges.md
@@ -56,12 +56,12 @@ To create a new role with access to {{fleet}} and Integrations:
 1. To grant the role full access to use and manage {{fleet}} and integrations, set both the **Fleet** and **Integrations** privileges to `All`.
    :::{image} images/kibana-fleet-privileges-all.png
    :alt: Kibana privileges flyout showing Fleet and Integrations access set to All
-    :class: screenshot
+    :screenshot:
    :::
 2. 
Similarly, to create a read-only user for {{fleet}} and Integrations, set both the **Fleet** and **Integrations** privileges to `Read`. :::{image} images/kibana-fleet-privileges-read.png :alt: Kibana privileges flyout showing Fleet and Integrations access set to All - :class: screenshot + :screenshot: ::: Once you've created a new role you can assign it to any {{es}} user. You can edit the role at any time by returning to the **Roles** page in {{kib}}. \ No newline at end of file diff --git a/reference/ingestion-tools/fleet/fleet-server-monitoring.md b/reference/ingestion-tools/fleet/fleet-server-monitoring.md index 06d0fbebf..50cee9ea6 100644 --- a/reference/ingestion-tools/fleet/fleet-server-monitoring.md +++ b/reference/ingestion-tools/fleet/fleet-server-monitoring.md @@ -20,7 +20,7 @@ To monitor {{fleet-server}}: :::{image} images/fleet-server-agent-policy-page.png :alt: {{fleet-server}} agent policy - :class: screenshot + :screenshot: ::: 5. To confirm your change, click **Save changes**. @@ -31,7 +31,7 @@ In the following example, `fleetserver` was configured as the namespace, and you :::{image} images/datastream-namespace.png :alt: Data stream -:class: screenshot +:screenshot: ::: Go to **Analytics > Dashboard** and search for the predefined dashboard called **[Elastic Agent] Agent metrics**. Choose this dashboard, and run a query based on the `fleetserver` namespace. @@ -40,7 +40,7 @@ The following dashboard shows data for the query `data_stream.namespace: "fleets :::{image} images/dashboard-datastream01.png :alt: Dashboard Data stream -:class: screenshot +:screenshot: ::: Note that as an alternative to running the query, you can hide all metrics except `fleet_server` in the dashboard. diff --git a/reference/ingestion-tools/fleet/fleet-server-scalability.md b/reference/ingestion-tools/fleet/fleet-server-scalability.md index f57be8778..d7eb316b2 100644 --- a/reference/ingestion-tools/fleet/fleet-server-scalability.md +++ b/reference/ingestion-tools/fleet/fleet-server-scalability.md @@ -25,7 +25,7 @@ First modify your {{fleet}} deployment settings in {{ecloud}}: :::{image} images/fleet-server-hosted-container.png :alt: {{fleet-server}} hosted agent - :class: screenshot + :screenshot: ::: @@ -36,14 +36,14 @@ Next modify the {{fleet-server}} configuration by editing the agent policy: :::{image} images/elastic-cloud-agent-policy.png :alt: {{ecloud}} policy - :class: screenshot + :screenshot: ::: 3. Under {{fleet-server}}, modify **Max Connections** and other [advanced settings](#fleet-server-configuration) as described in [Scaling recommendations ({{ecloud}})](#scaling-recommendations). :::{image} images/fleet-server-configuration.png :alt: {{fleet-server}} configuration - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md b/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md index 051c16c63..3a5e23834 100644 --- a/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md +++ b/reference/ingestion-tools/fleet/grant-access-to-elasticsearch.md @@ -69,7 +69,7 @@ To create an API key for {{agent}}: :::{image} images/copy-api-key.png :alt: Message with field for copying API key - :class: screenshot + :screenshot: ::: 2. Copy the API key. You will need this for the next step, and you will not be able to view it again. 
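As a rough sketch of where the copied key typically ends up for a standalone agent (the host and key values below are illustrative placeholders, not real credentials), the {{es}} output in `elastic-agent.yml` references it along these lines:

```yaml
# elastic-agent.yml (standalone) — host and key values are placeholders
outputs:
  default:
    type: elasticsearch
    hosts: ["https://my-deployment.es.example.com:443"]
    # Paste the key from the previous step in its "id:api_key" form
    api_key: "API_KEY_ID:API_KEY_SECRET"
```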
@@ -111,7 +111,7 @@ Although it’s recommended that you use an API key instead of a username and pa :::{image} images/create-standalone-agent-role.png :alt: Create role settings for a standalone agent role - :class: screenshot + :screenshot: ::: 5. Create the role and assign it to a user. For more information about creating roles, refer to [{{kib}} role management](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). diff --git a/reference/ingestion-tools/fleet/index.md b/reference/ingestion-tools/fleet/index.md index 124ec106c..a36e976d5 100644 --- a/reference/ingestion-tools/fleet/index.md +++ b/reference/ingestion-tools/fleet/index.md @@ -92,7 +92,7 @@ Standalone mode requires you to manually configure and manage the agent locally. :::{image} images/fleet-start.png :alt: {{fleet}} app in {{kib}} -:class: screenshot +:screenshot: ::: {{fleet}} serves as the communication channel back to the {{agents}}. Agents check in for the latest updates on a regular basis. You can have any number of agents enrolled into each agent policy, which allows you to scale up to thousands of hosts. diff --git a/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md index 860d01f0d..a2acbb02f 100644 --- a/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md +++ b/reference/ingestion-tools/fleet/ingest-pipeline-kubernetes.md @@ -21,7 +21,7 @@ For {{agent}} versions >[8.10.4], the default configuration for metadata enrichm :::{image} images/add_resource_metadata.png :alt: Configure add_resource_metadata -:class: screenshot +:screenshot: ::: Example: Enabling the enrichment through `add_resource_metadata` in a Managed {{agent}} Policy. @@ -40,14 +40,14 @@ Create the following custom ingest pipeline with two processors: :::{image} images/ingest_pipeline_custom_k8s.png :alt: Custom ingest pipeline -:class: screenshot +:screenshot: ::: ### Processor for deployment [_processor_for_deployment] :::{image} images/gsub_deployment.png :alt: Gsub Processor for deployment -:class: screenshot +:screenshot: ::: @@ -55,7 +55,7 @@ Create the following custom ingest pipeline with two processors: :::{image} images/gsub_cronjob.png :alt: Gsub Processor for cronjob -:class: screenshot +:screenshot: ::: The final `metrics-kubernetes.state_pod@custom` ingest pipeline: diff --git a/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md b/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md index b9a63e105..9edeb35fe 100644 --- a/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md +++ b/reference/ingestion-tools/fleet/install-fleet-managed-elastic-agent.md @@ -67,7 +67,7 @@ To install an {{agent}} and enroll it in {{fleet}}: :::{image} images/kibana-agent-flyout.png :alt: Add agent flyout in {kib} - :class: screenshot + :screenshot: ::: @@ -84,7 +84,7 @@ To confirm that {{agent}} is installed and running, open the **Agents** tab in { :::{image} images/kibana-fleet-agents.png :alt: {{fleet}} showing enrolled agents -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/reference/ingestion-tools/fleet/managed-integrations-content.md b/reference/ingestion-tools/fleet/managed-integrations-content.md index e9c81ba2d..af1be51da 100644 --- a/reference/ingestion-tools/fleet/managed-integrations-content.md +++ b/reference/ingestion-tools/fleet/managed-integrations-content.md @@ -11,7 +11,7 @@ Most integration content installed by {{fleet}} isn't editable. 
This content is :::{image} images/system-managed.png :alt: An image of the new managed badge. -:class: screenshot +:screenshot: ::: When a managed dashboard is cloned, any linked or referenced panels become part of the clone without relying on external sources. The panels are integrated into the cloned dashboard as stand alone components. For example, with a cloned dashboard, the cloned panels become entirely self-contained copies without any dependencies on the original configuration. Clones can be customized and modified without accidentally affecting the original. diff --git a/reference/ingestion-tools/fleet/migrate-elastic-agent.md b/reference/ingestion-tools/fleet/migrate-elastic-agent.md index c06f749c7..ab146f145 100644 --- a/reference/ingestion-tools/fleet/migrate-elastic-agent.md +++ b/reference/ingestion-tools/fleet/migrate-elastic-agent.md @@ -24,7 +24,7 @@ Refer to the full [Snapshot and restore](/deploy-manage/tools/snapshot-and-resto :::{image} images/migrate-agent-take-snapshot.png :alt: Deployments Snapshots page - :class: screenshot + :screenshot: ::: @@ -41,7 +41,7 @@ You can create a new cluster based on the snapshot taken in the previous step, a :::{image} images/migrate-agent-new-deployment.png :alt: Create a deployment page - :class: screenshot + :screenshot: ::: @@ -55,7 +55,7 @@ when the target cluster is available you’ll need to adjust a few settings. Tak :::{image} images/migrate-agent-agents-offline.png :alt: Agents tab in Fleet showing offline agents - :class: screenshot + :screenshot: ::: 3. Open the {{fleet}} **Settings** tab. @@ -65,7 +65,7 @@ when the target cluster is available you’ll need to adjust a few settings. Tak :::{image} images/migrate-agent-host-output-settings.png :alt: Settings tab in Fleet showing source deployment host and output settings - :class: screenshot + :screenshot: ::: The next steps explain how to obtain the relevant {{fleet-server}} host and {{es}} output details applicable to the new target cluster in {{ecloud}}. @@ -88,7 +88,7 @@ when the target cluster is available you’ll need to adjust a few settings. Tak :::{image} images/migrate-agent-elasticsearch-output.png :alt: Outputs section showing the new Elasticsearch host setting - :class: screenshot + :screenshot: ::: In this example, the `New Elasticsearch` output and the `Elastic Cloud internal output` now have the same cluster ID, namely `fcccb85b651e452aa28703a59aea9b00`. @@ -114,7 +114,7 @@ The easiest way to find the `deployment-id` is from the deployment URL: :::{image} images/migrate-agent-deployment-id.png :alt: Deployment management page - :class: screenshot + :screenshot: ::: In this example, the new deployment ID is `eed4ae8e2b604fae8f8d515479a16b7b`. @@ -127,7 +127,7 @@ The easiest way to find the `deployment-id` is from the deployment URL: :::{image} images/migrate-agent-fleet-server-host.png :alt: Fleet server hosts showing the new host URL - :class: screenshot + :screenshot: ::: @@ -188,7 +188,7 @@ Now that the {{fleet}} settings are correctly set up, it pays to ensure that the :::{image} images/migrate-agent-policy-settings.png :alt: An agent policy's settings showing the newly created entities - :class: screenshot + :screenshot: ::: @@ -218,7 +218,7 @@ This is best performed one policy at a time. For a given policy, you need to cap :::{image} images/migrate-agent-install-command.png :alt: Install command from the Add Agent UI - :class: screenshot + :screenshot: ::: 5. 
On the host machines where the current agents are installed, enroll the agents again using this copied URL and the enrollment token: @@ -231,14 +231,14 @@ This is best performed one policy at a time. For a given policy, you need to cap :::{image} images/migrate-agent-install-command-output.png :alt: Install command output - :class: screenshot + :screenshot: ::: 6. The agent on each host will now check into the new {{fleet-server}} and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins. :::{image} images/migrate-agent-newly-enrolled-agents.png :alt: Newly enrolled agents in the target cluster - :class: screenshot + :screenshot: ::: 7. Repeat this procedure for each {{agent}} policy. diff --git a/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md b/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md index 0fac2008d..21a2d2bb1 100644 --- a/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md +++ b/reference/ingestion-tools/fleet/migrate-from-beats-to-elastic-agent.md @@ -86,21 +86,21 @@ After deploying an {{agent}} to a host, view details about the agent and inspect :::{image} images/migration-agent-status-healthy01.png :alt: Screen showing that agent status is Healthy - :class: screenshot + :screenshot: ::: 2. Click the host name to examine the {{agent}} details. This page shows the integrations that are currently installed, the policy the agent is enrolled in, and information about the host machine: :::{image} images/migration-agent-details01.png :alt: Screen showing that agent status is Healthy - :class: screenshot + :screenshot: ::: 3. Go back to the main {{fleet}} page and click the **Data streams** tab. You should be able to see the data streams for various logs and metrics from the host. This is out-of-the-box without any extra configuration or dashboard creation: :::{image} images/migration-agent-data-streams01.png :alt: Screen showing data streams created by the {agent} - :class: screenshot + :screenshot: ::: 4. Go to **Analytics > Discover** and examine the data streams. Note that documents indexed by {{agent}} match these patterns: @@ -114,14 +114,14 @@ After deploying an {{agent}} to a host, view details about the agent and inspect :::{image} images/migration-event-from-filebeat.png :alt: Screen showing event from {filebeat} - :class: screenshot + :screenshot: ::: Next, filter on `logs-*`. Notice that the document contains `data_stream.*` fields that come from logs ingested by the {{agent}}. :::{image} images/migration-event-from-agent.png :alt: Screen showing event from {agent} - :class: screenshot + :screenshot: ::: ::::{note} @@ -140,14 +140,14 @@ For example, if the agent policy you created earlier includes the System integra :::{image} images/migration-add-nginx-integration.png :alt: Screen showing the Nginx integration - :class: screenshot + :screenshot: ::: 2. Configure the integration, then apply it to the agent policy you used earlier. Make sure you expand collapsed sections to see all the settings like log paths. :::{image} images/migration-add-integration-policy.png :alt: Screen showing Nginx configuration - :class: screenshot + :screenshot: ::: When you save and deploy your changes, the agent policy is updated to include a new integration policy for Nginx. All {{agent}}s enrolled in the agent policy get the updated policy, and the {{agent}} running on your host will begin collecting Nginx data. 
@@ -184,7 +184,7 @@ To add processors to an integration policy: :::{image} images/migration-add-processor.png :alt: Screen showing how to add a processor to an integration policy - :class: screenshot + :screenshot: ::: For example, the following processor adds geographically specific metadata to host events: @@ -216,7 +216,7 @@ If you must preserve the raw event, edit the integration policy, and for each en :::{image} images/migration-preserve-raw-event.png :alt: Screen showing how to add a processor to an integration policy -:class: screenshot +:screenshot: ::: Do this for every data stream with a raw event you want to preserve. @@ -294,7 +294,7 @@ For more information, see the [Aliases documentation](/manage-data/data-store/al :::{image} images/migration-index-lifecycle-policies.png :alt: Screen showing how to add a processor to an integration policy -:class: screenshot +:screenshot: ::: If you used {{ilm}} with {{beats}}, you’ll see index lifecycle policies like **filebeat** and **metricbeat** in the list. After migrating to {{agent}}, you’ll see polices named **logs** and **metrics**, which encapsulate the {{ilm}} policies for all `logs-*` and `metrics-*` index templates. diff --git a/reference/ingestion-tools/fleet/monitor-elastic-agent.md b/reference/ingestion-tools/fleet/monitor-elastic-agent.md index 31b76775f..9c1ed439d 100644 --- a/reference/ingestion-tools/fleet/monitor-elastic-agent.md +++ b/reference/ingestion-tools/fleet/monitor-elastic-agent.md @@ -30,7 +30,7 @@ To view the overall status of your {{fleet}}-managed agents, in {{kib}}, go to * :::{image} images/kibana-fleet-agents.png :alt: Agents tab showing status of each {agent} -:class: screenshot +:screenshot: ::: ::::{important} @@ -59,7 +59,7 @@ To filter the list of agents by status, click the **Status** dropdown and select :::{image} images/agent-status-filter.png :alt: Agent Status dropdown with multiple statuses selected -:class: screenshot +:screenshot: ::: For advanced filtering, use the search bar to create structured queries using [{{kib}} Query Language](elasticsearch://reference/query-languages/kql.md). For example, enter `local_metadata.os.family : "darwin"` to see only agents running on macOS. @@ -104,7 +104,7 @@ On the **Agents** tab, click **Agent activity**. All agent operations are shown, :::{image} images/agent-activity.png :alt: Agent activity panel -:class: screenshot +:screenshot: ::: @@ -119,7 +119,7 @@ When {{fleet}} reports an agent status like `Offline` or `Unhealthy`, you might :::{image} images/view-agent-logs.png :alt: View agent logs under agent details - :class: screenshot + :screenshot: ::: @@ -130,7 +130,7 @@ On the **Logs** tab you can filter, search, and explore the agent logs: :::{image} images/kibana-fleet-datasets.png :alt: {{fleet}} showing datasets for logging - :class: screenshot + :screenshot: ::: * Change the log level to filter the view by log levels. Want to see debugging logs? Refer to [Change the logging level](#change-logging-level). @@ -146,7 +146,7 @@ The logging level for monitored agents is set to `info` by default. You can chan :::{image} images/agent-set-logging-level.png :alt: Logs tab showing the agent logging level setting - :class: screenshot + :screenshot: ::: 2. Select an **Agent logging level**: @@ -171,14 +171,14 @@ The logging level for monitored agents is set to `info` by default. You can chan :::{image} images/collect-agent-diagnostics1.png :alt: Collect agent diagnostics under agent details - :class: screenshot + :screenshot: ::: 4. 
In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you’d like detailed CPU data. :::{image} images/collect-agent-diagnostics2.png :alt: Collect agent diagnostics confirmation pop-up - :class: screenshot + :screenshot: ::: 5. Click the **Request diagnostics** button. @@ -201,7 +201,7 @@ To view agent metrics: :::{image} images/selected-agent-metrics-dashboard.png :alt: Screen capture showing {{agent}} metrics - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md index 5947cbf51..151365032 100644 --- a/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md +++ b/reference/ingestion-tools/fleet/running-on-kubernetes-managed-by-fleet.md @@ -205,7 +205,7 @@ If you’d like to run {{agent}} on Kubernetes on a read-only file system, you c :::{image} images/kibana-fleet-agents.png :alt: {{agent}}s {{fleet}} page - :class: screenshot + :screenshot: ::: 3. To view data flowing in, go to **Analytics → Discover** and select the index `metrics-*`, or even more specific, `metrics-kubernetes.*`. If you can’t see these indexes, [create a data view](/explore-analyze/find-and-organize/data-views.md) for them. diff --git a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md index 8eb64ee71..864bf27b7 100644 --- a/reference/ingestion-tools/fleet/scaling-on-kubernetes.md +++ b/reference/ingestion-tools/fleet/scaling-on-kubernetes.md @@ -54,7 +54,7 @@ Additionally, by default one agent is elected as **leader** (for more informatio :::{image} images/k8sscaling.png :alt: {{agent}} as daemonset -:class: screenshot +:screenshot: ::: The above schema explains how {{agent}} collects and sends metrics to {{es}}. Because of Leader Agent being responsible to also collecting cluster-lever metrics, this means that it requires additional resources. @@ -189,7 +189,7 @@ If {{agent}} is configured as managed, in {{kib}} you can observe under **Fleet> :::{image} images/kibana-fleet-agents.png :alt: {{agent}} Status -:class: screenshot +:screenshot: ::: Additionally you can verify the process status with following commands: @@ -260,14 +260,14 @@ Filter for Pod dataset: :::{image} images/pod-latency.png :alt: {{k8s}} Pod Metricset -:class: screenshot +:screenshot: ::: Filter for State_Pod dataset :::{image} images/state-pod.png :alt: {{k8s}} State Pod Metricset -:class: screenshot +:screenshot: ::: Identify how many events have been sent to {{es}}: diff --git a/reference/ingestion-tools/fleet/secure-logstash-connections.md b/reference/ingestion-tools/fleet/secure-logstash-connections.md index 72129488e..e4808a100 100644 --- a/reference/ingestion-tools/fleet/secure-logstash-connections.md +++ b/reference/ingestion-tools/fleet/secure-logstash-connections.md @@ -181,7 +181,7 @@ This section describes how to add a {{ls}} output and configure SSL settings in :::{image} images/add-logstash-output.png :alt: Screen capture of a folder called `logstash` that contains two files: logstash.crt and logstash.key -:class: screenshot +:screenshot: ::: When you’re done, save and apply the settings. @@ -200,7 +200,7 @@ When you’re done, save and apply the settings. :::{image} images/agent-output-settings.png :alt: Screen capture showing the {{ls}} output policy selected in an agent policy - :class: screenshot + :screenshot: ::: 3. Save your changes. 
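For comparison, when a standalone agent (rather than a {{fleet}}-managed one) needs the same {{ls}} output, the certificate and key material shown in the UI above map onto an output section roughly like the following sketch; the host and file paths are illustrative placeholders, and exact option support can vary by version:

```yaml
# elastic-agent.yml (standalone) — host and paths are placeholders
outputs:
  default:
    type: logstash
    hosts: ["my-logstash-host:5044"]
    ssl.certificate_authorities: ["/path/to/ca.crt"]  # CA that signed the Logstash server certificate
    ssl.certificate: "/path/to/client.crt"            # client certificate presented to Logstash
    ssl.key: "/path/to/client.key"                    # private key for the client certificate
```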
diff --git a/reference/ingestion-tools/fleet/upgrade-elastic-agent.md b/reference/ingestion-tools/fleet/upgrade-elastic-agent.md index 518a9e1d6..42d824045 100644 --- a/reference/ingestion-tools/fleet/upgrade-elastic-agent.md +++ b/reference/ingestion-tools/fleet/upgrade-elastic-agent.md @@ -59,7 +59,7 @@ To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}} :::{image} images/upgrade-available-indicator.png :alt: Indicator on the UI showing that the agent can be upgraded - :class: screenshot + :screenshot: ::: You can also click the **Upgrade available** button to filter the list agents to only those that currently can be upgraded. @@ -68,7 +68,7 @@ To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}} :::{image} images/upgrade-single-agent.png :alt: Menu for upgrading a single {agent} - :class: screenshot + :screenshot: ::: 3. In the Upgrade agent window, select or specify an upgrade version and click **Upgrade agent**. @@ -77,7 +77,7 @@ To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}} :::{image} images/upgrade-agent-custom.png :alt: Menu for upgrading a single {agent} - :class: screenshot + :screenshot: ::: @@ -103,7 +103,7 @@ You can do rolling upgrades to avoid exhausting network resources when updating :::{image} images/schedule-upgrade.png :alt: Menu for scheduling {{agent}} upgrades - :class: screenshot + :screenshot: ::: If the schedule option is grayed out, it may not be available at your subscription level. For more information, refer to [{{stack}} subscriptions](https://www.elastic.co/subscriptions). @@ -122,7 +122,7 @@ Agents on version 8.12 and higher that are currently upgrading additionally show :::{image} images/upgrade-states.png :alt: Detailed state of an upgrading agent -:class: screenshot +:screenshot: ::: The following table explains the upgrade states in the order that they can occur. @@ -145,19 +145,19 @@ Beside the upgrade status indicator, you can hover your cursor over the informat :::{image} images/upgrade-detailed-state01.png :alt: Granular upgrade details shown as hover text (agent has requested an upgrade) -:class: screenshot +:screenshot: ::: :::{image} images/upgrade-detailed-state02.png :alt: Granular upgrade details shown as hover text (agent is restarting to apply the update) -:class: screenshot +:screenshot: ::: Note that when you upgrade agents from versions below 8.12, the upgrade details are not provided. :::{image} images/upgrade-non-detailed.png :alt: An earlier release agent showing only the updating state without additional details -:class: screenshot +:screenshot: ::: When upgrading many agents, you can fine tune the maintenance window by viewing stats and metrics about the upgrade: @@ -175,7 +175,7 @@ If an upgrade fails, you can view the agent logs to find the reason: :::{image} images/upgrade-failure.png :alt: Agent logs showing upgrade failure - :class: screenshot + :screenshot: ::: diff --git a/reference/ingestion-tools/fleet/upgrade-integration.md b/reference/ingestion-tools/fleet/upgrade-integration.md index ded7e2fd0..37fc5b8ff 100644 --- a/reference/ingestion-tools/fleet/upgrade-integration.md +++ b/reference/ingestion-tools/fleet/upgrade-integration.md @@ -27,7 +27,7 @@ In larger deployments, you should test integration upgrades on a sample {{agent} :::{image} images/upgrade-integration.png :alt: Settings tab under Integrations shows how to upgrade the integration - :class: screenshot + :screenshot: ::: 3. 
Before upgrading the integration, decide whether to upgrade integration policies to the latest version, too. To use new features and capabilities, you’ll need to upgrade existing integration policies. However, the upgrade may introduce changes, such as field changes, that require you to resolve conflicts. @@ -74,7 +74,7 @@ To keep integration policies up to data automatically: :::{image} images/upgrade-integration-policies-automatically.png :alt: Settings tab under Integrations shows how to keep integration policies up to date automatically - :class: screenshot + :screenshot: ::: If this option isn’t available on the **Settings** tab, this feature is not available for the integration you’re viewing. @@ -89,7 +89,7 @@ If you can’t upgrade integration policies when you upgrade the integration, up :::{image} images/upgrade-package-policy.png :alt: Policies tab under Integrations shows how to upgrade the package policy - :class: screenshot + :screenshot: ::: 2. Click **Upgrade** to begin the upgrade process. @@ -98,7 +98,7 @@ If you can’t upgrade integration policies when you upgrade the integration, up :::{image} images/upgrade-policy-editor.png :alt: Upgrade integration example in the policy editor - :class: screenshot + :screenshot: ::: 3. Make any required configuration changes and, if necessary, resolve conflicts. For more information, refer to [Resolve conflicts](#resolve-conflicts). @@ -118,14 +118,14 @@ If {{fleet}} detects a conflict while automatically upgrading an integration pol :::{image} images/upgrade-resolve-conflicts.png :alt: Resolve field conflicts in the policy editor - :class: screenshot + :screenshot: ::: 1. Under **Review field conflicts**, notice that you can click **previous configuration** to view the raw JSON representation of the old integration policy and compare values. This feature is useful when fields have been deprecated or removed between releases. :::{image} images/upgrade-view-previous-config.png :alt: View previous configuration to resolve conflicts - :class: screenshot + :screenshot: ::: 2. In the policy editor, fix any errors and click **Upgrade integration**. diff --git a/reference/ingestion-tools/observability/apm.md b/reference/ingestion-tools/observability/apm.md index 8353c6290..6199aeafd 100644 --- a/reference/ingestion-tools/observability/apm.md +++ b/reference/ingestion-tools/observability/apm.md @@ -9,7 +9,7 @@ Elastic APM is an application performance monitoring system built on the {{stack :::{image} ../../../images/observability-apm-app-landing.png :alt: Applications UI in {kib} -:class: screenshot +:screenshot: ::: Elastic APM also automatically collects unhandled errors and exceptions. Errors are grouped based primarily on the stack trace, so you can identify new errors as they appear and keep an eye on how many times specific errors happen. diff --git a/reference/security/endpoint-command-reference.md b/reference/security/endpoint-command-reference.md new file mode 100644 index 000000000..47a600333 --- /dev/null +++ b/reference/security/endpoint-command-reference.md @@ -0,0 +1,332 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/endpoint-command-ref.html + - https://www.elastic.co/guide/en/serverless/current/security-endpoint-command-ref.html +--- + +# Endpoint command reference [endpoint-command-ref] + +This page lists the commands for managing and troubleshooting {{elastic-endpoint}}, the installed component that performs {{elastic-defend}}'s threat monitoring and prevention. 
+ +::::{note} +* {{elastic-endpoint}} is not added to the `PATH` system variable, so you must prepend the commands with the full OS-dependent path: + + * On Windows: `"C:\Program Files\Elastic\Endpoint\elastic-endpoint.exe"` + * On macOS: `/Library/Elastic/Endpoint/elastic-endpoint` + * On Linux: `/opt/Elastic/Endpoint/elastic-endpoint` + +* You must run the commands with elevated privileges—using `sudo` to run as the root user on Linux and macOS, or running as Administrator on Windows. + +:::: + + +The following {{elastic-endpoint}} commands are available: + +* [diagnostics](#elastic-endpoint-diagnostics-command) +* [help](#elastic-endpoint-help-command) +* [inspect](#elastic-endpoint-inspect-command) +* [install](#elastic-endpoint-install-command) +* [memorydump](#elastic-endpoint-memorydump-command) +* [run](#elastic-endpoint-run-command) +* [send](#elastic-endpoint-send-command) +* [status](#elastic-endpoint-status-command) +* [test](#elastic-endpoint-test-command) +* [top](#elastic-endpoint-top-command) +* [uninstall](#elastic-endpoint-uninstall-command) +* [version](#elastic-endpoint-version-command) + +Each of the commands accepts the following logging options: + +* `--log [stdout,stderr,debugview,file]` +* `--log-level [error,info,debug]` + + +## elastic-endpoint diagnostics [elastic-endpoint-diagnostics-command] + +Gather diagnostics information from {{elastic-endpoint}}. This command produces an archive that contains: + +* `version.txt`: Version information +* `elastic-endpoint.yaml`: Current policy +* `metrics.json`: Metrics document +* `policy_response.json`: Last policy response +* `system_info.txt`: System information +* `analysis.txt`: Diagnostic analysis report +* `logs` directory: Copy of {{elastic-endpoint}} log files + + +### Example [_example] + +```shell +elastic-endpoint diagnostics +``` + + +## elastic-endpoint help [elastic-endpoint-help-command] + +Show help for the available commands. + + +### Example [_example_2] + +```shell +elastic-endpoint help +``` + + +## elastic-endpoint inspect [elastic-endpoint-inspect-command] + +Show the current {{elastic-endpoint}} configuration. + + +### Example [_example_3] + +```shell +elastic-endpoint inspect +``` + + +## elastic-endpoint install [elastic-endpoint-install-command] + +Install {{elastic-endpoint}} as a system service. + +::::{note} +We do not recommend installing {{elastic-endpoint}} using this command. {{elastic-endpoint}} is managed by {{agent}} and cannot function as a standalone service. Therefore, there is no separate installation package for {{elastic-endpoint}}, and it should not be installed independently. +:::: + + + +### Options [_options] + +`--resources ` +: Specify a resources `.zip` file to be used during the installation. This option is required. + +`--upgrade` +: Upgrade the existing installation. + + +### Example [_example_4] + +```shell +elastic-endpoint install --upgrade --resources endpoint-security-resources.zip +``` + + +## elastic-endpoint memorydump [elastic-endpoint-memorydump-command] + +Save a memory dump of the {{elastic-endpoint}} service. + + +### Options [_options_2] + +`--compress` +: Compress the saved memory dump. + +`--timeout ` +: Specify the memory collection timeout, in seconds; the default is 60 seconds. + + +### Example [_example_5] + +```shell +elastic-endpoint memorydump --timeout 120 +``` + + +## elastic-endpoint run [elastic-endpoint-run-command] + +Run `elastic-endpoint` as a foreground process if no other instance is already running. 
+ + +### Example [_example_6] + +```shell +elastic-endpoint run +``` + + +## elastic-endpoint send [elastic-endpoint-send-command] + +Send the requested document to the {{stack}}. + + +### Subcommands [_subcommands] + +`metadata` +: Send an off-schedule metrics document to the {{stack}}. + + +### Example [_example_7] + +```shell +elastic-endpoint send metadata +``` + + +## elastic-endpoint status [elastic-endpoint-status-command] + +Retrieve the current status of the running {{elastic-endpoint}} service. The command also returns the last known status of {{agent}}. + + +### Options [_options_3] + +`--output` +: Control the level of detail and formatting of the information. Valid values are: + + * `human`: Returns limited information when {{elastic-endpoint}}'s status is `Healthy`. If any policy actions weren’t successfully applied, the relevant details are displayed. + * `full`: Always returns the full status information. + * `json`: Always returns the full status information. + + + +### Example [_example_8] + +```shell +elastic-endpoint status --output json +``` + + +## elastic-endpoint test [elastic-endpoint-test-command] + +Perform the requested test. + + +### Subcommands [_subcommands_2] + +`output` +: Test whether {{elastic-endpoint}} can connect to remote resources. + + +### Example [_example_9] + +```shell +elastic-endpoint test output +``` + + +### Example output [_example_output] + +```txt +Testing output connections + +Using proxy: + +Elasticsearch server: https://example.elastic.co:443 + Status: Success + +Global artifact server: https://artifacts.security.elastic.co + Status: Success + +Fleet server: https://fleet.example.elastic.co:443 + Status: Success +``` + + +## elastic-endpoint top [elastic-endpoint-top-command] + +Show a breakdown of the executables that triggered {{elastic-endpoint}} CPU usage within the last interval. This displays which {{elastic-endpoint}} features are resource-intensive for a particular executable. + +::::{note} +The meaning and output of this command are similar, but not identical, to the POSIX `top` command. The `elastic-endpoint top` command aggregates multiple processes by executable. The utilization values aren’t measured by the OS scheduler but by a wall clock in user mode. The output helps identify outliers causing excessive CPU utilization, allowing you to fine-tune the {{elastic-defend}} policy and exception lists in your deployment. +:::: + + + +### Options [_options_4] + +`--interval ` +: Specify the data collection interval, in seconds; the default is 5 seconds. + +`--limit ` +: Specify the number of updates to collect; by default, data is collected until interrupted by **Ctrl+C**. + +`--normalized` +: Normalize CPU usage values to a total of 100% across all CPUs on multi-CPU systems. + + +### Example [_example_10] + +```shell +elastic-endpoint top --interval 10 --limit 5 +``` + + +### Example output [_example_output_2] + +```txt +| PROCESS | OVERALL | API | BHVR | DIAG BHVR | DNS | FILE | LIB | MEM SCAN | MLWR | NET | PROC | RANSOM | REG | +============================================================================================================================================================= +| MSBuild.exe | 3146.0 | 0.0 | 0.8 | 0.7 | 0.0 | 2330.9 | 0.0 | 226.2 | 586.9 | 0.0 | 0.0 | 0.4 | 0.0 | +| Microsoft.Management.Services.IntuneWindowsAgen... 
| 30.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 | 29.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| svchost.exe | 27.3 | 0.0 | 0.1 | 0.1 | 0.0 | 0.4 | 0.2 | 0.0 | 26.6 | 0.0 | 0.0 | 0.0 | 0.0 | +| LenovoVantage-(LenovoServiceBridgeAddin).exe | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| Lenovo.Modern.ImController.PluginHost.Device.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| msedgewebview2.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| msedge.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| powershell.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| WmiPrvSE.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| Lenovo.Modern.ImController.PluginHost.Device.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| Slack.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| uhssvc.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| explorer.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| taskhostw.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| Widgets.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| elastic-endpoint.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | +| sppsvc.exe | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | + +Endpoint service (16 CPU): 113.0% out of 1600% + +Collecting data. Press Ctrl-C to cancel +``` + + +#### Column abbreviations [_column_abbreviations] + +* `API`: Event Tracing for Windows (ETW) API events +* `AUTH`: Authentication events +* `BHVR`: Malicious behavior protection +* `CRED`: Credential access events +* `DIAG BHVR`: Diagnostic malicious behavior protection +* `DNS`: DNS events +* `FILE`: File events +* `LIB`: Library load events +* `MEM SCAN`: Memory scanning +* `MLWR`: Malware protection +* `NET`: Network events +* `PROC`: Process events +* `PROC INJ`: Process injection +* `RANSOM`: Ransomware protection +* `REG`: Registry events + + +## elastic-endpoint uninstall [elastic-endpoint-uninstall-command] + +Uninstall {{elastic-endpoint}}. + +::::{note} +{{elastic-endpoint}} is managed by {{agent}}. To remove {{elastic-endpoint}} from the target machine permanently, remove the {{elastic-defend}} integration from the {{fleet}} policy. The [elastic-agent uninstall](../../solutions/security/configure-elastic-defend/uninstall-elastic-agent.md) command also uninstalls {{elastic-endpoint}}; therefore, in practice, the `elastic-endpoint uninstall` command is used only to troubleshoot broken installations. +:::: + + +### Options [_options_5] + +`--uninstall-token ` +: Provide the uninstall token. The token is required if [agent tamper protection](../../solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation.md) is enabled. + + +### Example [_example_11] + +```shell +elastic-endpoint uninstall --uninstall-token 12345678901234567890123456789012 +``` + + +## elastic-endpoint version [elastic-endpoint-version-command] + +Show the version of {{elastic-endpoint}}. 
+ + +### Example [_example_12] + +```shell +elastic-endpoint version +``` \ No newline at end of file diff --git a/reference/security/fields-and-object-schemas/timeline-object-schema.md b/reference/security/fields-and-object-schemas/timeline-object-schema.md index 233e51566..b2f3fb64c 100644 --- a/reference/security/fields-and-object-schemas/timeline-object-schema.md +++ b/reference/security/fields-and-object-schemas/timeline-object-schema.md @@ -21,7 +21,7 @@ This screenshot maps the Timeline UI components to their JSON objects: :::{image} ../../../images/security-timeline-object-ui.png :alt: timeline object ui -:class: screenshot +:screenshot: ::: 1. [Title](#timeline-object-title) (`title`) diff --git a/reference/security/images/link.svg b/reference/security/images/link.svg new file mode 100644 index 000000000..310607d54 --- /dev/null +++ b/reference/security/images/link.svg @@ -0,0 +1,3 @@ + + + diff --git a/reference/security/index.md b/reference/security/index.md index 965a63c8a..438e51703 100644 --- a/reference/security/index.md +++ b/reference/security/index.md @@ -6,5 +6,7 @@ This section of the documentation contains reference information for [{{elastic- * Downloadable rule updates * Prebuilt jobs * Fields and object schemas +* Endpoint command reference +* Prebuilt anomaly detection jobs You can use [APIs](/solutions/security/apis.md) to interface with {{elastic-sec}} features. diff --git a/reference/security/prebuilt-anomaly-detection-jobs.md b/reference/security/prebuilt-anomaly-detection-jobs.md new file mode 100644 index 000000000..07fec7faa --- /dev/null +++ b/reference/security/prebuilt-anomaly-detection-jobs.md @@ -0,0 +1,216 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/security/current/prebuilt-ml-jobs.html +--- + +# Prebuilt anomaly detection jobs [prebuilt-ml-jobs] + +These {{anomaly-jobs}} automatically detect file system and network anomalies on your hosts. They appear in the **Anomaly Detection** interface of the {{security-app}} in {{kib}} when you have data that matches their configuration. For more information, refer to [Anomaly detection with machine learning](../../solutions/security/advanced-entity-analytics/anomaly-detection.md). + + +## Security: Authentication [security-authentication] + +Detect anomalous activity in your ECS-compatible authentication logs. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +By default, when you create these job in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/manifest.json#L7) then select it in the job wizard. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| auth_high_count_logon_events | Looks for an unusually large spike in successful authentication events. This can be due to password spraying, user enumeration, or brute force activity. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events.json) | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events.json)| +| auth_high_count_logon_events_for_a_source_ip | Looks for an unusually large spike in successful authentication events from a particular source IP address. This can be due to password spraying, user enumeration or brute force activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_events_for_a_source_ip.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_events_for_a_source_ip.json)| +| auth_high_count_logon_fails | Looks for an unusually large spike in authentication failure events. This can be due to password spraying, user enumeration, or brute force activity and may be a precursor to account takeover or credentialed access. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_high_count_logon_fails.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_high_count_logon_fails.json)| +| auth_rare_hour_for_a_user | Looks for a user logging in at a time of day that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different time zones. In addition, unauthorized user activity often takes place during non-business hours. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_hour_for_a_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_hour_for_a_user.json)| +| auth_rare_source_ip_for_a_user | Looks for a user logging in from an IP address that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different locations. An unusual source IP address for a username could also be due to lateral movement when a compromised account is used to pivot between hosts. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_source_ip_for_a_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_source_ip_for_a_user.json)| +| auth_rare_user | Looks for an unusual user name in the authentication logs. An unusual user name is one way of detecting credentialed access by means of a new or dormant user account. A user account that is normally inactive, because the user has left the organization, which becomes active, may be due to credentialed access using a compromised account password. 
Threat actors will sometimes also create new users as a means of persisting in a compromised web application. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/auth_rare_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_auth_rare_user.json)| +| suspicious_login_activity | Detect unusually high number of authentication attempts. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/datafeed_suspicious_login_activity.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_auth/ml/suspicious_login_activity.json)| + + +## Security: CloudTrail [security-cloudtrail-jobs] + +Detect suspicious activity recorded in your CloudTrail logs. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_cloudtrail/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| high_distinct_count_error_message | Looks for a spike in the rate of an error message which may simply indicate an impending service failure but these can also be byproducts of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/high_distinct_count_error_message.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_high_distinct_count_error_message.json)| +| rare_error_code | Looks for unusual errors. Rare and unusual errors may simply indicate an impending service failure but they can also be byproducts of attempted or successful persistence, privilege escalation, defense evasion, discovery, lateral movement, or collection activity by a threat actor. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_error_code.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_error_code.json)| +| rare_method_for_a_city | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a geolocation (city) that is unusual. This can be the result of compromised credentials or keys. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_city.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_city.json)| +| rare_method_for_a_country | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a geolocation (country) that is unusual. This can be the result of compromised credentials or keys. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_country.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_country.json)| +| rare_method_for_a_username | Looks for AWS API calls that, while not inherently suspicious or abnormal, are sourcing from a user context that does not normally call the method. This can be the result of compromised credentials or keys as someone uses a valid account to persist, move laterally, or exfil data. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/rare_method_for_a_username.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_cloudtrail/ml/datafeed_rare_method_for_a_username.json)| + + +## Security: Host [security-host-jobs] + +Anomaly detection jobs for host-based threat hunting and detection. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +To access the host traffic anomalies dashboard in Kibana, go to: `Security -> Dashboards -> Host Traffic Anomalies`. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| high_count_events_for_a_host_name | Looks for a sudden spike in host based traffic. This can be due to a range of security issues, such as a compromised system, DDoS attacks, malware infections, privilege escalation, or data exfiltration. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/high_count_events_for_a_host_name.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_high_count_events_for_a_host_name.json)| +| low_count_events_for_a_host_name | Looks for a sudden drop in host based traffic. This can be due to a range of security issues, such as a compromised system, a failed service, or a network misconfiguration. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/low_count_events_for_a_host_name.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/platform/plugins/shared/ml/server/models/data_recognizer/modules/security_host/ml/datafeed_low_count_events_for_a_host_name.json)| + + +## Security: Linux [security-linux-jobs] + +Anomaly detection jobs for Linux host-based threat hunting and detection. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| v3_linux_anomalous_network_activity | Looks for unusual processes using the network which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_activity.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_activity.json)| +| v3_linux_anomalous_network_port_activity | Looks for unusual destination port activity that could indicate command-and-control, persistence mechanism, or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_network_port_activity.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_network_port_activity.json)| +| v3_linux_anomalous_process_all_hosts | Looks for processes that are unusual to all Linux hosts. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_process_all_hosts.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_process_all_hosts.json)| +| v3_linux_anomalous_user_name | Rare and unusual users that are not normally active may indicate unauthorized changes or activity by an unauthorized user which may be credentialed access or lateral movement. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_anomalous_user_name.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_anomalous_user_name.json)| +| v3_linux_network_configuration_discovery | Looks for commands related to system network configuration discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network configuration discovery to increase their understanding of connected networks and hosts. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_configuration_discovery.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_datafeed_linux_network_configuration_discovery.json)| +| v3_linux_network_connection_discovery | Looks for commands related to system network connection discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used by a threat actor to engage in system network connection discovery to increase their understanding of connected services and systems. This information may be used to shape follow-up behaviors such as lateral movement or additional discovery. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_network_connection_discovery.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_datafeed_linux_network_connection_discovery.json)| +| v3_linux_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_process.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_process.json)| +| v3_linux_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_metadata_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_metadata_user.json)| +| v3_linux_rare_sudo_user | Looks for sudo activity from an unusual user context. 
Unusual user context changes can be due to privilege escalation. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_sudo_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/securiity_linux/ml/datafeed_v3_linux_rare_sudo_user.json)| +| v3_linux_rare_user_compiler | Looks for compiler activity by a user context which does not normally run compilers. This can be ad-hoc software changes or unauthorized software deployment. This can also be due to local privilege elevation via locally run exploits or malware activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_rare_user_compiler.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_rare_user_compiler.json)| +| v3_linux_system_information_discovery | Looks for commands related to system information discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system information discovery to gather detailed information about system configuration and software versions. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_information_discovery.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_information_discovery.json)| +| v3_linux_system_process_discovery | Looks for commands related to system process discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system process discovery to increase their understanding of software applications running on a target host or network. This may be a precursor to the selection of a persistence mechanism or a method of privilege elevation. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_process_discovery.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_process_discovery.json)| +| v3_linux_system_user_discovery | Looks for commands related to system user or owner discovery from an unusual user context. This can be due to uncommon troubleshooting activity or due to a compromised account. A compromised account may be used to engage in system owner or user discovery to identify currently active or primary users of a system. This may be a precursor to additional discovery, credential dumping, or privilege elevation activity. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_linux_system_user_discovery.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_linux_system_user_discovery.json)| +| v3_rare_process_by_host_linux | Looks for processes that are unusual to a particular Linux host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/v3_rare_process_by_host_linux.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_linux/ml/datafeed_v3_rare_process_by_host_linux.json)| + + +## Security: Network [security-network-jobs] + +Detect anomalous network activity in your ECS-compatible network logs. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +By default, when you create these jobs in the {{security-app}}, it uses a {{data-source}} that applies to multiple indices. To get the same results if you use the {{ml-app}} app, create a similar [{{data-source}}](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/manifest.json#L7) then select it in the job wizard. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| high_count_by_destination_country | Looks for an unusually large spike in network activity to one destination country in the network logs. This could be due to unusually large amounts of reconnaissance or enumeration traffic. Data exfiltration activity may also produce such a surge in traffic to a destination country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_by_destination_country.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_by_destination_country.json)| +| high_count_network_denies | Looks for an unusually large spike in network traffic that was denied by network ACLs or firewall rules. Such a burst of denied traffic is usually either 1) a misconfigured application or firewall or 2) suspicious or malicious activity. Unsuccessful attempts at network transit, in order to connect to command-and-control (C2), or engage in data exfiltration, may produce a burst of failed connections. 
This could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_denies.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_denies.json)| +| high_count_network_events | Looks for an unusually large spike in network traffic. Such a burst of traffic, if not caused by a surge in business activity, can be due to suspicious or malicious activity. Large-scale data exfiltration may produce a burst of network traffic; this could also be due to unusually large amounts of reconnaissance or enumeration traffic. Denial-of-service attacks or traffic floods may also produce such a surge in traffic. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/high_count_network_events.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_high_count_network_events.json)| +| rare_destination_country | Looks for an unusual destination country name in the network logs. This can be due to initial access, persistence, command-and-control, or exfiltration activity. For example, when a user clicks on a link in a phishing email or opens a malicious document, a request may be sent to download and run a payload from a server in a country which does not normally appear in network traffic or business work-flows. Malware instances and persistence mechanisms may communicate with command-and-control (C2) infrastructure in their country of origin, which may be an unusual destination country for the source network. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/rare_destination_country.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_network/ml/datafeed_rare_destination_country.json)| + + +## Security: {{packetbeat}} [security-packetbeat-jobs] + +Detect suspicious network activity in {{packetbeat}} data. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_packetbeat/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| packetbeat_dns_tunneling | Looks for unusual DNS activity that could indicate command-and-control or data exfiltration activity. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_dns_tunneling.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_dns_tunneling.json)| +| packetbeat_rare_dns_question | Looks for unusual DNS activity that could indicate command-and-control activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_dns_question.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_dns_question.json)| +| packetbeat_rare_server_domain | Looks for unusual HTTP or TLS destination domain activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_server_domain.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_server_domain.json)| +| packetbeat_rare_urls | Looks for unusual web browsing URL activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_urls.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_urls.json)| +| packetbeat_rare_user_agent | Looks for unusual HTTP user agent activity that could indicate execution, persistence, command-and-control or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/packetbeat_rare_user_agent.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/siem_packetbeat/ml/datafeed_packetbeat_rare_user_agent.json)| + + +## Security: Windows [security-windows-jobs] + +Anomaly detection jobs for Windows host-based threat hunting and detection. + +In the {{ml-app}} app, these configurations are available only when data exists that matches the query specified in the [manifest file](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/manifest.json). In the {{security-app}}, it looks in the {{data-source}} specified in the [`securitySolution:defaultIndex` advanced setting](kibana://reference/advanced-settings.md#securitysolution-defaultindex) for data that matches the query. + +If there are additional requirements such as installing the Windows System Monitor (Sysmon) or auditing process creation in the Windows security event log, they are listed for each job. 
+ +| Name | Description | Job | Datafeed | +| --- | --- | --- | --- | +| v3_rare_process_by_host_windows | Looks for processes that are unusual to a particular Windows host. Such unusual processes may indicate unauthorized software, malware, or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_rare_process_by_host_windows.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_rare_process_by_host_windows.json)| +| v3_windows_anomalous_network_activity | Looks for unusual processes using the network which could indicate command-and-control, lateral movement, persistence, or data exfiltration activity. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_network_activity.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_network_activity.json)| +| v3_windows_anomalous_path_activity | Looks for activity in unusual paths that may indicate execution of malware or persistence mechanisms. Windows payloads often execute from user profile paths. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_path_activity.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_path_activity.json)| +| v3_windows_anomalous_process_all_hosts | Looks for processes that are unusual to all Windows hosts. Such unusual processes may indicate execution of unauthorized software, malware, or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_all_hosts.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_all_hosts.json)| +| v3_windows_anomalous_process_creation | Looks for unusual process relationships which may indicate execution of malware or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_process_creation.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_process_creation.json)| +| v3_windows_anomalous_script | Looks for unusual powershell scripts that may indicate execution of malware, or persistence mechanisms. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_script.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_script.json)| +| v3_windows_anomalous_service | Looks for rare and unusual Windows service names which may indicate execution of unauthorized services, malware, or persistence mechanisms. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_service.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_service.json)| +| v3_windows_anomalous_user_name | Rare and unusual users that are not normally active may indicate unauthorized changes or activity by an unauthorized user which may be credentialed access or lateral movement. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_anomalous_user_name.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_anomalous_user_name.json)| +| v3_windows_rare_metadata_process | Looks for anomalous access to the metadata service by an unusual process. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_process.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_process.json)| +| v3_windows_rare_metadata_user | Looks for anomalous access to the metadata service by an unusual user. The metadata service may be targeted in order to harvest credentials or user data scripts containing secrets. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_metadata_user.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_metadata_user.json)| +| v3_windows_rare_user_runas_event | Unusual user context switches can be due to privilege escalation. | [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_runas_event.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_runas_event.json)| +| v3_windows_rare_user_type10_remote_login | Unusual RDP (remote desktop protocol) user logins can indicate account takeover or credentialed access. 
| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/v3_windows_rare_user_type10_remote_login.json)| [![A link icon](images/link.svg)](https://github.com/elastic/kibana/blob/master/x-pack/plugins/ml/server/models/data_recognizer/modules/security_windows/ml/datafeed_v3_windows_rare_user_type10_remote_login.json)| + + +## Security: Elastic Integrations [security-integrations-jobs] + +[Elastic Integrations](integration-docs://reference/index.md) are a streamlined way to add Elastic assets to your environment, such as data ingestion, {{transforms}}, and in this case, {{ml}} capabilities for Security. + +The following Integrations use {{ml}} to analyze patterns of user and entity behavior, and help detect and alert when there is related suspicious activity in your environment. + +* [Data Exfiltration Detection](integration-docs://reference/ded.md) +* [Domain Generation Algorithm Detection](integration-docs://reference/dga.md) +* [Lateral Movement Detection](integration-docs://reference/lmd.md) +* [Living off the Land Attack Detection](integration-docs://reference/problemchild.md) + +**Domain Generation Algorithm (DGA) Detection** + +{{ml-cap}} solution package to detect domain generation algorithm (DGA) activity in your network data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. + +To download, refer to the [documentation](integration-docs://reference/dga.md). + +| Name | Description | +| --- | --- | +| dga_high_sum_probability | Detect domain generation algorithm (DGA) activity in your network data. | + +The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/dga/kibana/ml_module/dga-ml.json). + +**Living off the Land Attack (LotL) Detection** + +{{ml-cap}} solution package to detect Living off the Land (LotL) attacks in your environment. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. (Also known as ProblemChild). + +To download, refer to the [documentation](integration-docs://reference/problemchild.md). + +| Name | Description | +| --- | --- | +| problem_child_rare_process_by_host | Looks for a process that has been classified as malicious on a host that does not commonly manifest malicious process activity. | +| problem_child_high_sum_by_host | Looks for a set of one or more malicious child processes on a single host. | +| problem_child_rare_process_by_user | Looks for a process that has been classified as malicious where the user context is unusual and does not commonly manifest malicious process activity. | +| problem_child_rare_process_by_parent | Looks for rare malicious child processes spawned by a parent process. | +| problem_child_high_sum_by_user | Looks for a set of one or more malicious processes, started by the same user. | +| problem_child_high_sum_by_parent | Looks for a set of one or more malicious child processes spawned by the same parent process. | + +The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/problemchild/kibana/ml_module/problemchild-ml.json). + +**Data Exfiltration Detection (DED)** + +{{ml-cap}} package to detect data exfiltration in your network and file data. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. 
+ +To download, refer to the [documentation](integration-docs://reference/ded.md). + +| Name | Description | +| --- | --- | +| ded_high_sent_bytes_destination_geo_country_iso_code | Detects data exfiltration to an unusual geo-location (by country iso code). | +| ded_high_sent_bytes_destination_ip | Detects data exfiltration to an unusual geo-location (by IP address). | +| ded_high_sent_bytes_destination_port | Detects data exfiltration to an unusual destination port. | +| ded_high_sent_bytes_destination_region_name | Detects data exfiltration to an unusual geo-location (by region name). | +| ded_high_bytes_written_to_external_device | Detects data exfiltration activity by identifying high bytes written to an external device. | +| ded_rare_process_writing_to_external_device | Detects data exfiltration activity by identifying a file write started by a rare process to an external device. | +| ded_high_bytes_written_to_external_device_airdrop | Detects data exfiltration activity by identifying high bytes written to an external device via Airdrop. | + +The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/ded/kibana/ml_module/ded-ml.json). + +**Lateral Movement Detection (LMD)** + +{{ml-cap}} package to detect lateral movement based on file transfer activity and Windows RDP events. Refer to the [subscription page](https://www.elastic.co/subscriptions) to learn more about the required subscription. + +To download, refer to the [documentation](integration-docs://reference/lmd.md). + +| Name | Description | +| --- | --- | +| lmd_high_count_remote_file_transfer | Detects unusually high file transfers to a remote host in the network. | +| lmd_high_file_size_remote_file_transfer | Detects unusually high size of files shared with a remote host in the network. | +| lmd_rare_file_extension_remote_transfer | Detects data exfiltration to an unusual destination port. | +| lmd_rare_file_path_remote_transfer | Detects unusual folders and directories on which a file is transferred. | +| lmd_high_mean_rdp_session_duration | Detects unusually high mean of RDP session duration. | +| lmd_high_var_rdp_session_duration | Detects unusually high variance in RDP session duration. | +| lmd_high_sum_rdp_number_of_processes | Detects unusually high number of processes started in a single RDP session. | +| lmd_unusual_time_weekday_rdp_session_start | Detects an RDP session started at an usual time or weekday. | +| lmd_high_rdp_distinct_count_source_ip_for_destination | Detects a high count of source IPs making an RDP connection with a single destination IP. | +| lmd_high_rdp_distinct_count_destination_ip_for_source | Detects a high count of destination IPs establishing an RDP connection with a single source IP. | +| lmd_high_mean_rdp_process_args | Detects unusually high number of process arguments in an RDP session. | + +The job configurations and datafeeds can be found [here](https://github.com/elastic/integrations/blob/main/packages/lmd/kibana/ml_module/lmd-ml.json). 
diff --git a/reference/toc.yml b/reference/toc.yml index 1674199d3..63067ec16 100644 --- a/reference/toc.yml +++ b/reference/toc.yml @@ -7,6 +7,8 @@ toc: - file: security/fields-and-object-schemas/siem-field-reference.md - file: security/fields-and-object-schemas/timeline-object-schema.md - file: security/fields-and-object-schemas/alert-schema.md + - file: security/endpoint-command-reference.md + - file: security/prebuilt-anomaly-detection-jobs.md - file: observability/index.md children: - file: observability/fields-and-object-schemas.md @@ -125,6 +127,7 @@ toc: - file: ingestion-tools/fleet/agent-provider.md - file: ingestion-tools/fleet/host-provider.md - file: ingestion-tools/fleet/env-provider.md + - file: ingestion-tools/fleet/filesource-provider.md - file: ingestion-tools/fleet/kubernetes_secrets-provider.md - file: ingestion-tools/fleet/kubernetes_leaderelection-provider.md - file: ingestion-tools/fleet/local-dynamic-provider.md diff --git a/solutions/observability/apps/analyze-data-from-synthetic-monitors.md b/solutions/observability/apps/analyze-data-from-synthetic-monitors.md index f4a8be32a..bd84be8f8 100644 --- a/solutions/observability/apps/analyze-data-from-synthetic-monitors.md +++ b/solutions/observability/apps/analyze-data-from-synthetic-monitors.md @@ -26,7 +26,7 @@ When you use a single monitor configuration to create monitors in multiple locat :::{image} ../../../images/observability-synthetics-monitor-page.png :alt: Synthetics UI -:class: screenshot +:screenshot: ::: To get started with your analysis in the Overview tab, you can search for monitors or use the filter options including current status (up, down, or disabled), monitor type (for example, journey or HTTP), location, and more. @@ -52,7 +52,7 @@ When you go to an individual monitor’s page, you’ll see much more detail abo :::{image} ../../../images/observability-synthetics-analyze-individual-monitor-header.png :alt: Header at the top of the individual monitor page for all monitor types in the {synthetics-app} -:class: screenshot +:screenshot: ::: Each individual monitor’s page has three tabs: Overview, History, and Errors. @@ -64,7 +64,7 @@ The **Overview** tab has information about the monitor availability, duration, a :::{image} ../../../images/observability-synthetics-analyze-individual-monitor-details.png :alt: Details in the Overview tab on the individual monitor page for all monitor types in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -76,14 +76,14 @@ For browser monitors, you can click on any run in the **Test runs** list to see :::{image} ../../../images/observability-synthetics-analyze-individual-monitor-history.png :alt: The History tab on the individual monitor page for all monitor types in the {synthetics-app} -:class: screenshot +:screenshot: ::: If the monitor is configured to [retest on failure](../../../solutions/observability/apps/configure-synthetics-projects.md#synthetics-configuration-monitor), you’ll see retests listed in the **Test runs** table. Runs that are retests include a rerun icon (![Refresh icon](../../../images/observability-refresh.svg "")) next to the result badge. 
:::{image} ../../../images/observability-synthetics-retest.png :alt: A failed run and a retest in the table of test runs in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -97,7 +97,7 @@ For browser monitors, you can click on any run in the **Error** list to open an :::{image} ../../../images/observability-synthetics-analyze-individual-monitor-errors.png :alt: The Errors tab on the individual monitor page for all monitor types in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -121,7 +121,7 @@ The journey page on the Overview tab includes: :::{image} ../../../images/observability-synthetics-analyze-journeys-over-time.png :alt: Individual journey page for browser monitors in the {synthetics-app} -:class: screenshot +:screenshot: ::: From here, you can either drill down into: @@ -140,7 +140,7 @@ Navigate through each step using **![Previous icon](../../../images/observabilit :::{image} ../../../images/observability-synthetics-analyze-one-run-code-executed.png :alt: Step carousel on a page detailing one run of a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: Scroll down to dig into the steps in this journey run. Click the ![Arrow right icon](../../../images/observability-arrowRight.svg "") icon next to the step number to show details. The details include metrics for the step in the current run and the step in the last successful run. Read more about step-level metrics below in [Timing](../../../solutions/observability/apps/analyze-data-from-synthetic-monitors.md#synthetics-analyze-one-step-timing) and [Metrics](../../../solutions/observability/apps/analyze-data-from-synthetic-monitors.md#synthetics-analyze-one-step-metrics). @@ -171,7 +171,7 @@ Screenshots can be particularly helpful to identify what went wrong when a step :::{image} ../../../images/observability-synthetics-analyze-one-step-screenshot.png :alt: Screenshot for one step in a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -193,7 +193,7 @@ This gives you an overview of how much time is spent (and how that time is spent :::{image} ../../../images/observability-synthetics-analyze-one-step-timing.png :alt: Network timing visualization for one step in a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -217,7 +217,7 @@ Next to each metric, there’s an icon that indicates whether the value is highe :::{image} ../../../images/observability-synthetics-analyze-one-step-metrics.png :alt: Metrics visualization for one step in a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -229,7 +229,7 @@ This provides a different kind of analysis. For example, you might have a large :::{image} ../../../images/observability-synthetics-analyze-one-step-object.png :alt: Object visualization for one step in a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: @@ -243,7 +243,7 @@ Understanding each phase of a request can help you improve your site’s speed b :::{image} ../../../images/observability-synthetics-analyze-one-step-network.png :alt: Network requests waterfall visualization for one step in a browser monitor in the {synthetics-app} -:class: screenshot +:screenshot: ::: Without leaving the waterfall chart, you can view data points relating to each resource: resource details, request headers, response headers, and certificate headers. On the waterfall chart, select a resource name, or any part of each row, to display the resource details overlay. 
diff --git a/solutions/observability/apps/analyze-monitors.md b/solutions/observability/apps/analyze-monitors.md index 0ee441300..b63044d84 100644 --- a/solutions/observability/apps/analyze-monitors.md +++ b/solutions/observability/apps/analyze-monitors.md @@ -18,7 +18,7 @@ The **Status** panel displays a summary of the latest information regarding your :::{image} ../../../images/observability-uptime-status-panel.png :alt: Uptime status panel -:class: screenshot +:screenshot: ::: The **Monitoring from** list displays service availability per monitoring location, along with the amount of time elapsed since data was received from that location. The availability percentage is the percentage of successful checks made during the selected time period. @@ -34,7 +34,7 @@ Included on this chart is the {{anomaly-detect}} ({{ml}}) integration. For more :::{image} ../../../images/observability-monitor-duration-chart.png :alt: Monitor duration chart -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ The **Pings over time** chart is a graphical representation of the check statuse :::{image} ../../../images/observability-pings-over-time.png :alt: Pings over time chart -:class: screenshot +:screenshot: ::: @@ -56,6 +56,6 @@ This table can help you gain insights into more granular details about recent in :::{image} ../../../images/observability-uptime-history.png :alt: Monitor history list -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/api-keys.md b/solutions/observability/apps/api-keys.md index 4dcdcf440..010afda02 100644 --- a/solutions/observability/apps/api-keys.md +++ b/solutions/observability/apps/api-keys.md @@ -98,7 +98,7 @@ Click **Create APM Agent key** and copy the Base64 encoded API key. You will nee :::{image} ../../../images/observability-apm-ui-api-key.png :alt: Applications UI API key -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/apm-agent-central-configuration.md b/solutions/observability/apps/apm-agent-central-configuration.md index 967b5382c..df3b37fe4 100644 --- a/solutions/observability/apps/apm-agent-central-configuration.md +++ b/solutions/observability/apps/apm-agent-central-configuration.md @@ -16,7 +16,7 @@ To get started, choose the services and environments you wish to configure. The :::{image} ../../../images/observability-apm-agent-configuration.png :alt: APM Agent configuration in Kibana -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/apm-agent-explorer.md b/solutions/observability/apps/apm-agent-explorer.md index e674a97aa..6c2aabfb8 100644 --- a/solutions/observability/apps/apm-agent-explorer.md +++ b/solutions/observability/apps/apm-agent-explorer.md @@ -17,13 +17,13 @@ APM agent explorer provides a centralized panel to identify APM agent deployment :::{image} ../../../images/observability-apm-agent-explorer.png :alt: APM agent explorer -:class: screenshot +:screenshot: ::: Select an APM agent to expand it and view the details of each agent instance. 
:::{image} ../../../images/observability-apm-agent-explorer-flyout.png :alt: APM agent explorer flyout -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/apm-server-binary.md b/solutions/observability/apps/apm-server-binary.md index 6ef23282c..daf19e4ec 100644 --- a/solutions/observability/apps/apm-server-binary.md +++ b/solutions/observability/apps/apm-server-binary.md @@ -767,7 +767,7 @@ Once you have at least one {{apm-agent}} sending data to APM Server, you can sta :::{image} ../../../images/observability-kibana-apm-sample-data.png :alt: Applications UI with data -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/application-performance-monitoring-apm.md b/solutions/observability/apps/application-performance-monitoring-apm.md index d15a32ea9..627afdcb1 100644 --- a/solutions/observability/apps/application-performance-monitoring-apm.md +++ b/solutions/observability/apps/application-performance-monitoring-apm.md @@ -10,7 +10,7 @@ Elastic APM is an application performance monitoring system built on the {{stack :::{image} ../../../images/observability-apm-app-landing.png :alt: Applications UI in {kib} -:class: screenshot +:screenshot: ::: Elastic APM also automatically collects unhandled errors and exceptions. Errors are grouped based primarily on the stack trace, so you can identify new errors as they appear and keep an eye on how many times specific errors happen. diff --git a/solutions/observability/apps/configure-lightweight-monitors.md b/solutions/observability/apps/configure-lightweight-monitors.md index d549657af..61eb11594 100644 --- a/solutions/observability/apps/configure-lightweight-monitors.md +++ b/solutions/observability/apps/configure-lightweight-monitors.md @@ -21,7 +21,7 @@ To use the UI, go to the Synthetics UI in {{kib}} or in your Observability Serve :::{image} ../../../images/observability-synthetics-get-started-ui-lightweight.png :alt: Synthetics Create monitor UI -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/configure-settings.md b/solutions/observability/apps/configure-settings.md index b804456ff..30638004e 100644 --- a/solutions/observability/apps/configure-settings.md +++ b/solutions/observability/apps/configure-settings.md @@ -33,7 +33,7 @@ The pattern set here only restricts what the {{uptime-app}} displays. 
You can st :::{image} ../../../images/observability-heartbeat-indices.png :alt: {{heartbeat}} indices -:class: screenshot +:screenshot: ::: @@ -47,7 +47,7 @@ For more information about each connector, see [action types and connectors](../ :::{image} ../../../images/observability-alert-connector.png :alt: Rule connector -:class: screenshot +:screenshot: ::: @@ -64,6 +64,6 @@ A standard security requirement is to make sure that your TLS certificates have :::{image} ../../../images/observability-cert-expiry-settings.png :alt: Certificate expiry settings -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/configure-synthetics-settings.md b/solutions/observability/apps/configure-synthetics-settings.md index c94275c50..251dc9f33 100644 --- a/solutions/observability/apps/configure-synthetics-settings.md +++ b/solutions/observability/apps/configure-synthetics-settings.md @@ -33,7 +33,7 @@ On the *Rules* page, you can manage the default synthetics rules including snooz :::{image} ../../../images/observability-synthetics-settings-disable-default-rules.png :alt: Rules page with default Synthetics rules -:class: screenshot +:screenshot: ::: ::::{note} @@ -50,7 +50,7 @@ In the **Alerting** tab on the Synthetics Settings page, you can add and configu :::{image} ../../../images/observability-synthetics-settings-alerting.png :alt: Alerting tab on the Synthetics Settings page in {kib} -:class: screenshot +:screenshot: ::: @@ -62,7 +62,7 @@ In the **{{private-location}}s** tab, you can add and manage {{private-location} :::{image} ../../../images/observability-synthetics-settings-private-locations.png :alt: {{private-location}}s tab on the Synthetics Settings page in {kib} -:class: screenshot +:screenshot: ::: @@ -74,7 +74,7 @@ In the **Global parameters** tab, you can define variables and parameters. 
This :::{image} ../../../images/observability-synthetics-settings-global-parameters.png :alt: Global parameters tab on the Synthetics Settings page in {kib} -:class: screenshot +:screenshot: ::: @@ -86,7 +86,7 @@ In the **Data retention** tab, use the links to jump to the relevant policy for :::{image} ../../../images/observability-synthetics-settings-data-retention.png :alt: Data retention tab on the Synthetics Settings page in {kib} -:class: screenshot +:screenshot: ::: @@ -106,5 +106,5 @@ In a serverless project, to create a Project API key you must be logged in as a :::{image} ../../../images/observability-synthetics-settings-api-keys.png :alt: Project API keys tab on the Synthetics Settings page in {kib} -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/control-access-to-apm-data.md b/solutions/observability/apps/control-access-to-apm-data.md index 4a5227c86..bed511767 100644 --- a/solutions/observability/apps/control-access-to-apm-data.md +++ b/solutions/observability/apps/control-access-to-apm-data.md @@ -268,7 +268,7 @@ Using the table below, assign each role the following privileges: :::{image} ../../../images/observability-apm-roles-config.png :alt: APM role config example -:class: screenshot +:screenshot: ::: Alternatively, you can use the {{es}} [Create or update roles API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role): diff --git a/solutions/observability/apps/create-apm-rules-alerts.md b/solutions/observability/apps/create-apm-rules-alerts.md index 04d89e41d..817daab36 100644 --- a/solutions/observability/apps/create-apm-rules-alerts.md +++ b/solutions/observability/apps/create-apm-rules-alerts.md @@ -47,7 +47,7 @@ If you’re using the [service groups](../../../solutions/observability/apps/ser :::{image} ../../../images/observability-apm-service-group.png :alt: Example view of service group in the Applications UI in Kibana -:class: screenshot +:screenshot: ::: @@ -57,7 +57,7 @@ Alerts can be viewed within the context of any service. After selecting a servic :::{image} ../../../images/observability-active-alert-service.png :alt: View active alerts by service -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/create-custom-links.md b/solutions/observability/apps/create-custom-links.md index 6f64fa14d..ff6530bad 100644 --- a/solutions/observability/apps/create-custom-links.md +++ b/solutions/observability/apps/create-custom-links.md @@ -45,7 +45,7 @@ Because everyone’s data is different, you’ll need to examine your traces to :::{image} ../../../images/observability-example-metadata.png :alt: Example metadata -:class: screenshot +:screenshot: ::: @@ -103,7 +103,7 @@ This link opens a new GitHub issue in the apm-agent-rum repository. It populates :::{image} ../../../images/observability-create-github-issue.png :alt: Example github issue -:class: screenshot +:screenshot: ::: | | | @@ -130,7 +130,7 @@ This link creates a new task on the Engineering board in Jira. 
It populates the :::{image} ../../../images/observability-create-jira-issue.png :alt: Example jira issue -:class: screenshot +:screenshot: ::: | | | diff --git a/solutions/observability/apps/create-monitors-in-synthetics-app.md b/solutions/observability/apps/create-monitors-in-synthetics-app.md index 20ba2c3d0..86be3cdaa 100644 --- a/solutions/observability/apps/create-monitors-in-synthetics-app.md +++ b/solutions/observability/apps/create-monitors-in-synthetics-app.md @@ -51,7 +51,7 @@ To use the UI to add a lightweight monitor: :::{image} ../../../images/serverless-private-locations-monitor-locations.png :alt: Screenshot of Monitor locations options including a {private-location} - :class: screenshot + :screenshot: ::: ::::: @@ -63,7 +63,7 @@ To use the UI to add a lightweight monitor: :::{image} ../../../images/observability-synthetics-get-started-ui-lightweight.png :alt: Synthetics Create monitor UI - :class: screenshot + :screenshot: ::: @@ -92,7 +92,7 @@ To use the UI to add a browser monitor: :::{image} ../../../images/observability-synthetics-ui-inline-script.png :alt: Configure a synthetic monitor using an inline script in Elastic {{fleet}} - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/solutions/observability/apps/create-monitors-with-project-monitors.md b/solutions/observability/apps/create-monitors-with-project-monitors.md index d83d574c0..a362dd379 100644 --- a/solutions/observability/apps/create-monitors-with-project-monitors.md +++ b/solutions/observability/apps/create-monitors-with-project-monitors.md @@ -84,7 +84,7 @@ Then, follow the prompts on screen to set up the correct default variables for y :::{image} ../../../images/serverless-synthetics-monitor-management-api-key.png :alt: Project API Keys tab in Synthetics settings - :class: screenshot + :screenshot: ::: ::::{note} @@ -127,7 +127,7 @@ Then, take a look at key files and directories inside your Synthetics project: :::{image} ../../../images/serverless-synthetics-monitor-management-api-key.png :alt: Project API Keys tab in Synthetics settings - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/solutions/observability/apps/create-upload-source-maps-rum.md b/solutions/observability/apps/create-upload-source-maps-rum.md index 2e02edcfd..50c512887 100644 --- a/solutions/observability/apps/create-upload-source-maps-rum.md +++ b/solutions/observability/apps/create-upload-source-maps-rum.md @@ -15,14 +15,14 @@ Here’s an example of an exception stack trace in the Applications UI when usin :::{image} ../../../images/observability-source-map-before.png :alt: Applications UI without source mapping -:class: screenshot +:screenshot: ::: With a source map, minified files are mapped back to the original source code, allowing you to maintain the speed advantage of minified code, without losing the ability to quickly and easily debug your application. Here’s the same example as before, but with a source map uploaded and applied: :::{image} ../../../images/observability-source-map-after.png :alt: Applications UI with source mapping -:class: screenshot +:screenshot: ::: Follow the steps below to enable source mapping your error stack traces in the Applications UI: diff --git a/solutions/observability/apps/dependencies.md b/solutions/observability/apps/dependencies.md index 2839eeac2..c640fa0c3 100644 --- a/solutions/observability/apps/dependencies.md +++ b/solutions/observability/apps/dependencies.md @@ -10,7 +10,7 @@ APM agents collect details about external calls made from instrumented services. 
:::{image} ../../../images/observability-dependencies.png :alt: Dependencies view in the Applications UI -:class: screenshot +:screenshot: ::: Many application issues are caused by slow or unresponsive downstream dependencies. And because a single, slow dependency can significantly impact the end-user experience, it’s important to be able to quickly identify these problems and determine the root cause. @@ -19,7 +19,7 @@ Select a dependency to see detailed latency, throughput, and failed transaction :::{image} ../../../images/observability-dependencies-drilldown.png :alt: Dependencies drilldown view in the Applications UI -:class: screenshot +:screenshot: ::: When viewing a dependency, consider your pattern of usage with that dependency. If your usage pattern *hasn’t* increased or decreased, but the experience has been negatively affected—either with an increase in latency or errors—there’s likely a problem with the dependency that needs to be addressed. @@ -39,12 +39,12 @@ The Dependency operations functionality is in beta and is subject to change. The :::{image} ../../../images/observability-operations.png :alt: operations view in the Applications UI -:class: screenshot +:screenshot: ::: Selecting an operation displays the operation’s impact and performance trends over time, via key metrics like latency, throughput, and failed transaction rate. In addition, the [**Trace sample timeline**](../../../solutions/observability/apps/trace-sample-timeline.md) provides a visual drill-down into an end-to-end trace sample. :::{image} ../../../images/observability-operations-detail.png :alt: operations detail view in the Applications UI -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/errors-2.md b/solutions/observability/apps/errors-2.md index 65d4da397..1f0d0243b 100644 --- a/solutions/observability/apps/errors-2.md +++ b/solutions/observability/apps/errors-2.md @@ -12,14 +12,14 @@ A service returning a 5xx code from a request handler, controller, etc., will no :::{image} ../../../images/observability-apm-errors-overview.png :alt: APM Errors overview -:class: screenshot +:screenshot: ::: Selecting an error group ID or error message brings you to the **Error group**. :::{image} ../../../images/observability-apm-error-group.png :alt: APM Error group -:class: screenshot +:screenshot: ::: The error group details page visualizes the number of error occurrences over time and compared to a recent time range. This allows you to quickly determine if the error rate is changing or remaining constant. You’ll also see the top 5 affected transactions—​enabling you to quickly narrow down which transactions are most impacted by the selected error. diff --git a/solutions/observability/apps/explore-mobile-sessions-with-discover.md b/solutions/observability/apps/explore-mobile-sessions-with-discover.md index 930d1f0da..fe0b48dde 100644 --- a/solutions/observability/apps/explore-mobile-sessions-with-discover.md +++ b/solutions/observability/apps/explore-mobile-sessions-with-discover.md @@ -25,28 +25,28 @@ Here we can see the `session.id` guid in the metadata viewer in the error detail :::{image} ../../../images/observability-mobile-session-error-details.png :alt: Example of session.id in error details -:class: screenshot +:screenshot: ::: Copy this value and open the Discover page: :::{image} ../../../images/observability-mobile-session-explorer-nav.png :alt: Example view of navigation to Discover -:class: screenshot +:screenshot: ::: Set the data view. 
`APM` selected in the example: :::{image} ../../../images/observability-mobile-session-explorer-apm.png :alt: Example view of Explorer selecting APM data view -:class: screenshot +:screenshot: ::: Filter using the `session.id`: `session.id: ""`: :::{image} ../../../images/observability-mobile-session-filter-discover.png :alt: Filter Explor using session.id -:class: screenshot +:screenshot: ::: Explore all the documents associated with that session id including crashes, lifecycle events, network requests, errors, and other custom events! diff --git a/solutions/observability/apps/filter-application-data.md b/solutions/observability/apps/filter-application-data.md index d7a274e9a..dc9b73733 100644 --- a/solutions/observability/apps/filter-application-data.md +++ b/solutions/observability/apps/filter-application-data.md @@ -13,7 +13,7 @@ Global filters are ways you can filter your APM data based on a specific time ra :::{image} ../../../images/observability-global-filters.png :alt: Global filters view -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/solutions/observability/apps/find-transaction-latency-failure-correlations.md b/solutions/observability/apps/find-transaction-latency-failure-correlations.md index 27263a938..a47c54f11 100644 --- a/solutions/observability/apps/find-transaction-latency-failure-correlations.md +++ b/solutions/observability/apps/find-transaction-latency-failure-correlations.md @@ -45,7 +45,7 @@ The correlations on the **Latency correlations** tab help you discover which att :::{image} ../../../images/observability-correlations-hover.png :alt: APM latency correlations -:class: screenshot +:screenshot: ::: The progress bar indicates the status of the asynchronous analysis, which performs statistical searches across a large number of attributes. For large time ranges and services with high transaction throughput, this might take some time. To improve performance, reduce the time range. @@ -72,7 +72,7 @@ For example, in the screenshot below, there are attributes such as a specific no :::{image} ../../../images/observability-correlations-failed-transactions.png :alt: Failed transaction correlations -:class: screenshot +:screenshot: ::: Select the `+` filter to create a new query in the Applications UI for transactions with one or more of these attributes. If you are unfamiliar with a field, click the icon beside its name to view its most popular values and optionally filter on those values too. Each time that you add another attribute, it is filtering out more and more noise and bringing you closer to a diagnosis. \ No newline at end of file diff --git a/solutions/observability/apps/fleet-managed-apm-server.md b/solutions/observability/apps/fleet-managed-apm-server.md index 8d7159588..116f54cfa 100644 --- a/solutions/observability/apps/fleet-managed-apm-server.md +++ b/solutions/observability/apps/fleet-managed-apm-server.md @@ -73,7 +73,7 @@ You can install only a single {{agent}} per host, which means you cannot run {{f :::{image} ../../../images/observability-add-fleet-server.png :alt: In-product instructions for adding a {{fleet-server}} - :class: screenshot + :screenshot: ::: @@ -116,14 +116,14 @@ If you don’t have a {{fleet}} setup already in place, the easiest way to get s :::{image} ../../../images/observability-kibana-fleet-integrations-apm.png :alt: {{fleet}} showing APM integration - :class: screenshot + :screenshot: ::: 3. Click **Add Elastic APM**. 
:::{image} ../../../images/observability-kibana-fleet-integrations-apm-overview.png :alt: {{fleet}} showing APM integration overview - :class: screenshot + :screenshot: ::: 4. On the **Add Elastic APM integration** page, define the host and port where APM Server will listen. Make a note of this value—​you’ll need it later. @@ -138,7 +138,7 @@ If you don’t have a {{fleet}} setup already in place, the easiest way to get s :::{image} ../../../images/observability-apm-agent-policy-1.png :alt: {{fleet}} showing apm policy - :class: screenshot + :screenshot: ::: Any {{agent}}s assigned to this policy will collect APM data from your instrumented services. @@ -812,5 +812,5 @@ Back in {{kib}}, under {{observability}}, select APM. You should see application :::{image} ../../../images/observability-kibana-apm-sample-data.png :alt: Applications UI with data -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/get-started-with-apm.md b/solutions/observability/apps/get-started-with-apm.md index 18670a3be..bc28d7dba 100644 --- a/solutions/observability/apps/get-started-with-apm.md +++ b/solutions/observability/apps/get-started-with-apm.md @@ -90,7 +90,7 @@ This decision tree highlights key factors to help you make an informed decision :::{image} ../../../images/observability-apm-help-me-decide.svg :alt: APM decision tree -:class: screenshot +:screenshot: ::: % What needs to be done: Align serverless/stateful diff --git a/solutions/observability/apps/grant-access-using-api-keys.md b/solutions/observability/apps/grant-access-using-api-keys.md index b4999f063..91f3a9420 100644 --- a/solutions/observability/apps/grant-access-using-api-keys.md +++ b/solutions/observability/apps/grant-access-using-api-keys.md @@ -24,7 +24,7 @@ To create an API key: :::{image} ../../../images/observability-server-api-key-create.png :alt: API key creation - :class: screenshot + :screenshot: ::: 3. Enter a name for your API key and select **Restrict privileges**. In the role descriptors box, assign the appropriate privileges to the new API key. For example: @@ -90,7 +90,7 @@ To open **API keys**, find **Stack Management** in the main menu or use the [glo :::{image} ../../../images/observability-server-api-key-create.png :alt: API key creation -:class: screenshot +:screenshot: ::: Enter a name for your API key and select **Restrict privileges**. In the role descriptors box, assign the appropriate privileges to the new API key. 
For example: diff --git a/solutions/observability/apps/index-lifecycle-management.md b/solutions/observability/apps/index-lifecycle-management.md index e719cb1e0..d2dccfbf2 100644 --- a/solutions/observability/apps/index-lifecycle-management.md +++ b/solutions/observability/apps/index-lifecycle-management.md @@ -69,7 +69,7 @@ The **Data Streams** view in {{kib}} shows you data streams, index templates, an :::{image} ../../../images/observability-data-stream-overview.png :alt: Data streams info - :class: screenshot + :screenshot: ::: @@ -103,7 +103,7 @@ To apply your new index lifecycle policy to the `traces-apm-*` data stream, edit :::{image} ../../../images/observability-create-component-template.png :alt: Create component template - :class: screenshot + :screenshot: ::: diff --git a/solutions/observability/apps/infrastructure.md b/solutions/observability/apps/infrastructure.md index 82b59d7fa..70a5a4511 100644 --- a/solutions/observability/apps/infrastructure.md +++ b/solutions/observability/apps/infrastructure.md @@ -22,7 +22,7 @@ The **Infrastructure** tab provides information about the containers, pods, and :::{image} ../../../images/serverless-infra.png :alt: Example view of the Infrastructure tab in the Applications UI -:class: screenshot +:screenshot: ::: IT ops and software reliability engineers (SREs) can use this tab to quickly find a service’s underlying infrastructure resources when debugging a problem. Knowing what infrastructure is related to a service allows you to remediate issues by restarting, killing hanging instances, changing configuration, rolling back deployments, scaling up, scaling out, and so on. diff --git a/solutions/observability/apps/inspect-uptime-duration-anomalies.md b/solutions/observability/apps/inspect-uptime-duration-anomalies.md index c3f74b577..c1a83e417 100644 --- a/solutions/observability/apps/inspect-uptime-duration-anomalies.md +++ b/solutions/observability/apps/inspect-uptime-duration-anomalies.md @@ -28,6 +28,6 @@ When an anomaly is detected, the duration is displayed on the **Monitor duration :::{image} ../../../images/observability-inspect-uptime-duration-anomalies.png :alt: inspect uptime duration anomalies -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/integrate-with-machine-learning.md b/solutions/observability/apps/integrate-with-machine-learning.md index 0cae88b3e..0e598329f 100644 --- a/solutions/observability/apps/integrate-with-machine-learning.md +++ b/solutions/observability/apps/integrate-with-machine-learning.md @@ -23,7 +23,7 @@ Results from machine learning jobs are shown in multiple places throughout the A :::{image} ../../../images/observability-apm-service-map-anomaly.png :alt: Example view of anomaly scores on service maps in the Applications UI - :class: screenshot + :screenshot: ::: diff --git a/solutions/observability/apps/inventory.md b/solutions/observability/apps/inventory.md index 20cd800dc..f62ea4746 100644 --- a/solutions/observability/apps/inventory.md +++ b/solutions/observability/apps/inventory.md @@ -18,7 +18,7 @@ The new Inventory requires the Elastic Entity Model (EEM). To learn more, refer :::{image} ../../../images/observability-inventory-catalog.png :alt: Inventory catalog -:class: screenshot +:screenshot: ::: Inventory is currently available for hosts, containers, and services, but it will scale to support all of your entities. @@ -72,7 +72,7 @@ For each entity, you can click the entity name and get a detailed view. 
For exam :::{image} ../../../images/observability-inventory-entity-detailed-view.png :alt: Inventory detailed view -:class: screenshot +:screenshot: ::: If you open an entity of type `host` or `container` that does not have infrastructure data, some of the visualizations will be blank and some features on the page will not be fully populated. diff --git a/solutions/observability/apps/logs.md b/solutions/observability/apps/logs.md index 014a0896a..fd422f4ec 100644 --- a/solutions/observability/apps/logs.md +++ b/solutions/observability/apps/logs.md @@ -16,7 +16,7 @@ To learn how to correlate your logs with your instrumented services, refer to [S :::{image} ../../../images/observability-logs.png :alt: Example view of the Logs tab in Applications UI -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/solutions/observability/apps/metrics-2.md b/solutions/observability/apps/metrics-2.md index 8dd23ead9..18399d543 100644 --- a/solutions/observability/apps/metrics-2.md +++ b/solutions/observability/apps/metrics-2.md @@ -12,19 +12,19 @@ If you’re experiencing a problem with your service, you can use this page to a :::{image} ../../../images/observability-apm-metrics.png :alt: Example view of the Metrics overview in Applications UI in Kibana -:class: screenshot +:screenshot: ::: If you’re using the Java APM agent, you can view metrics for each JVM. :::{image} ../../../images/observability-jvm-metrics-overview.png :alt: Example view of the Metrics overview for the Java Agent -:class: screenshot +:screenshot: ::: Breaking down metrics by JVM makes it much easier to analyze the provided metrics: CPU usage, memory usage, heap or non-heap memory, thread count, garbage collection rate, and garbage collection time spent per minute. :::{image} ../../../images/observability-jvm-metrics.png :alt: Example view of the Metrics overview for the Java Agent -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/mobile-service-overview.md b/solutions/observability/apps/mobile-service-overview.md index 511ed3d15..e8d337d85 100644 --- a/solutions/observability/apps/mobile-service-overview.md +++ b/solutions/observability/apps/mobile-service-overview.md @@ -32,7 +32,7 @@ Note: due to the way crash rate is calculated (crashes per session) it is possib :::{image} ../../../images/observability-mobile-location.png :alt: mobile service overview centered on location map -:class: screenshot +:screenshot: ::: @@ -43,7 +43,7 @@ Optimize your end-user experience and your application QA strategy based on your :::{image} ../../../images/observability-mobile-most-used.png :alt: mobile service overview showing most used devices -:class: screenshot +:screenshot: ::: @@ -58,7 +58,7 @@ By default, transaction groups are sorted by *Impact* to show the most used and :::{image} ../../../images/observability-traffic-transactions.png :alt: Traffic and transactions -:class: screenshot +:screenshot: ::: @@ -86,11 +86,11 @@ Displaying dependencies for services instrumented with the Real User Monitoring :::{image} ../../../images/observability-spans-dependencies.png :alt: Span type duration and dependencies -:class: screenshot +:screenshot: ::: :::{image} ../../../images/observability-mobile-tp.png :alt: mobile service overview showing latency -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/monitoring-aws-lambda-functions.md b/solutions/observability/apps/monitoring-aws-lambda-functions.md index 9342144bb..63fb04241 100644 --- 
a/solutions/observability/apps/monitoring-aws-lambda-functions.md +++ b/solutions/observability/apps/monitoring-aws-lambda-functions.md @@ -24,7 +24,7 @@ Normally, during the execution of a Lambda function, there’s only a single lan :::{image} ../../../images/serverless-apm-agents-aws-lambda-functions-architecture.png :alt: image showing data flow from lambda function -:class: screenshot +:screenshot: ::: By using an AWS Lambda extension, Elastic APM agents can send data to a local Lambda extension process, and that process will forward data on to the managed intake service asynchronously. The Lambda extension ensures that any potential latency between the Lambda function and the managed intake service instance will not cause latency in the request flow of the Lambda function itself. diff --git a/solutions/observability/apps/observe-lambda-functions.md b/solutions/observability/apps/observe-lambda-functions.md index 4e2117bb2..aa00072e0 100644 --- a/solutions/observability/apps/observe-lambda-functions.md +++ b/solutions/observability/apps/observe-lambda-functions.md @@ -12,7 +12,7 @@ To set up Lambda monitoring, refer to [AWS Lambda functions](/solutions/observab :::{image} ../../../images/observability-lambda-overview.png :alt: lambda overview -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/real-user-monitoring-user-experience.md b/solutions/observability/apps/real-user-monitoring-user-experience.md index e184309f4..25cfb5abc 100644 --- a/solutions/observability/apps/real-user-monitoring-user-experience.md +++ b/solutions/observability/apps/real-user-monitoring-user-experience.md @@ -17,7 +17,7 @@ Powered by the APM Real user monitoring (RUM) agent, all it takes is a few lines :::{image} ../../../images/observability-user-experience-tab.png :alt: {{user-experience}} tab -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ You won’t be able to fix any problems from viewing these metrics alone, but yo :::{image} ../../../images/observability-page-load-duration.png :alt: {{user-experience}} page load duration metrics -:class: screenshot +:screenshot: ::: @@ -54,7 +54,7 @@ You won’t be able to fix any problems from viewing these metrics alone, but yo :::{image} ../../../images/observability-user-exp-metrics.png :alt: {{user-experience}} metrics -:class: screenshot +:screenshot: ::: ::::{dropdown} Metric reference @@ -123,7 +123,7 @@ Don’t forget, this data also influences search engine page rankings and placem :::{image} ../../../images/observability-visitor-breakdown.png :alt: {{user-experience}} visitor breakdown -:class: screenshot +:screenshot: ::: @@ -133,7 +133,7 @@ JavaScript errors can be detrimental to a users experience on your website. But :::{image} ../../../images/observability-js-errors.png :alt: {{user-experience}} JavaScript errors -:class: screenshot +:screenshot: ::: Open error messages in APM for additional analysis tools, like occurrence rates, transaction ids, user data, and more. 
diff --git a/solutions/observability/apps/scripting-browser-monitors.md b/solutions/observability/apps/scripting-browser-monitors.md index 1d0c5aa8b..4eb1e10cd 100644 --- a/solutions/observability/apps/scripting-browser-monitors.md +++ b/solutions/observability/apps/scripting-browser-monitors.md @@ -20,5 +20,5 @@ Start by learning the basics of synthetic monitoring, including how to: :::{image} ../../../images/observability-synthetic-monitor-lifecycle.png :alt: Diagram of the lifecycle of a synthetic monitor: write a test -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/service-map.md b/solutions/observability/apps/service-map.md index b8133c3b5..f8c4468dc 100644 --- a/solutions/observability/apps/service-map.md +++ b/solutions/observability/apps/service-map.md @@ -30,7 +30,7 @@ If there’s a specific service that interests you, select that service to highl :::{image} ../../../images/observability-service-maps-java.png :alt: Example view of service maps in the Applications UI in Kibana -:class: screenshot +:screenshot: ::: @@ -46,7 +46,7 @@ You can create machine learning jobs to calculate anomaly scores on APM transact :::{image} ../../../images/observability-apm-service-map-anomaly.png :alt: Example view of anomaly scores on service maps in the Applications UI -:class: screenshot +:screenshot: ::: If an anomaly has been detected, click **View anomalies** to view the anomaly detection metric viewer. This time series analysis will display additional details on the severity and time of the detected anomalies. diff --git a/solutions/observability/apps/service-overview.md b/solutions/observability/apps/service-overview.md index 8d03b800a..7cda8708f 100644 --- a/solutions/observability/apps/service-overview.md +++ b/solutions/observability/apps/service-overview.md @@ -21,7 +21,7 @@ For insight into the health of your services, you can compare how a service perf :::{image} ../../../images/observability-time-series-expected-bounds-comparison.png :alt: Time series and expected bounds comparison -:class: screenshot +:screenshot: ::: Select the **Comparison** box to apply a time-based or expected bounds comparison. The time-based comparison options are based on the selected time filter range: @@ -41,7 +41,7 @@ Response times for the service. 
You can filter the **Latency** chart to display :::{image} ../../../images/observability-latency.png :alt: Service latency -:class: screenshot +:screenshot: ::: @@ -72,7 +72,7 @@ The **Errors** table provides a high-level view of each error message when it fi :::{image} ../../../images/observability-error-rate.png :alt: failed transaction rate and errors -:class: screenshot +:screenshot: ::: @@ -103,7 +103,7 @@ The **Instances** table displays a list of all the available service instances w :::{image} ../../../images/observability-all-instances.png :alt: All instances -:class: screenshot +:screenshot: ::: @@ -114,7 +114,7 @@ To view metadata relating to the service agent, and if relevant, the container a :::{image} ../../../images/observability-metadata-icons.png :alt: Service metadata -:class: screenshot +:screenshot: ::: **Service information** diff --git a/solutions/observability/apps/services.md b/solutions/observability/apps/services.md index 5ec0d5904..e6e4a1fd8 100644 --- a/solutions/observability/apps/services.md +++ b/solutions/observability/apps/services.md @@ -17,7 +17,7 @@ In addition to health status, active alerts for each service are prominently dis :::{image} ../../../images/observability-apm-services-overview.png :alt: Example view of services table the Applications UI in Kibana -:class: screenshot +:screenshot: ::: % Stateful only for the following tip? @@ -47,7 +47,7 @@ Group services together to build meaningful views that remove noise, simplify in :::{image} ../../../images/observability-apm-service-group.png :alt: Example view of service group in the Applications UI in Kibana -:class: screenshot +:screenshot: ::: To create a service group: diff --git a/solutions/observability/apps/storage-explorer.md b/solutions/observability/apps/storage-explorer.md index 79d415125..546751662 100644 --- a/solutions/observability/apps/storage-explorer.md +++ b/solutions/observability/apps/storage-explorer.md @@ -11,7 +11,7 @@ Analyze your APM data and manage costs with **storage explorer**. For example, a :::{image} ../../../images/observability-storage-explorer-overview.png :alt: APM Storage Explorer -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ The service statistics table provides detailed information on each service: :::{image} ../../../images/observability-storage-explorer-expanded.png :alt: APM Storage Explorer service breakdown -:class: screenshot +:screenshot: ::: As you explore your service statistics, you might want to take action to reduce the number of documents and therefore storage size of a particular service. diff --git a/solutions/observability/apps/synthetic-monitoring.md b/solutions/observability/apps/synthetic-monitoring.md index 61a853ee8..6187d1ff0 100644 --- a/solutions/observability/apps/synthetic-monitoring.md +++ b/solutions/observability/apps/synthetic-monitoring.md @@ -19,7 +19,7 @@ Synthetics periodically checks the status of your services and applications. 
Mon :::{image} ../../../images/observability-synthetics-monitor-page.png :alt: {{synthetics-app}} in {{kib}} -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/trace-sample-timeline.md b/solutions/observability/apps/trace-sample-timeline.md index 9d570c9ad..93973a952 100644 --- a/solutions/observability/apps/trace-sample-timeline.md +++ b/solutions/observability/apps/trace-sample-timeline.md @@ -10,7 +10,7 @@ The trace sample timeline visualization is a high-level view of what your applic :::{image} ../../../images/observability-apm-transaction-sample.png :alt: Example of distributed trace colors in the Applications UI -:class: screenshot +:screenshot: ::: View a span in detail by clicking on it in the timeline waterfall. For example, when you click on an SQL Select database query, the information displayed includes the actual SQL that was executed, how long it took, and the percentage of the trace’s total time. You also get a stack trace, which shows the SQL query in your code. Finally, APM knows which files are your code and which are just modules or libraries that you’ve installed. These library frames will be minimized by default in order to show you the most relevant stack trace. @@ -22,7 +22,7 @@ A [span](/solutions/observability/apps/spans.md) is the duration of a single eve :::{image} ../../../images/observability-apm-span-detail.png :alt: Example view of a span detail in the Applications UI -:class: screenshot +:screenshot: ::: @@ -45,14 +45,14 @@ When a trace travels through multiple services it is known as a *distributed tra :::{image} ../../../images/observability-apm-services-trace.png :alt: Example of distributed trace colors in the Applications UI -:class: screenshot +:screenshot: ::: As application architectures are shifting from monolithic to more distributed, service-based architectures, distributed tracing has become a crucial feature of modern application performance monitoring. It allows you to trace requests through your service architecture automatically, and visualize those traces in one single view in the Applications UI. From initial web requests to your front-end service, to queries made to your back-end services, this makes finding possible bottlenecks throughout your application much easier and faster. :::{image} ../../../images/observability-apm-distributed-tracing.png :alt: Example view of the distributed tracing in the Applications UI -:class: screenshot +:screenshot: ::: Don’t forget; by definition, a distributed trace includes more than one transaction. When viewing distributed traces in the timeline waterfall, you’ll see this icon: ![APM icon](../../../images/observability-transaction-icon.png ""), which indicates the next transaction in the trace. For easier problem isolation, transactions can be collapsed in the waterfall by clicking the icon to the left of the transactions. Transactions can also be expanded and viewed in detail by clicking on them. 
diff --git a/solutions/observability/apps/traces-2.md b/solutions/observability/apps/traces-2.md index 322cb17fe..dd59eb9fe 100644 --- a/solutions/observability/apps/traces-2.md +++ b/solutions/observability/apps/traces-2.md @@ -19,7 +19,7 @@ You can also use queries to filter and search the transactions shown on this pag :::{image} ../../../images/observability-apm-traces.png :alt: Example view of the Traces overview in Applications UI in Kibana -:class: screenshot +:screenshot: ::: @@ -36,5 +36,5 @@ Curate your own custom queries, or use the [**Service Map**](../../../solutions/ :::{image} ../../../images/observability-trace-explorer.png :alt: Trace explorer -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/traces.md b/solutions/observability/apps/traces.md index 942d6575c..fa3108a40 100644 --- a/solutions/observability/apps/traces.md +++ b/solutions/observability/apps/traces.md @@ -79,7 +79,7 @@ APM's timeline visualization provides a visual deep-dive into each of your appli :::{image} ../../../images/observability-apm-distributed-tracing.png :alt: Distributed tracing in the Applications UI -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/track-deployments-with-annotations.md b/solutions/observability/apps/track-deployments-with-annotations.md index 91e9d459b..239abb8f2 100644 --- a/solutions/observability/apps/track-deployments-with-annotations.md +++ b/solutions/observability/apps/track-deployments-with-annotations.md @@ -16,7 +16,7 @@ navigation_title: "Track deployments with annotations" :::{image} ../../../images/observability-apm-transaction-annotation.png :alt: Example view of transactions annotation in the Applications UI -:class: screenshot +:screenshot: ::: For enhanced visibility into your deployments, we offer deployment annotations on all transaction charts. This feature enables you to easily determine if your deployment has increased response times for an end-user, or if the memory/CPU footprint of your application has changed. Being able to quickly identify bad deployments enables you to rollback and fix issues without causing costly outages. diff --git a/solutions/observability/apps/transaction-sampling.md b/solutions/observability/apps/transaction-sampling.md index 03d1d3ea5..e7cae36b3 100644 --- a/solutions/observability/apps/transaction-sampling.md +++ b/solutions/observability/apps/transaction-sampling.md @@ -37,7 +37,7 @@ In the example in *Figure 1*, `Service A` initiates four transactions and has sa :::{image} ../../../images/observability-dt-sampling-example-1.png :alt: Distributed tracing and head based sampling example one -:class: screenshot +:screenshot: ::: In the example in *Figure 2*, `Service A` initiates four transactions and has a sample rate of `1` (`100%`). Again, the upstream sampling decision is respected, so the sample rate for all services will be `1` (`100%`). 
@@ -46,7 +46,7 @@ In the example in *Figure 2*, `Service A` initiates four transactions and has a :::{image} ../../../images/observability-dt-sampling-example-2.png :alt: Distributed tracing and head based sampling example two -:class: screenshot +:screenshot: ::: @@ -64,7 +64,7 @@ In the example in *Figure 3*, `Service A` is an Elastic-monitored service that i :::{image} ../../../images/observability-dt-sampling-continuation-strategy-restart_external.png :alt: Distributed tracing and head based sampling with restart_external continuation strategy -:class: screenshot +:screenshot: ::: Use the **`restart`** trace continuation strategy on an Elastic-monitored service to start a new trace regardless of whether the previous service had a `traceparent` header. This can be helpful if an Elastic-monitored service is publicly exposed, and you do not want tracing data to possibly be spoofed by user requests. diff --git a/solutions/observability/apps/transactions-2.md b/solutions/observability/apps/transactions-2.md index 9f276d95a..53e90f37f 100644 --- a/solutions/observability/apps/transactions-2.md +++ b/solutions/observability/apps/transactions-2.md @@ -11,7 +11,7 @@ A *transaction* describes an event captured by an Elastic APM agent instrumentin :::{image} ../../../images/observability-apm-transactions-overview.png :alt: Example view of transactions table in the Applications UI -:class: screenshot +:screenshot: ::: The **Latency**, **Throughput**, **Failed transaction rate**, **Time spent by span type**, and **Cold start rate** charts display information on all transactions associated with the selected service: @@ -53,7 +53,7 @@ The **Transactions** table displays a list of *transaction groups* for the selec :::{image} ../../../images/observability-apm-transactions-table.png :alt: Example view of the transactions table in the Applications UI in Kibana -:class: screenshot +:screenshot: ::: By default, transaction groups are sorted by *Impact*. Impact helps show the most used and slowest endpoints in your service - in other words, it’s the collective amount of pain a specific endpoint is causing your users. If there’s a particular endpoint you’re worried about, you can click on it to view the [transaction details](../../../solutions/observability/apps/transactions-2.md#transaction-details). @@ -73,7 +73,7 @@ The transaction overview page is customized for the JavaScript RUM agent. Specif :::{image} ../../../images/observability-apm-geo-ui.png :alt: average page load duration distribution -:class: screenshot +:screenshot: ::: Additional RUM goodies, like core vitals, and visitor breakdown by browser, location, and device, are available in the Observability User Experience tab. @@ -85,7 +85,7 @@ Selecting a transaction group will bring you to the **transaction** details. Thi :::{image} ../../../images/observability-apm-transactions-overview.png :alt: Example view of response time distribution -:class: screenshot +:screenshot: ::: @@ -95,7 +95,7 @@ The latency distribution shows a plot of all transaction durations for the given :::{image} ../../../images/observability-apm-transaction-duration-dist.png :alt: Example view of latency distribution graph -:class: screenshot +:screenshot: ::: Click and drag to select a latency duration *bucket* to display up to 500 trace samples. 
@@ -113,7 +113,7 @@ Each sample has a trace timeline waterfall that shows how a typical request in t :::{image} ../../../images/observability-apm-transaction-sample.png :alt: Example view of transactions sample -:class: screenshot +:screenshot: ::: ::::{note} @@ -155,7 +155,7 @@ To learn how to correlate your logs with your instrumented services, see [Stream :::{image} ../../../images/observability-apm-logs-tab.png :alt: APM logs tab -:class: screenshot +:screenshot: ::: @@ -165,5 +165,5 @@ Correlations surface attributes of your data that are potentially correlated wit :::{image} ../../../images/observability-correlations-hover.png :alt: APM lattency correlations -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/uptime-monitoring-deprecated.md b/solutions/observability/apps/uptime-monitoring-deprecated.md index 64470f557..e44b35e8f 100644 --- a/solutions/observability/apps/uptime-monitoring-deprecated.md +++ b/solutions/observability/apps/uptime-monitoring-deprecated.md @@ -39,7 +39,7 @@ In the {{uptime-app}}, you can monitor the status of network endpoints using the :::{image} ../../../images/observability-uptime-app.png :alt: {{uptime-app}} in {{kib}} -:class: screenshot +:screenshot: ::: To set up your first monitor, refer to [Get started with Uptime](get-started-with-uptime.md). @@ -53,7 +53,7 @@ In addition to the common name, associated monitors, issuer information, and SHA :::{image} ../../../images/observability-tls-certificates.png :alt: TLS certificates -:class: screenshot +:screenshot: ::: The table entries can be sorted by *status* and *valid until*. You can use the search bar at the top of the view to find values in most of the TLS-related fields in your Uptime indices. diff --git a/solutions/observability/apps/use-advanced-queries-on-application-data.md b/solutions/observability/apps/use-advanced-queries-on-application-data.md index 78a514630..f6f1f0069 100644 --- a/solutions/observability/apps/use-advanced-queries-on-application-data.md +++ b/solutions/observability/apps/use-advanced-queries-on-application-data.md @@ -19,7 +19,7 @@ When you type, you can begin to see some of the transaction fields available for :::{image} ../../../images/observability-apm-query-bar.png :alt: Example of the Kibana Query bar in Applications UI in Kibana -:class: screenshot +:screenshot: ::: ::::{tip} @@ -58,17 +58,17 @@ In this example, we’re interested in viewing all of the `APIRestController#cus :::{image} ../../../images/observability-advanced-discover.png :alt: View all transactions in bucket -:class: screenshot +:screenshot: ::: You can now explore the data until you find a specific transaction that you’re interested in. 
Copy that transaction’s `transaction.id` and paste it into APM to view the data in the context of APM: :::{image} ../../../images/observability-specific-transaction-search.png :alt: View specific transaction in Applications UI -:class: screenshot +:screenshot: ::: :::{image} ../../../images/observability-specific-transaction.png :alt: View specific transaction in Applications UI -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/apps/use-opentelemetry-with-apm.md b/solutions/observability/apps/use-opentelemetry-with-apm.md index 9e0f4201d..6ce57fade 100644 --- a/solutions/observability/apps/use-opentelemetry-with-apm.md +++ b/solutions/observability/apps/use-opentelemetry-with-apm.md @@ -35,7 +35,7 @@ Elastic offers several distributions of OpenTelemetry language SDKs. A *distribu :::{image} ../../../images/observability-apm-otel-distro.png :alt: apm otel distro -:class: screenshot +:screenshot: ::: With an Elastic Distribution of OpenTelemetry language SDK you have access to all the features of the OpenTelemetry SDK that it customizes, plus: @@ -63,7 +63,7 @@ Use the OpenTelemetry API/SDKs with [Elastic APM agents](../../../solutions/obse :::{image} ../../../images/observability-apm-otel-api-sdk-elastic-agent.png :alt: apm otel api sdk elastic agent -:class: screenshot +:screenshot: ::: This allows you to reuse your existing OpenTelemetry instrumentation to create Elastic APM transactions and spans — avoiding vendor lock-in and having to redo manual instrumentation. @@ -86,7 +86,7 @@ You can set up an [OpenTelemetry Collector](https://opentelemetry.io/docs/collec :::{image} ../../../images/observability-apm-otel-api-sdk-collector.png :alt: apm otel api sdk collector -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/solutions/observability/apps/use-synthetics-recorder.md b/solutions/observability/apps/use-synthetics-recorder.md index e616f498d..9e4de7bbc 100644 --- a/solutions/observability/apps/use-synthetics-recorder.md +++ b/solutions/observability/apps/use-synthetics-recorder.md @@ -16,7 +16,7 @@ You can use the Synthetics Recorder to [write a synthetic test](../../../solutio :::{image} ../../../images/observability-synthetics-create-test-script-recorder.png :alt: Elastic Synthetics Recorder after recording a journey and clicking Export -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/use-synthetics-with-traffic-filters.md b/solutions/observability/apps/use-synthetics-with-traffic-filters.md index 551c186db..8a2667d95 100644 --- a/solutions/observability/apps/use-synthetics-with-traffic-filters.md +++ b/solutions/observability/apps/use-synthetics-with-traffic-filters.md @@ -52,7 +52,7 @@ For example, if you had a {{private-location}} running with a public CIDR block :::{image} ../../../images/observability-synthetics-traffic-filters-create-filter.png :alt: Create a traffic filter in {{ecloud}} -:class: screenshot +:screenshot: ::: Once the traffic filter has been created, it needs to be assigned to the deployment from which you’re managing monitors (the deployment containing the {{es}} cluster where your results need to go).
diff --git a/solutions/observability/apps/view-monitor-status.md b/solutions/observability/apps/view-monitor-status.md index 56da35990..f9f40d028 100644 --- a/solutions/observability/apps/view-monitor-status.md +++ b/solutions/observability/apps/view-monitor-status.md @@ -24,7 +24,7 @@ To get started with your analysis, use the automated filter options, such as loc :::{image} ../../../images/observability-uptime-filter-bar.png :alt: Uptime filter bar -:class: screenshot +:screenshot: ::: @@ -36,7 +36,7 @@ Next to the counts, a histogram shows a count of **Pings over time** with a brea :::{image} ../../../images/observability-monitors-chart.png :alt: Monitors chart -:class: screenshot +:screenshot: ::: Information about individual monitors is displayed in the monitor list and provides a quick way to navigate to a detailed visualization for hosts or endpoints. @@ -53,7 +53,7 @@ Expand the table row for a specific monitor on the list to view additional infor :::{image} ../../../images/observability-monitors-list.png :alt: Monitors list -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/apps/work-with-params-secrets.md b/solutions/observability/apps/work-with-params-secrets.md index 33aa8c6c1..ec1b14334 100644 --- a/solutions/observability/apps/work-with-params-secrets.md +++ b/solutions/observability/apps/work-with-params-secrets.md @@ -42,7 +42,7 @@ From any page in the Synthetics UI: :::{image} ../../../images/observability-synthetics-params-secrets-kibana-define.png :alt: Global parameters tab on the Synthetics Settings page -:class: screenshot +:screenshot: ::: @@ -121,14 +121,14 @@ To use a param in a lightweight monitor that is created in the Synthetics UI, wr :::{image} ../../../images/serverless-synthetics-params-secrets-kibana-use-lightweight.png :alt: Use a param in a lightweight monitor created in the Synthetics UI -:class: screenshot +:screenshot: ::: To use a param in a browser monitor that is created in the Synthetics UI, add `params.` before the name of the param (for example, `params.my_url`). :::{image} ../../../images/observability-synthetics-params-secrets-kibana-use-lightweight.png :alt: Use a param in a lightweight monitor created in the Synthetics UI -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cicd.md b/solutions/observability/cicd.md index f4e076d0a..6f227a935 100644 --- a/solutions/observability/cicd.md +++ b/solutions/observability/cicd.md @@ -48,19 +48,19 @@ The Jenkins health dashboards provide insights on the build executions, the fail :::{image} ../../images/observability-ci-cd-overview.png :alt: CI/CD overview :title: Jenkins KPIs in Elastic {{observability}} -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-jenkins-kpis.png :alt: Jenkins KPIs :title: Jenkins Provisioning KPIs in Elastic {{observability}} -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-jenkins-jvm-indicators.png :alt: Jenkins JVM health indicators :title: Jenkins JVM health indicators in Elastic {{observability}} -:class: screenshot +:screenshot: ::: @@ -75,7 +75,7 @@ In the following image, a Jenkins CI build failed, and its exceptions are report :::{image} ../../images/observability-jenkins-pipeline-build.png :alt: Jenkins pipeline builds :title: Jenkins pipeline build error in Elastic {{observability}} -:class: screenshot +:screenshot: ::: The Errors overview screen provides a high-level view of the exceptions that CI builds catch. 
Similar errors are grouped to quickly see which ones are affecting your services and allow you to take action to rectify them. @@ -83,13 +83,13 @@ The Errors overview screen provides a high-level view of the exceptions that CI :::{image} ../../images/observability-jenkins-pipeline-errors.png :alt: Jenkins pipeline build errors :title: Jenkins jobs and pipelines errors in Elastic {{observability}} -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-concourse-ci-traces.png :alt: Concourse CI traces view :title: Concourse CI pipeline execution as a trace in Elastic {{observability}} -:class: screenshot +:screenshot: ::: @@ -109,7 +109,7 @@ The Applications Services view in Elastic {{observability}} provides a view of a :::{image} ../../images/observability-jenkins-servers.png :alt: Jenkins servers view :title: Jenkins servers in Elastic {{observability}} -:class: screenshot +:screenshot: ::: The Service page provides more granular insights into your CI/CD workflows by breaking down health and performance metrics by pipeline. To quickly view which pipelines experience the most errors, are the most frequently executed, or are the slowest, you can sort and filter the list. @@ -117,7 +117,7 @@ The Service page provides more granular insights into your CI/CD workflows by br :::{image} ../../images/observability-jenkins-server.png :alt: Jenkins server view :title: A Jenkins server in Elastic {{observability}} -:class: screenshot +:screenshot: ::: @@ -128,7 +128,7 @@ Once you’ve identified the pipeline you want to troubleshoot, you can drill do :::{image} ../../images/observability-jenkins-pipeline-overview.png :alt: Jenkins pipeline overview :title: Performance overview of a Jenkins pipeline in Elastic {{observability}} -:class: screenshot +:screenshot: ::: The pipelines and traditional jobs are instrumented automatically. If you spot a slow or failing build and need to understand what’s happening, you can drill into the trace view of the build to look for the high duration jobs or jobs with errors. You can then dig into the details to understand the source of the error. @@ -136,7 +136,7 @@ The pipelines and traditional jobs are instrumented automatically. If you spot a :::{image} ../../images/observability-jenkins-pipeline-trace.png :alt: Trace of a Jenkins pipeline build :title: A Jenkins pipeline build as a trace in Elastic {{observability}} -:class: screenshot +:screenshot: ::: To investigate further, you can view the details of the build captured as labels. @@ -144,7 +144,7 @@ To investigate further, you can view the details of the build captured as labels :::{image} ../../images/observability-jenkins-pipeline-context.png :alt: Attributes of a Jenkins pipeline execution :title: Contextual attributes of a Jenkins pipeline execution in Elastic {{observability}} -:class: screenshot +:screenshot: ::: @@ -186,7 +186,7 @@ The Jenkins OpenTelemetry Plugin provides pipeline log storage in {{es}} while e :::{image} ../../images/observability-ci-cd-visualize-logs-kibana-and-jenkins-console.png :alt: Jenkins Console Output page displaying both log contents and a link to view logs in Elastic {{observability}} -:class: screenshot +:screenshot: ::: This more advanced setup requires connecting the Jenkins Controller to {{es}} with read permissions on the `logs-apm.app` and preferably on the Metadata of the {{ilm-init}} policy of this index template (by default it’s the `logs-apm.app_logs-default_policy` policy). Use "Validate {{es}} configuration" to verify the setup. 
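The read access described above can be granted in several ways. As a rough, non-authoritative sketch, assuming you create a dedicated role for the Jenkins Controller in Kibana Dev Tools (the role name, index pattern, and privilege choices are illustrative assumptions to validate against your own cluster):

```
# Illustrative role: read access to APM application logs plus ILM metadata (names and privileges are assumptions)
PUT _security/role/jenkins_apm_logs_reader
{
  "cluster": [ "read_ilm" ],
  "indices": [
    {
      "names": [ "logs-apm.app*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
```

Assign such a role to the user or API key that the Jenkins OpenTelemetry Plugin connects with, then use "Validate {{es}} configuration" to confirm the access works.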
@@ -203,7 +203,7 @@ Visualizing logs exclusively in {{kib}} involves a simpler setup that doesn’t :::{image} ../../images/observability-ci-cd-visualize-logs-kibana-console.png :alt: Jenkins Console Output page with link to view logs in Elastic {{observability}} -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-ci-cd-visualize-logs-kibana-architecture.png @@ -226,7 +226,7 @@ Observing CI/CD pipelines is achieved by instrumenting the different CI/CD and D :::{image} ../../images/observability-jenkins-plugin-manager.png :alt: Jenkins Plugin Manager - :class: screenshot + :screenshot: ::: 2. Click the **Available** tab, and search for **OpenTelemetry**. @@ -248,7 +248,7 @@ The OpenTelemetry plugin needs to be configured to report data to an OpenTelemet :::{image} ../../images/observability-configure-otel-plugin.png :alt: Configure OTEL plugin - :class: screenshot + :screenshot: ::: * If using the Elastic API Key authorization, define the **Header Authentications**: @@ -261,7 +261,7 @@ The OpenTelemetry plugin needs to be configured to report data to an OpenTelemet :::{image} ../../images/observability-kibana-url.png :alt: Define {{kib}} URL - :class: screenshot + :screenshot: ::: 2. Finally, there are additional settings to configure: @@ -292,7 +292,7 @@ For instance, you can follow the below steps: :::{image} ../../images/observability-jenkins-dashboard-import.png :alt: Import {{kib}} dashboard :title: Import dashboard in {{kib}} -:class: screenshot +:screenshot: ::: * The new dashboard is now ready to be used: @@ -300,13 +300,13 @@ For instance, you can follow the below steps: :::{image} ../../images/observability-jenkins-dashboard-ready.png :alt: Jenkins dashboard in {{kib}} :title: Jenkins dashboard in {{kib}} is ready -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-jenkins-dashboard.png :alt: Jenkins dashboard :title: Jenkins dashboard in {{kib}} -:class: screenshot +:screenshot: ::: @@ -360,7 +360,7 @@ When invoking Maven builds with Jenkins, it’s unnecessary to use environment v :::{image} ../../images/observability-jenkins-maven-pipeline.png :alt: Maven builds in Jenkins :title: A Jenkins pipeline executing Maven builds -:class: screenshot +:screenshot: ::: To learn more, see the [integration of Maven builds with Elastic {{observability}}](https://github.com/open-telemetry/opentelemetry-java-contrib/tree/main/maven-extension). @@ -377,7 +377,7 @@ The context propagation from the Jenkins job or pipeline is passed to the Ansibl :::{image} ../../images/observability-jenkins-ansible-pipeline.png :alt: Ansible playbooks in Jenkins :title: Visibility into your Ansible playbooks -:class: screenshot +:screenshot: ::: This integration feeds, out of the box, the Service Map with all the services that are connected to the Ansible Playbook. All of these features can help you quickly and visually assess your services used in your provisioning and Continuous Deployment. 
@@ -385,7 +385,7 @@ This integration feeds, out of the box, the Service Map with all the services th :::{image} ../../images/observability-ansible-service-map.png :alt: Ansible service map view :title: ServiceMap view of a Jenkins pipeline execution instrumented with the Ansible plugin -:class: screenshot +:screenshot: ::: @@ -400,13 +400,13 @@ To inject the environment variables and service details, use custom credential t :::{image} ../../images/observability-ansible-automation-apm-endpoint.png :alt: Applications Services Endpoint in Ansible Tower :title: An Applications Services Endpoint in Ansible AWX/Tower -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-ansible-automation-apm-service-details.png :alt: Custom fields in Ansible Tower :title: Custom fields in Ansible AWX/Tower -:class: screenshot +:screenshot: ::: Want to learn more? This [blog post](https://www.elastic.co/blog/5-questions-about-ansible-that-elastic-observability-can-answer) provides a great overview of how all of these pieces work together. @@ -460,13 +460,13 @@ make login build push :::{image} ../../images/observability-jenkins-makefile.png :alt: Jenkins build executing an instrumented Makefile :title: A Jenkins build executing a Makefile instrumented with the otel-cli in Elastic {{observability}} -:class: screenshot +:screenshot: ::: :::{image} ../../images/observability-jenkins-service-map.png :alt: Jenkins service map view :title: ServiceMap view of a Jenkins pipeline execution instrumented with the otel-cli -:class: screenshot +:screenshot: ::: @@ -486,7 +486,7 @@ pytest --otel-session-name='My_Test_cases' :::{image} ../../images/observability-pytest-otel-pipeline.png :alt: Pytest tests :title: Visibility into your Pytest tests -:class: screenshot +:screenshot: ::: @@ -506,7 +506,7 @@ Once Concourse CI tracing is configured, Concourse CI pipeline executions are re :::{image} ../../images/observability-jenkins-concourse.png :alt: Concourse CI pipeline execution :title: A Concourse CI pipeline execution in Elastic {{observability}} -:class: screenshot +:screenshot: ::: The Concourse CI doesn’t report health metrics through OpenTelemetry. However, you can use the [OpenTelemetry Collector Span Metrics Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/spanmetricsprocessor#span-metrics-processor) to derive KPI metrics, such as throughput and the error rate of pipelines, from pipeline execution traces. diff --git a/solutions/observability/cloud/ingestion-options.md b/solutions/observability/cloud/ingestion-options.md index 879d6f1ea..0ea78516f 100644 --- a/solutions/observability/cloud/ingestion-options.md +++ b/solutions/observability/cloud/ingestion-options.md @@ -26,6 +26,6 @@ The high-level architecture is shown below.
:::{image} ../../../images/observability-ingest-options-overview.png :alt: Ingest options -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md index d818d0d74..9639bcf3a 100644 --- a/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md +++ b/solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md @@ -67,7 +67,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a :::{image} ../../../images/observability-ec2-overview-dashboard.png :alt: Screenshot showing the EC2 overview dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md index 0fdcf69e8..5673a45bb 100644 --- a/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md +++ b/solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md @@ -69,7 +69,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a :::{image} ../../../images/observability-kinesis-dashboard.png :alt: Screenshot showing the Kinesis overview dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md index 9bcad028c..d7c9dbdb2 100644 --- a/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md +++ b/solutions/observability/cloud/monitor-amazon-simple-queue-service-sqs.md @@ -65,7 +65,7 @@ For example, to see an overview of your SQS metrics in {{kib}}, go to the **Dash :::{image} ../../../images/observability-sqs-dashboard.png :alt: Screenshot showing the SQS overview dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md index 3603e04cd..b0804fd60 100644 --- a/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md +++ b/solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md @@ -68,7 +68,7 @@ For more information {{agent}} and integrations, refer to the [{{fleet}} and {{a :::{image} ../../../images/observability-s3-dashboard.png :alt: Screenshot showing the S3 dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-elastic-agent.md b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-elastic-agent.md index 1a5bb2228..6ad270eb6 100644 --- a/solutions/observability/cloud/monitor-amazon-web-services-aws-with-elastic-agent.md +++ b/solutions/observability/cloud/monitor-amazon-web-services-aws-with-elastic-agent.md @@ -245,14 +245,14 @@ The AWS integration also comes with pre-built dashboards that you can use to vis :::{image} ../../../images/observability-agent-tut-vpcflowlog-dashboard.png :alt: Screenshot of the VPC Flow Log Overview dashboard -:class: screenshot +:screenshot: ::: Next, open the dashboard called **[Logs AWS] S3 Server Access Log Overview**: :::{image} ../../../images/observability-agent-tut-s3accesslog-dashboard.png :alt: Screenshot of the S3 Server Access Log Overview dashboard -:class: screenshot +:screenshot: ::: @@ -303,7 +303,7 @@ Now that the metrics are streaming into {{es}}, you can visualize them in {{kib} :::{image} 
../../../images/observability-agent-tut-ec2-metrics-discover.png :alt: Screenshot of the Discover app showing EC2 metrics -:class: screenshot +:screenshot: ::: The AWS integration also comes with pre-built dashboards that you can use to visualize the data. Find **Dashboards** in the main menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). @@ -312,14 +312,14 @@ Search for EC2 and select the dashboard called **[Metrics AWS] EC2 Overview**: :::{image} ../../../images/observability-agent-tut-ec2-overview-dashboard.png :alt: Screenshot of the EC2 Overview dashboard -:class: screenshot +:screenshot: ::: To track your AWS billing, open the **[Metrics AWS] Billing Overview** dashboard: :::{image} ../../../images/observability-agent-tut-billing-dashboard.png :alt: Screenshot of the Billing Overview dashboard -:class: screenshot +:screenshot: ::: Congratulations! You have completed the tutorial. diff --git a/solutions/observability/cloud/monitor-aws-network-firewall-logs.md b/solutions/observability/cloud/monitor-aws-network-firewall-logs.md index a866f51c5..cd205b5af 100644 --- a/solutions/observability/cloud/monitor-aws-network-firewall-logs.md +++ b/solutions/observability/cloud/monitor-aws-network-firewall-logs.md @@ -117,5 +117,5 @@ Navigate to {{kib}} and choose **Visualize your logs with Discover**. :::{image} ../../../images/observability-firehose-networkfirewall-discover.png :alt: Visualize Network Firewall logs with Discover -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-microsoft-azure-openai.md b/solutions/observability/cloud/monitor-microsoft-azure-openai.md index 2e6c1d1f9..423afbb4b 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-openai.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-openai.md @@ -271,7 +271,7 @@ From here, filter your data and dive deeper into individual logs to find informa :::{image} ../../../images/observability-azure-openai-discover.png :alt: screenshot of the discover main page -:class: screenshot +:screenshot: ::: For more on using Discover and creating data views, refer to the [Discover](../../../explore-analyze/discover.md) documentation. @@ -446,14 +446,14 @@ After ingesting your data, you can filter and explore it using Discover in {{kib :::{image} ../../../images/observability-azure-openai-apm-discover.png :alt: screenshot of the discover main page -:class: screenshot +:screenshot: ::: Then, use these fields to create visualizations and build dashboards. Refer to the [Dashboard and visualizations](../../../explore-analyze/dashboards.md) documentation for more information. :::{image} ../../../images/observability-azure-openai-apm-dashboard.png :alt: screenshot of the Azure OpenAI APM dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md index a957e632f..54c2c857e 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-with-azure-native-isv-service.md @@ -59,7 +59,7 @@ Microsoft Azure allows you to find, deploy, and manage {{es}} from within the Az :::{image} ../../../images/observability-monitor-azure-native-create-elastic-resource.png :alt: Screenshot of Elastic resource creation in Azure - :class: screenshot + :screenshot: ::: 7. 
To create the {{es}} deployment, click **Create**. @@ -67,7 +67,7 @@ Microsoft Azure allows you to find, deploy, and manage {{es}} from within the Az :::{image} ../../../images/observability-monitor-azure-native-elastic-deployment.png :alt: Screenshot of deployment details for Elastic resource in Azure - :class: screenshot + :screenshot: ::: 9. Click **Accept** (if necessary) to grant permissions to use your Azure account, then log in to {{ecloud}} using your Azure credentials as a single sign-on. @@ -83,7 +83,7 @@ To ingest Azure subscription and resource logs into Elastic, you use the Azure N :::{image} ../../../images/observability-monitor-azure-native-elastic-config-logs-metrics.png :alt: Screenshot of logs and metrics configuration for Elastic resource in Azure - :class: screenshot + :screenshot: ::: ::::{note} @@ -106,7 +106,7 @@ To ingest Azure subscription and resource logs into Elastic, you use the Azure N :::{image} ../../../images/observability-monitor-azure-native-elastic-vms.png :alt: Screenshot that shows VMs selected for logs and metrics collection - :class: screenshot + :screenshot: ::: 3. Wait until the extension is installed and sending data (if the list does not update, click **Refresh** ). diff --git a/solutions/observability/cloud/monitor-microsoft-azure-with-elastic-agent.md b/solutions/observability/cloud/monitor-microsoft-azure-with-elastic-agent.md index 3cdc16982..95d15318e 100644 --- a/solutions/observability/cloud/monitor-microsoft-azure-with-elastic-agent.md +++ b/solutions/observability/cloud/monitor-microsoft-azure-with-elastic-agent.md @@ -41,7 +41,7 @@ The [Azure REST API](https://learn.microsoft.com/en-us/rest/api/azure/) allows y :::{image} ../../../images/observability-agent-tut-azure-register-app.png :alt: Screenshot of the application registration - :class: screenshot + :screenshot: ::: Copy the **Application (client) ID** and save it for later. This ID is required to configure {{agent}} to connect to your Azure account. @@ -50,7 +50,7 @@ The [Azure REST API](https://learn.microsoft.com/en-us/rest/api/azure/) allows y :::{image} ../../../images/observability-agent-tut-azure-click-client-secret.png :alt: Screenshot of adding a new client secret - :class: screenshot + :screenshot: ::: 5. Type a description of the secret and select an expiration. Click **Add** to create the client secret. Under **Value**, copy the secret value and save it (along with your client ID) for later. @@ -78,7 +78,7 @@ After creating the Azure service principal, you need to grant it the correct per :::{image} ../../../images/observability-agent-tut-azure-add-role-assignment.png :alt: Screen capture of adding a role assignment - :class: screenshot + :screenshot: ::: 10. Click **Review + assign** to grant the service principal access to your subscription. @@ -115,7 +115,7 @@ To add the integration: :::{image} ../../../images/observability-agent-tut-azure-integration-settings.png :alt: Screenshot of integration settings for Azure - :class: screenshot + :screenshot: ::: 6. Make sure the **Collect Azure Billing metrics** selector is turned on. @@ -161,7 +161,7 @@ Now that the metrics are streaming to {{es}}, you can visualize them in {{kib}}. :::{image} ../../../images/observability-agent-tut-azure-billing-dashboard.png :alt: Screenshot of Azure billing overview dashboard -:class: screenshot +:screenshot: ::: Keep in mind {{agent}} collects data every 24 hours. 
@@ -187,7 +187,7 @@ To create an Azure event hub: :::{image} ../../../images/observability-agent-tut-azure-create-eventhub.png :alt: Screenshot of window for creating an event hub namespace - :class: screenshot + :screenshot: ::: 5. Click **Create** to deploy the resource. @@ -231,7 +231,7 @@ To configure diagnostic settings for the Azure Monitor service: :::{image} ../../../images/observability-agent-tut-azure-log-categories.png :alt: Screenshot of Azure diagnostic settings showing Administrative - :class: screenshot + :screenshot: ::: 7. Save the diagnostic settings. @@ -268,7 +268,7 @@ To add the integration: :::{image} ../../../images/observability-agent-tut-azure-activity-log-settings.png :alt: Screenshot of integration settings for Azure activity logs - :class: screenshot + :screenshot: ::: 6. Make sure the **Collect Azure activity logs from Event Hub** selector is turned on. @@ -287,7 +287,7 @@ The Azure activity logs integration also comes with pre-built dashboards that yo :::{image} ../../../images/observability-agent-tut-azure-activity-logs-dashboard.png :alt: Screenshot of Azure activity logs dashboard -:class: screenshot +:screenshot: ::: Congratulations! You have completed the tutorial. diff --git a/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md b/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md index 8c5a40276..69d8e12c7 100644 --- a/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md +++ b/solutions/observability/cloud/monitor-virtual-private-cloud-vpc-flow-logs.md @@ -53,14 +53,14 @@ You want to see what IP addresses are trying to hit your web servers. Then, you :::{image} ../../../images/observability-discover-ip-addresses.png :alt: IP addresses in Discover -:class: screenshot +:screenshot: ::: You can also create a visualization by choosing **Visualize**. You get the following donut chart, which you can add to a dashboard. :::{image} ../../../images/observability-discover-visualize-chart.png :alt: Visualization chart in Discover -:class: screenshot +:screenshot: ::: On top of the IP addresses, you also want to know what port is being hit on your web servers. @@ -69,7 +69,7 @@ If you select the destination port field, the pop-up shows that port `8081` is b :::{image} ../../../images/observability-discover-destination-port.png :alt: Destination port in Discover -:class: screenshot +:screenshot: ::: @@ -82,14 +82,14 @@ Elastic Observability provides the ability to detect anomalies on logs using Mac :::{image} ../../../images/observability-ml-anomalies-detection.png :alt: Anomalies detection with ML -:class: screenshot +:screenshot: ::: For your VPC flow log, you can enable both features. When you look at what was detected for anomalous log entry rates, you get the following results: :::{image} ../../../images/observability-ml-anomalies-results.png :alt: Anomalies results with ML -:class: screenshot +:screenshot: ::: Elastic detected a spike in logs when you turned on VPC flow logs for your application. The rate change is being detected because you’re also ingesting VPC flow logs from another application. @@ -98,14 +98,14 @@ You can drill down into this anomaly with ML and analyze further. :::{image} ../../../images/observability-ml-anomalies-explorer.png :alt: Anomalies explorer in ML -:class: screenshot +:screenshot: ::: Because you know that a spike exists, you can also use the Elastic AIOps Labs Explain Log Rate Spikes capability. 
By grouping them, you can see what is causing some of the spikes. :::{image} ../../../images/observability-ml-spike.png :alt: Spikes in ML -:class: screenshot +:screenshot: ::: @@ -117,6 +117,6 @@ You can enhance this baseline dashboard with the visualizations you find in Disc :::{image} ../../../images/observability-flow-log-dashboard.png :alt: Flow logs dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/cloud/monitor-web-application-firewall-waf-logs.md b/solutions/observability/cloud/monitor-web-application-firewall-waf-logs.md index d9747bbe9..f70cf8021 100644 --- a/solutions/observability/cloud/monitor-web-application-firewall-waf-logs.md +++ b/solutions/observability/cloud/monitor-web-application-firewall-waf-logs.md @@ -131,5 +131,5 @@ Navigate to Kibana and visualize the first WAF logs in your {{stack}}. :::{image} ../../../images/observability-firehose-waf-logs.png :alt: Firehose WAF logs in Kibana -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/get-started/add-data-from-splunk.md b/solutions/observability/get-started/add-data-from-splunk.md index 1d4d02fb2..08ee7b6bd 100644 --- a/solutions/observability/get-started/add-data-from-splunk.md +++ b/solutions/observability/get-started/add-data-from-splunk.md @@ -16,7 +16,7 @@ These integrations work by using the `httpjson` input in {{agent}} to run a Splu :::{image} ../../../images/observability-elastic-agent-splunk.png :alt: Splunk integration components -:class: screenshot +:screenshot: ::: To ingest Nginx data from Splunk, perform the following steps. The options are the same for Apache, AWS CloudTrail, and Zeek. @@ -45,7 +45,7 @@ Enable "Collect logs from third-party REST API" and disable both "Collect logs f :::{image} ../../../images/observability-kibana-fleet-third-party-rest-api.png :alt: {{fleet}} showing enabling third-party REST API -:class: screenshot +:screenshot: ::: @@ -61,7 +61,7 @@ SSL Configuration is available under the "Advanced options". These may be neces :::{image} ../../../images/observability-kibana-fleet-third-party-rest-settings.png :alt: {{fleet}} showing enabling third-party REST API settings -:class: screenshot +:screenshot: ::: @@ -79,7 +79,7 @@ Tags may be added in the "Advanced options". 
For example, if you’d like to ta :::{image} ../../../images/observability-kibana-fleet-third-party-rest-dataset-settings.png :alt: {{fleet}} showing enabling third-party REST API settings -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/get-started/get-started-with-dashboards.md b/solutions/observability/get-started/get-started-with-dashboards.md index 02ca1c419..16e16ccea 100644 --- a/solutions/observability/get-started/get-started-with-dashboards.md +++ b/solutions/observability/get-started/get-started-with-dashboards.md @@ -13,7 +13,7 @@ In your Observability project, go to **Dashboards** to see installed dashboards :::{image} ../../../images/serverless-dashboards.png :alt: Screenshot showing list of System dashboards -:class: screenshot +:screenshot: ::: Notice you can filter the list of dashboards: diff --git a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md index 298f37688..09c6e843c 100644 --- a/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md +++ b/solutions/observability/get-started/quickstart-collect-data-with-aws-firehose.md @@ -152,7 +152,7 @@ The following table shows the type of data ingested by the supported AWS service :::{image} ../../../images/observability-quickstart-aws-firehose-entry-point.png :alt: AWS Firehose entry point - :class: screenshot + :screenshot: ::: 3. Click **Create Firehose Stream in AWS** to create a CloudFormation stack from the CloudFormation template. @@ -170,7 +170,7 @@ The following table shows the type of data ingested by the supported AWS service :::{image} ../../../images/serverless-quickstart-aws-firehose-entry-point.png :alt: AWS Firehose entry point - :class: screenshot + :screenshot: ::: 4. Click **Create Firehose Stream in AWS** to create a CloudFormation stack from the CloudFormation template. @@ -186,14 +186,14 @@ After installation is complete and all relevant data is flowing into Elastic, th :::{image} ../../../images/observability-quickstart-aws-firehose-dashboards.png :alt: AWS Firehose dashboards -:class: screenshot +:screenshot: ::: Here is an example of the VPC Flow logs dashboard: :::{image} ../../../images/observability-quickstart-aws-firehose-vpc-flow.png :alt: AWS Firehose VPC flow -:class: screenshot +:screenshot: ::: Refer to [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features. diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md index 1ac8c2696..963db96c6 100644 --- a/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md +++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-elastic-agent.md @@ -67,7 +67,7 @@ The script also generates an {{agent}} configuration file that you can use with :::{image} ../../../images/observability-quickstart-monitor-hosts-entry-point.png :alt: Host monitoring entry point - :class: screenshot + :screenshot: ::: 3. Copy the install command. @@ -91,7 +91,7 @@ The script also generates an {{agent}} configuration file that you can use with :::{image} ../../../images/serverless-quickstart-monitor-hosts-entry-point.png :alt: Host monitoring entry point - :class: screenshot + :screenshot: ::: 4. Copy the install command. 
@@ -140,7 +140,7 @@ For example, you can navigate the **Host overview** dashboard to explore detaile :::{image} ../../../images/observability-quickstart-host-overview.png :alt: Host overview dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md index f912bca96..d31b8c422 100644 --- a/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md +++ b/solutions/observability/get-started/quickstart-monitor-hosts-with-opentelemetry.md @@ -68,7 +68,7 @@ Follow these steps to collect logs and metrics using the EDOT Collector: :::{image} ../../../images/observability-quickstart-monitor-hosts-otel-entry-point.png :alt: Host monitoring entry point - :class: screenshot + :screenshot: ::: 3. Select the appropriate platform. @@ -98,7 +98,7 @@ Logs are collected from setup onward, so you won’t see logs that occurred befo :::{image} ../../../images/serverless-quickstart-monitor-hosts-otel-entry-point.png :alt: Host monitoring entry point - :class: screenshot + :screenshot: ::: 5. Select the appropriate platform, and complete the following: diff --git a/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md b/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md index 0ad8f0abb..7fead7d73 100644 --- a/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md +++ b/solutions/observability/get-started/quickstart-monitor-kubernetes-cluster-with-elastic-agent.md @@ -62,7 +62,7 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu :::{image} ../../../images/observability-quickstart-k8s-entry-point.png :alt: Kubernetes entry point - :class: screenshot + :screenshot: ::: 3. To install the Elastic Agent on your host, copy and run the install command. @@ -86,7 +86,7 @@ The kubectl command installs the standalone Elastic Agent in your Kubernetes clu :::{image} ../../../images/serverless-quickstart-k8s-entry-point.png :alt: Kubernetes entry point - :class: screenshot + :screenshot: ::: 4. To install the Elastic Agent on your host, copy and run the install command. @@ -107,7 +107,7 @@ After installation is complete and all relevant data is flowing into Elastic, th :::{image} ../../../images/observability-quickstart-k8s-overview.png :alt: Kubernetes overview dashboard -:class: screenshot +:screenshot: ::: Furthermore, you can access other useful prebuilt dashboards for monitoring Kubernetes resources, for example running pods per namespace, as well as the resources they consume, like CPU and memory. 
diff --git a/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md b/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md index 1f62b87cd..dbc98bcc0 100644 --- a/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md +++ b/solutions/observability/get-started/quickstart-unified-kubernetes-observability-with-elastic-distributions-of-opentelemetry-edot.md @@ -65,7 +65,7 @@ For a more detailed description of the components and advanced configuration, re :::{image} ../../../images/observability-quickstart-k8s-otel-entry-point.png :alt: Kubernetes-OTel entry point - :class: screenshot + :screenshot: ::: 3. Follow the on-screen instructions to install all needed components. @@ -95,7 +95,7 @@ For a more detailed description of the components and advanced configuration, re :::{image} ../../../images/serverless-quickstart-k8s-otel-entry-point.png :alt: Kubernetes-OTel entry point - :class: screenshot + :screenshot: ::: 4. Follow the on-screen instructions to install all needed components. @@ -127,7 +127,7 @@ After installation is complete and all relevant data is flowing into Elastic, th :::{image} ../../../images/observability-quickstart-k8s-otel-dashboard.png :alt: Kubernetes overview dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/get-started/what-is-elastic-observability.md b/solutions/observability/get-started/what-is-elastic-observability.md index da3a16656..03c97cd25 100644 --- a/solutions/observability/get-started/what-is-elastic-observability.md +++ b/solutions/observability/get-started/what-is-elastic-observability.md @@ -40,7 +40,7 @@ The **Service** inventory provides a quick, high-level overview of the health an :::{image} ../../../images/serverless-services-inventory.png :alt: Service inventory showing health and performance of instrumented services -:class: screenshot +:screenshot: ::: [Learn more about Application performance monitoring (APM) →](../../../solutions/observability/apps/application-performance-monitoring-apm.md) @@ -54,7 +54,7 @@ On the {{observability}} **Overview** page, the **Hosts** table shows your top h :::{image} ../../../images/observability-metrics-summary.png :alt: Summary of Hosts on the {{observability}} overview page -:class: screenshot +:screenshot: ::: You can then drill down into the {{infrastructure-app}} by clicking **Show inventory**. Here you can monitor and filter your data by hosts, pods, containers, or EC2 instances and create custom groupings such as availability zones or namespaces. @@ -72,7 +72,7 @@ On the {{observability}} **Overview** page, the **{{user-experience}}** chart pr :::{image} ../../../images/observability-obs-overview-ue.png :alt: Summary of {{user-experience}} metrics on the {{observability}} overview page -:class: screenshot +:screenshot: ::: You can then drill down into the {{user-experience}} dashboard by clicking **Show dashboard** to see data by URL, operating system, browser, and location.
@@ -102,7 +102,7 @@ On the **Alerts** page, the **Alerts** table provides a snapshot of alerts occur :::{image} ../../../images/serverless-observability-alerts-overview.png :alt: Summary of Alerts on the Observability overview page -:class: screenshot +:screenshot: ::: [Learn more about alerting → ](../../../solutions/observability/incident-management/alerting.md) @@ -116,7 +116,7 @@ From the SLO overview list, you can see all of your SLOs and a quick summary of :::{image} ../../../images/serverless-slo-dashboard.png :alt: Dashboard showing list of SLOs -:class: screenshot +:screenshot: ::: [Learn more about SLOs → ](../../../solutions/observability/incident-management/service-level-objectives-slos.md) @@ -127,7 +127,7 @@ Collect and share information about observability issues by creating cases. Case :::{image} ../../../images/serverless-cases.png :alt: Screenshot showing list of cases -:class: screenshot +:screenshot: ::: [Learn more about cases → ](../../../solutions/observability/incident-management/cases.md) @@ -142,7 +142,7 @@ Reduce the time and effort required to detect, understand, investigate, and reso :::{image} ../../../images/serverless-log-rate-analysis.png :alt: Log rate analysis page showing log rate spike -:class: screenshot +:screenshot: ::: [Learn more about machine learning and AIOps →](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md) \ No newline at end of file diff --git a/solutions/observability/incident-management/alerting.md b/solutions/observability/incident-management/alerting.md index ed63df6bc..caa60218b 100644 --- a/solutions/observability/incident-management/alerting.md +++ b/solutions/observability/incident-management/alerting.md @@ -24,7 +24,7 @@ On the **Alerts** page, the Alerts table provides a snapshot of alerts occurring :::{image} ../../../images/serverless-observability-alerts-overview.png :alt: Summary of Alerts -:class: screenshot +:screenshot: ::: You can filter this table by alert status or time period, customize the visible columns, and search for specific alerts (for example, alerts related to a specific service or environment) using KQL. Select **View alert detail** from the **More actions** menu ![action menu](../../../images/serverless-boxesHorizontal.svg ""), or click the Reason link for any alert to [view alert](../../../solutions/observability/incident-management/view-alerts.md) in detail, and you can then either **View in app** or **View rule details**. diff --git a/solutions/observability/incident-management/cases.md b/solutions/observability/incident-management/cases.md index d28d40eea..0ec02d143 100644 --- a/solutions/observability/incident-management/cases.md +++ b/solutions/observability/incident-management/cases.md @@ -10,5 +10,5 @@ Collect and share information about observability issues by creating a case. 
Cas :::{image} ../../../images/observability-cases.png :alt: Cases page -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/incident-management/configure-access-to-cases.md b/solutions/observability/incident-management/configure-access-to-cases.md index b3307feeb..475b5f014 100644 --- a/solutions/observability/incident-management/configure-access-to-cases.md +++ b/solutions/observability/incident-management/configure-access-to-cases.md @@ -16,7 +16,7 @@ For more details, refer to [feature access based on user privileges](../../../de :::{image} ../../../images/observability-cases-privileges.png :alt: cases privileges -:class: screenshot +:screenshot: ::: Below are the minimum required privileges for some common use cases. diff --git a/solutions/observability/incident-management/configure-case-settings.md b/solutions/observability/incident-management/configure-case-settings.md index f2f08218c..f0ece2025 100644 --- a/solutions/observability/incident-management/configure-case-settings.md +++ b/solutions/observability/incident-management/configure-case-settings.md @@ -18,7 +18,7 @@ To change case closure options and add custom fields, templates, and connectors :::{image} ../../../images/observability-cases-settings.png :alt: View case settings -:class: screenshot +:screenshot: ::: @@ -57,7 +57,7 @@ After creating a connector, you can set your cases to [automatically close](../. :::{image} ../../../images/serverless-observability-cases-add-connector.png :alt: Add a connector to send cases to an external source - :class: screenshot + :screenshot: ::: 3. Enter your required settings. For connector configuration details, refer to: @@ -101,7 +101,7 @@ To create a custom field: :::{image} ../../../images/observability-cases-add-custom-field.png :alt: Add a custom field in case settings - :class: screenshot + :screenshot: ::: 2. You must provide a field label and type (text or toggle). You can optionally designate it as a required field and provide a default value. @@ -126,7 +126,7 @@ To create a template: :::{image} ../../../images/serverless-observability-cases-templates.png :alt: Add a case template - :class: screenshot + :screenshot: ::: 2. You must provide a template name and case severity. You can optionally add template tags and a description, values for each case field, and a case connector. diff --git a/solutions/observability/incident-management/configure-service-level-objective-slo-access.md b/solutions/observability/incident-management/configure-service-level-objective-slo-access.md index a4cc7b821..4ed43f658 100644 --- a/solutions/observability/incident-management/configure-service-level-objective-slo-access.md +++ b/solutions/observability/incident-management/configure-service-level-objective-slo-access.md @@ -45,7 +45,7 @@ Set the following privileges for the SLO Editor role: :::{image} ../../../images/observability-slo-es-priv-editor.png :alt: Cluster and index privileges for SLO Editor role - :class: screenshot + :screenshot: ::: 4. In the **Kibana** section, click **Add Kibana privilege**. @@ -54,7 +54,7 @@ Set the following privileges for the SLO Editor role: :::{image} ../../../images/observability-slo-kibana-priv-all.png :alt: SLO Kibana all privileges - :class: screenshot + :screenshot: ::: 7. Click **Create Role** at the bottom of the page and assign the role to the relevant users. 
@@ -68,7 +68,7 @@ Set the following privileges for the SLO Read role: :::{image} ../../../images/observability-slo-es-priv-viewer.png :alt: Index privileges for SLO Viewer role - :class: screenshot + :screenshot: ::: 2. In the **Kibana** section, click **Add Kibana privilege**. @@ -77,7 +77,7 @@ Set the following privileges for the SLO Read role: :::{image} ../../../images/observability-slo-kibana-priv-read.png :alt: SLO Kibana read privileges - :class: screenshot + :screenshot: ::: 5. Click **Create Role** at the bottom of the page and assign the role to the relevant users. diff --git a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md index 18b47723a..d7362df80 100644 --- a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md +++ b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md @@ -36,7 +36,7 @@ To create an anomaly detection rule: :::{image} ../../../images/serverless-anomaly-detection-alert.png :alt: Anomaly detection alert settings - :class: screenshot + :screenshot: ::: 6. For the result type: @@ -115,14 +115,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if the action runs when the anomaly score matched the condition or was recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the anomaly score was matched and also when it recovers. :::{image} ../../../images/serverless-alert-anomaly-action-frequency-recovered.png :alt: Choose between anomaly score matched condition or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -133,7 +133,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-an-apm-anomaly-rule.md b/solutions/observability/incident-management/create-an-apm-anomaly-rule.md index cebc5f809..3af1d6e32 100644 --- a/solutions/observability/incident-management/create-an-apm-anomaly-rule.md +++ b/solutions/observability/incident-management/create-an-apm-anomaly-rule.md @@ -24,7 +24,7 @@ You can create an anomaly rule to alert you when either the latency, throughput, :::{image} ../../../images/serverless-alerts-create-apm-anomaly.png :alt: Create rule for APM anomaly alert -:class: screenshot +:screenshot: ::: ::::{tip} @@ -93,14 +93,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers.
:::{image} ../../../images/serverless-alert-apm-action-frequency-recovered.png :alt: Choose between threshold met or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -111,7 +111,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md b/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md index a2394a620..16fc612ca 100644 --- a/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md +++ b/solutions/observability/incident-management/create-an-elasticsearch-query-rule.md @@ -31,7 +31,7 @@ When you create an {{es}} query rule, your choice of query type affects the info :::{image} ../../../images/serverless-alerting-rule-types-es-query-conditions.png :alt: Define the condition to detect -:class: screenshot +:screenshot: ::: 1. Define your query @@ -83,14 +83,14 @@ If you use query DSL, KQL, or Lucene, the query runs against the selected indice :::{image} ../../../images/serverless-alerting-rule-types-es-query-valid.png :alt: Test {{es}} query returns number of matches when valid -:class: screenshot +:screenshot: ::: If you use an ES|QL query, a table is displayed. For example: :::{image} ../../../images/serverless-alerting-rule-types-esql-query-valid.png :alt: Test ES|QL query returns a table when valid -:class: screenshot +:screenshot: ::: If the query is not valid, an error occurs. @@ -145,7 +145,7 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/serverless-alerting-es-query-rule-action-summary.png :alt: UI for defining alert summary action in an {{es}} query rule -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency to **For each alert** and specify the conditions each alert must meet for the action to run. @@ -154,7 +154,7 @@ With the **Run when** menu you can choose how often the action runs (at each che :::{image} ../../../images/serverless-alerting-es-query-rule-action-query-matched.png :alt: UI for defining a recovery action -:class: screenshot +:screenshot: ::: You can further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame. @@ -167,7 +167,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). 
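As a purely illustrative sketch of the query-definition step described earlier for this rule type (the index pattern, field, and threshold are assumptions, not values from this guide), an ES|QL query for an {{es}} query rule could look like this:

```
FROM logs-*
| WHERE log.level == "error"
| STATS error_count = COUNT(*) BY host.name
| WHERE error_count > 100
```

The rows returned by the final `WHERE` are what the rule evaluates when deciding whether to alert, and the action variables listed above are then available in the notification message.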
diff --git a/solutions/observability/incident-management/create-an-error-count-threshold-rule.md b/solutions/observability/incident-management/create-an-error-count-threshold-rule.md index de85d551f..98e17053a 100644 --- a/solutions/observability/incident-management/create-an-error-count-threshold-rule.md +++ b/solutions/observability/incident-management/create-an-error-count-threshold-rule.md @@ -20,7 +20,7 @@ Create an error count threshold rule to alert you when the number of errors in a :::{image} ../../../images/serverless-alerts-create-rule-error-count.png :alt: Create rule for error count threshold alert -:class: screenshot +:screenshot: ::: ::::{tip} @@ -91,14 +91,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers. :::{image} ../../../images/serverless-alert-apm-action-frequency-recovered.png :alt: Choose between threshold met or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -109,7 +109,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-an-inventory-rule.md b/solutions/observability/incident-management/create-an-inventory-rule.md index 2b98956e8..c1d38046a 100644 --- a/solutions/observability/incident-management/create-an-inventory-rule.md +++ b/solutions/observability/incident-management/create-an-inventory-rule.md @@ -39,7 +39,7 @@ In this example, Kubernetes Pods is the selected inventory type. The conditions :::{image} ../../../images/serverless-inventory-alert.png :alt: Inventory rule -:class: screenshot +:screenshot: ::: @@ -88,14 +88,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/serverless-action-alert-summary.png :alt: Action types -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you define precisely when the alert is triggered by selecting a specific threshold condition: `Alert`, `Warning`, or `Recovered` (a value that was once above a threshold has now dropped below it). 
:::{image} ../../../images/serverless-inventory-threshold-run-when-selection.png :alt: Configure when an alert is triggered -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -105,7 +105,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/serverless-conditional-alerts.png :alt: Configure a conditional alert -:class: screenshot +:screenshot: ::: ::::: @@ -116,7 +116,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md index f36d44dcc..3e720b290 100644 --- a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md +++ b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md @@ -25,7 +25,7 @@ Choose which SLO to monitor and then define multiple burn rate windows with appr :::{image} ../../../images/serverless-slo-alerts-create-rule.png :alt: Create rule for failed transaction rate threshold -:class: screenshot +:screenshot: ::: ::::{tip} @@ -101,14 +101,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if an action runs for a specific severity (critical, high, medium, low), or when the alert is recovered. For example, you can add a corresponding action for each severity you want an alert for, and also for when the alert recovers. :::{image} ../../../images/serverless-slo-action-frequency.png :alt: Choose between severity or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -119,7 +119,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). 
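As a rough sketch of the arithmetic behind burn rates (the exact windows and thresholds depend on your SLO): a 99% target leaves an error budget of 1% of events. If 5% of events fail during a burn rate window, the budget is being consumed at a burn rate of 5 (the 5% observed error rate divided by the 1% budget), so a threshold of 5 on that window would fire. A burn rate of 1 means the budget would be used up exactly at the end of the SLO time window.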
diff --git a/solutions/observability/incident-management/create-an-slo.md b/solutions/observability/incident-management/create-an-slo.md index 81891901e..272fbe254 100644 --- a/solutions/observability/incident-management/create-an-slo.md +++ b/solutions/observability/incident-management/create-an-slo.md @@ -247,5 +247,5 @@ After you’ve created your SLO, you can monitor it from the *SLOs* page in Obse :::{image} ../../../images/observability-slo-overview-embeddable-widget.png :alt: Using the Add panel button to add an SLO Overview widget to a dashboard -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/incident-management/create-an-uptime-duration-anomaly-rule.md b/solutions/observability/incident-management/create-an-uptime-duration-anomaly-rule.md index 4dccdad39..332df65a0 100644 --- a/solutions/observability/incident-management/create-an-uptime-duration-anomaly-rule.md +++ b/solutions/observability/incident-management/create-an-uptime-duration-anomaly-rule.md @@ -27,7 +27,7 @@ The *anomaly score* is a value from `0` to `100`, which indicates the significan :::{image} ../../../images/observability-response-durations-alert.png :alt: Uptime response duration rule -:class: screenshot +:screenshot: ::: @@ -66,14 +66,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/observability-duration-anomaly-alert-summary.png :alt: Action types -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: `Uptime Duration Anomaly` or `Recovered`. :::{image} ../../../images/observability-duration-anomaly-run-when-selection.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: @@ -83,7 +83,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/observability-duration-anomaly-alert-default-message.png :alt: Default notification message for Uptime duration anomaly rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: @@ -93,6 +93,6 @@ To receive a notification when the alert recovers, select **Run when Recovered** :::{image} ../../../images/observability-duration-anomaly-alert-recovery.png :alt: Default recovery message for Uptime duration anomaly rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/incident-management/create-custom-threshold-rule.md b/solutions/observability/incident-management/create-custom-threshold-rule.md index 537f5bcd2..f865eaf98 100644 --- a/solutions/observability/incident-management/create-custom-threshold-rule.md +++ b/solutions/observability/incident-management/create-custom-threshold-rule.md @@ -24,7 +24,7 @@ Create a custom threshold rule to trigger an alert when an {{obs-serverless}} da :::{image} ../../../images/serverless-custom-threshold-rule.png :alt: Rule details (custom threshold) -:class: screenshot +:screenshot: ::: @@ -181,7 +181,7 @@ After you select a connector, you must set the action frequency. 
You can choose :::{image} ../../../images/serverless-custom-threshold-run-when.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -191,7 +191,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/serverless-logs-threshold-conditional-alert.png :alt: Configure a conditional alert -:class: screenshot +:screenshot: ::: ::::: @@ -202,7 +202,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md b/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md index 55994b9e3..a838d2ec5 100644 --- a/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md +++ b/solutions/observability/incident-management/create-failed-transaction-rate-threshold-rule.md @@ -20,7 +20,7 @@ You can create a failed transaction rate threshold rule to alert you when the ra :::{image} ../../../images/serverless-alerts-create-rule-failed-transaction-rate.png :alt: Create rule for failed transaction rate threshold alert -:class: screenshot +:screenshot: ::: ::::{tip} @@ -91,14 +91,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers. :::{image} ../../../images/serverless-alert-apm-action-frequency-recovered.png :alt: Choose between threshold met or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -109,7 +109,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). 
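To put the failed transaction rate threshold in concrete terms (the numbers are illustrative): with a threshold of 30% over the rule's window, a service recording 50 failed transactions out of 1,000 has a rate of 5% and does not trigger the rule, while 400 failures out of 1,000 (a rate of 40%) does.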
diff --git a/solutions/observability/incident-management/create-latency-threshold-rule.md b/solutions/observability/incident-management/create-latency-threshold-rule.md index 51accd080..db0e30a05 100644 --- a/solutions/observability/incident-management/create-latency-threshold-rule.md +++ b/solutions/observability/incident-management/create-latency-threshold-rule.md @@ -20,7 +20,7 @@ You can create a latency threshold rule to alert you when the latency of a speci :::{image} ../../../images/serverless-alerts-create-rule-apm-latency-threshold.png :alt: Create rule for APM latency threshold alert -:class: screenshot +:screenshot: ::: ::::{tip} @@ -95,14 +95,14 @@ Alternatively, you can set the action frequency to **For each alert** and specif :::{image} ../../../images/serverless-alert-action-frequency.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: With the **Run when** menu you can choose if an action runs when the threshold for an alert is reached, or when the alert is recovered. For example, you can add a corresponding action for each state to ensure you are alerted when the rule is triggered and also when it recovers. :::{image} ../../../images/serverless-alert-apm-action-frequency-recovered.png :alt: Choose between threshold met or recovered -:class: screenshot +:screenshot: ::: ::::: @@ -113,7 +113,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-action-variables-popup.png :alt: Action variables list -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You can also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-log-threshold-rule.md b/solutions/observability/incident-management/create-log-threshold-rule.md index 10fccaad6..0de6dcb57 100644 --- a/solutions/observability/incident-management/create-log-threshold-rule.md +++ b/solutions/observability/incident-management/create-log-threshold-rule.md @@ -11,7 +11,7 @@ mapped_pages: :::{image} ../../../images/observability-log-threshold-alert.png :alt: Log threshold alert configuration -:class: screenshot +:screenshot: ::: @@ -62,7 +62,7 @@ When group by fields are selected, but no documents contain the selected field(s :::{image} ../../../images/observability-log-threshold-alert-group-by.png :alt: Log threshold rule group by -:class: screenshot +:screenshot: ::: @@ -72,7 +72,7 @@ To determine how many log entries would match each part of your configuration, y :::{image} ../../../images/observability-log-threshold-alert-chart-previews.png :alt: Log threshold chart previews -:class: screenshot +:screenshot: ::: The shaded area denotes the threshold that has been selected. @@ -86,7 +86,7 @@ The following example triggers an alert when there are twice as many error logs :::{image} ../../../images/observability-log-threshold-alert-ratio.png :alt: Log threshold ratio rule -:class: screenshot +:screenshot: ::: ::::{important} @@ -129,7 +129,7 @@ After you select a connector, you must set the action frequency. 
You can choose :::{image} ../../../images/observability-log-threshold-run-when-selection.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -139,7 +139,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/observability-logs-threshold-conditional-alert.png :alt: Configure a conditional alert -:class: screenshot +:screenshot: ::: @@ -149,7 +149,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/observability-logs-threshold-alert-default-message.png :alt: Default notification message for log threshold rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). @@ -193,7 +193,7 @@ When a rule check is performed, a query is built based on the configuration of t :::{image} ../../../images/observability-log-threshold-alert-es-query-ungrouped.png :alt: Log threshold ungrouped {{es}} query example -:class: screenshot +:screenshot: ::: ```json @@ -245,7 +245,7 @@ When a rule check is performed, a query is built based on the configuration of t :::{image} ../../../images/observability-log-threshold-alert-es-query-grouped.png :alt: Log threshold grouped {{es}} query example -:class: screenshot +:screenshot: ::: ```json diff --git a/solutions/observability/incident-management/create-manage-cases.md b/solutions/observability/incident-management/create-manage-cases.md index fb9d149de..f0eceea93 100644 --- a/solutions/observability/incident-management/create-manage-cases.md +++ b/solutions/observability/incident-management/create-manage-cases.md @@ -48,7 +48,7 @@ After you create a case, you can upload and manage files on the **Files** tab: :::{image} ../../../images/serverless-cases-files-tab.png :alt: A list of files attached to a case -:class: screenshot +:screenshot: ::: To download or delete the file or copy the file hash to your clipboard, open the action menu (…). The available hash functions are MD5, SHA-1, and SHA-256. 
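If you need to verify a downloaded case attachment, you can compare the copied hash against one computed locally, for example with `sha256sum <downloaded-file>` on Linux (the file name is a placeholder).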
diff --git a/solutions/observability/incident-management/create-manage-rules.md b/solutions/observability/incident-management/create-manage-rules.md index 88c2d043c..5e7bb4905 100644 --- a/solutions/observability/incident-management/create-manage-rules.md +++ b/solutions/observability/incident-management/create-manage-rules.md @@ -48,7 +48,7 @@ After a rule is created, you can open the **More actions** menu ![More actions]( :::{image} ../../../images/serverless-alerts-edit-rule.png :alt: Edit rule (failed transaction rate) -:class: screenshot +:screenshot: ::: From the action menu you can also: @@ -66,7 +66,7 @@ Click on an individual rule on the **{{rules-app}}** page to view details includ :::{image} ../../../images/serverless-alerts-detail-apm-anomaly.png :alt: Rule details (APM anomaly) -:class: screenshot +:screenshot: ::: A rule can have one of the following responses: diff --git a/solutions/observability/incident-management/create-metric-threshold-rule.md b/solutions/observability/incident-management/create-metric-threshold-rule.md index 05c0172b8..490083a53 100644 --- a/solutions/observability/incident-management/create-metric-threshold-rule.md +++ b/solutions/observability/incident-management/create-metric-threshold-rule.md @@ -30,7 +30,7 @@ In this example, the conditions state that you will receive a critical alert for :::{image} ../../../images/observability-metrics-alert.png :alt: Metric threshold alert -:class: screenshot +:screenshot: ::: When you select **Alert me if there’s no data**, the rule is triggered if the metrics don’t report any data over the expected time period, or if the rule fails to query {{es}}. @@ -40,7 +40,7 @@ When you select **Alert me if there’s no data**, the rule is triggered if the :::{image} ../../../images/observability-metrics-alert-filters-and-group.png :alt: Metric threshold filter and group fields -:class: screenshot +:screenshot: ::: The **Filters** control the scope of the rule. If used, the rule will only evaluate metric data that matches the query in this field. In this example, the rule will only alert on metrics reported from a Cloud region called `us-east`. @@ -91,14 +91,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/observability-action-alert-summary.png :alt: Action types -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: `Alert`, `Warning`, `No data`, or `Recovered` (a value that was once above a threshold has now dropped below it). :::{image} ../../../images/observability-metrics-threshold-run-when-selection.png :alt: Configure when a rule is triggered -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -108,7 +108,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/observability-metric-threshold-conditional-alerts.png :alt: Configure a conditional alert -:class: screenshot +:screenshot: ::: @@ -118,7 +118,7 @@ Use the default notification message or customize it. 
You can add more context t :::{image} ../../../images/observability-metrics-threshold-alert-default-message.png :alt: Default notification message for metric threshold rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). diff --git a/solutions/observability/incident-management/create-monitor-status-rule.md b/solutions/observability/incident-management/create-monitor-status-rule.md index b4938d2d1..1de6c45b4 100644 --- a/solutions/observability/incident-management/create-monitor-status-rule.md +++ b/solutions/observability/incident-management/create-monitor-status-rule.md @@ -21,7 +21,7 @@ The **Filter by** section controls the scope of the rule. The rule will only che :::{image} ../../../images/serverless-synthetic-monitor-filters.png :alt: Filter by section of the Synthetics monitor status rule -:class: screenshot +:screenshot: ::: @@ -43,7 +43,7 @@ In this example, the conditions will be met any time a `browser` monitor is down :::{image} ../../../images/serverless-synthetic-monitor-conditions.png :alt: Filters and conditions defining a Synthetics monitor status rule -:class: screenshot +:screenshot: ::: @@ -81,14 +81,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/serverless-synthetic-monitor-action-types-summary.png :alt: synthetic monitor action types summary -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: the *Synthetics monitor status* changes or when it is *Recovered* (went from down to up). :::{image} ../../../images/serverless-synthetic-monitor-action-types-each-alert.png :alt: synthetic monitor action types each alert -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -98,7 +98,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/serverless-synthetic-monitor-action-types-more-options.png :alt: synthetic monitor action types more options -:class: screenshot +:screenshot: ::: @@ -108,7 +108,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/serverless-synthetic-monitor-action-variables.png :alt: synthetic monitor action variables -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). @@ -202,7 +202,7 @@ This rule covers all the monitors you have running. You can use a query to speci :::{image} ../../../images/observability-monitor-status-alert.png :alt: Monitor status rule -:class: screenshot +:screenshot: ::: The final step when creating a rule is to select one or more actions to take when the alert is triggered. 
@@ -216,21 +216,21 @@ You can configure action types on the [Settings](../../../solutions/observabilit :::{image} ../../../images/observability-uptime-alert-connectors.png :alt: Uptime rule connectors -:class: screenshot +:screenshot: ::: After you select a connector, you must set the action frequency. You can choose to create a summary of alerts on each check interval or on a custom interval. For example, send email notifications that summarize the new, ongoing, and recovered alerts each hour: :::{image} ../../../images/observability-action-alert-summary.png :alt: Action frequency summary of alerts -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: `Uptime Down Monitor` or `Recovered`. :::{image} ../../../images/observability-uptime-run-when-selection.png :alt: Action frequency for each alert -:class: screenshot +:screenshot: ::: @@ -240,7 +240,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/observability-monitor-status-alert-default-message.png :alt: Default notification message for monitor status rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: @@ -250,7 +250,7 @@ To receive a notification when the alert recovers, select **Run when Recovered** :::{image} ../../../images/observability-monitor-status-alert-recovery.png :alt: Default recovery message for monitor status rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/incident-management/create-tls-certificate-rule.md b/solutions/observability/incident-management/create-tls-certificate-rule.md index 02fcc986b..97bbc8af9 100644 --- a/solutions/observability/incident-management/create-tls-certificate-rule.md +++ b/solutions/observability/incident-management/create-tls-certificate-rule.md @@ -39,7 +39,7 @@ In this example, the conditions are met when any of the TLS certificates on site :::{image} ../../../images/observability-tls-rule-synthetics-conditions.png :alt: Conditions and advanced options defining a Synthetics TLS certificate rule -:class: screenshot +:screenshot: ::: @@ -76,14 +76,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/observability-tls-rule-synthetics-action-types-summary.png :alt: tls rule synthetics action types summary -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: the *Synthetics TLS certificate* changes or when it is *Recovered* (went from down to up). 
:::{image} ../../../images/observability-tls-rule-synthetics-action-types-each-alert.png :alt: tls rule synthetics action types each alert -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -93,7 +93,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/observability-tls-rule-synthetics-action-types-more-options.png :alt: tls rule synthetics action types more options -:class: screenshot +:screenshot: ::: @@ -103,7 +103,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/observability-tls-rule-synthetics-action-variables.png :alt: tls rule synthetics action variables -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). @@ -182,7 +182,7 @@ In this example, the conditions are met when any of the TLS certificates on site :::{image} ../../../images/observability-tls-rule-uptime-conditions.png :alt: Monitor status rule -:class: screenshot +:screenshot: ::: @@ -221,14 +221,14 @@ After you select a connector, you must set the action frequency. You can choose :::{image} ../../../images/observability-tls-rule-uptime-action-types-summary.png :alt: tls rule uptime action types summary -:class: screenshot +:screenshot: ::: Alternatively, you can set the action frequency such that you choose how often the action runs (for example, at each check interval, only when the alert status changes, or at a custom action interval). In this case, you must also select the specific threshold condition that affects when actions run: *Uptime TLS Alert* or *Recovered* (went from down to up). :::{image} ../../../images/observability-tls-rule-uptime-action-types-each-alert.png :alt: tls rule uptime action types each alert -:class: screenshot +:screenshot: ::: You can also further refine the conditions under which actions run by specifying that actions only run when they match a KQL query or when an alert occurs within a specific time frame: @@ -238,7 +238,7 @@ You can also further refine the conditions under which actions run by specifying :::{image} ../../../images/observability-tls-rule-uptime-action-types-more-options.png :alt: tls rule uptime action types more options -:class: screenshot +:screenshot: ::: @@ -248,7 +248,7 @@ Use the default notification message or customize it. You can add more context t :::{image} ../../../images/observability-tls-rule-uptime-default-message.png :alt: Default notification message for TLS rules with open "Add variable" popup listing available action variables -:class: screenshot +:screenshot: ::: The following variables are specific to this rule type. You an also specify [variables common to all rules](../../../explore-analyze/alerts-cases/alerts/rule-action-variables.md). 
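As an example of the KQL refinement described above, an action could be limited to certificates observed by a single monitor with a query such as `monitor.name : "production-website"` (the monitor name is a placeholder).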
diff --git a/solutions/observability/incident-management/service-level-objectives-slos.md b/solutions/observability/incident-management/service-level-objectives-slos.md index b0af89c92..8048f2316 100644 --- a/solutions/observability/incident-management/service-level-objectives-slos.md +++ b/solutions/observability/incident-management/service-level-objectives-slos.md @@ -37,7 +37,7 @@ From the SLO overview, you can see all of your SLOs and a quick summary of what :::{image} ../../../images/observability-slo-dashboard.png :alt: Dashboard showing list of SLOs -:class: screenshot +:screenshot: ::: Select an SLO from the overview to see additional details including: @@ -49,7 +49,7 @@ Select an SLO from the overview to see additional details including: :::{image} ../../../images/serverless-slo-detailed-view.png :alt: Detailed view of a single SLO -:class: screenshot +:screenshot: ::: @@ -59,7 +59,7 @@ You can apply searches and filters to quickly find the SLOs you’re interested :::{image} ../../../images/serverless-slo-filtering-options.png :alt: Options for filtering SLOs in the overview -:class: screenshot +:screenshot: ::: * **Apply structured filters:** Next to the search field, click the **Add filter** ![Add filter icon](../../../images/serverless-plusInCircleFilled.svg "") icon to add a custom filter. Notice that you can use `OR` and `AND` to combine filters. The structured filter can be disabled, inverted, or pinned across all apps. @@ -70,7 +70,7 @@ There are also options to sort and group the SLOs displayed in the overview: :::{image} ../../../images/serverless-slo-group-by.png :alt: SLOs sorted by SLO status and grouped by tags -:class: screenshot +:screenshot: ::: * **Sort by**: SLI value, SLO status, Error budget consumed, or Error budget remaining. @@ -89,7 +89,7 @@ Available SLO panels include: :::{image} ../../../images/serverless-slo-dashboard-panel.png :alt: Detailed view of an SLO dashboard panel -:class: screenshot +:screenshot: ::: To learn more about Dashboards, see [Dashboards](../../../solutions/observability/get-started/get-started-with-dashboards.md). 
diff --git a/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md b/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md index 6b6523ac7..a60ea5462 100644 --- a/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md +++ b/solutions/observability/incident-management/triage-slo-burn-rate-breaches.md @@ -24,7 +24,7 @@ Explore charts on the page to learn more about the SLO breach: :::{image} ../../../images/observability-slo-burn-rate-breach.png :alt: Alert details for SLO burn rate breach - :class: screenshot + :screenshot: ::: ::::{tip} @@ -36,7 +36,7 @@ Explore charts on the page to learn more about the SLO breach: :::{image} ../../../images/observability-log-threshold-breach-alert-history-chart.png :alt: Alert history chart in alert details for SLO burn rate breach - :class: screenshot + :screenshot: ::: diff --git a/solutions/observability/incident-management/triage-threshold-breaches.md b/solutions/observability/incident-management/triage-threshold-breaches.md index 113293628..afbee98a3 100644 --- a/solutions/observability/incident-management/triage-threshold-breaches.md +++ b/solutions/observability/incident-management/triage-threshold-breaches.md @@ -24,7 +24,7 @@ Explore charts on the page to learn more about the threshold breach: :::{image} ../../../images/observability-log-threshold-breach-condition-chart.png :alt: Chart for a condition in alert details for log threshold breach - :class: screenshot + :screenshot: ::: ::::{tip} @@ -36,14 +36,14 @@ Explore charts on the page to learn more about the threshold breach: :::{image} ../../../images/observability-log-threshold-breach-log-rate-analysis.png :alt: Log rate analysis chart in alert details for log threshold breach - :class: screenshot + :screenshot: ::: * **Alerts history chart**. The next chart provides information about alerts for the same rule and group over the last 30 days. It shows the number of those alerts that were triggered per day, the total number of alerts triggered throughout the 30 days, and the average time it took to recover after a breach. 
:::{image} ../../../images/observability-log-threshold-breach-alert-history-chart.png :alt: Alert history chart in alert details for log threshold breach - :class: screenshot + :screenshot: ::: diff --git a/solutions/observability/incident-management/view-alerts.md b/solutions/observability/incident-management/view-alerts.md index fc9f6a129..e42afd5c3 100644 --- a/solutions/observability/incident-management/view-alerts.md +++ b/solutions/observability/incident-management/view-alerts.md @@ -23,7 +23,7 @@ You can centrally manage rules from the [{{kib}} Management UI](../../../explore :::{image} ../../../images/serverless-observability-alerts-view.png :alt: Alerts page -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ From the **Alerts** table, you can click on a specific alert to open the alert d :::{image} ../../../images/serverless-alert-details-flyout.png :alt: Alerts detail (APM anomaly) -:class: screenshot +:screenshot: ::: There are three common alert statuses: diff --git a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md index 12b209755..b2f6d6a17 100644 --- a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md +++ b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md @@ -29,7 +29,7 @@ You also need to copy the **Symbols** endpoint from the deployment overview page :::{image} ../../../images/observability-profiling-symbolizer-url.png :alt: profiling symbolizer url -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md index 8ddb9ef4e..e3e7d8bfb 100644 --- a/solutions/observability/infra-and-hosts/analyze-compare-hosts.md +++ b/solutions/observability/infra-and-hosts/analyze-compare-hosts.md @@ -20,7 +20,7 @@ To open **Hosts**, find **Infrastructure** in the main menu or use the [global s :::{image} ../../../images/serverless-hosts.png :alt: Screenshot of the Hosts page -:class: screenshot +:screenshot: ::: To learn more about the metrics shown on this page, refer to the [Metrics reference](https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html) documentation. @@ -88,7 +88,7 @@ Metrics visualizations are powered by Lens, meaning you can continue your analys :::{image} ../../../images/serverless-hosts-open-in-lens.png :alt: Screenshot showing option to open in Lens -:class: screenshot +:screenshot: ::: In Lens, you can examine all the fields and formulas used to create the visualization, make modifications to the visualization, and save your changes. @@ -102,7 +102,7 @@ On the **Logs** tab of the **Hosts** page, view logs for the systems you are mon :::{image} ../../../images/serverless-hosts-logs.png :alt: Screenshot showing Logs view -:class: screenshot +:screenshot: ::: To see logs for a specific host, refer to [View host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details). @@ -120,7 +120,7 @@ From the **Actions** menu, you can choose to: :::{image} ../../../images/serverless-hosts-view-alerts.png :alt: Screenshot showing Alerts view -:class: screenshot +:screenshot: ::: To see alerts for a specific host, refer to [View host details](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md#view-host-details). @@ -164,7 +164,7 @@ Click **Show all** to drill down into related data. 
:::{image} ../../../images/serverless-overview-overlay.png :alt: Host overview -:class: screenshot +:screenshot: ::: ::::: @@ -177,7 +177,7 @@ This information can help when investigating events—for example, when filterin :::{image} ../../../images/serverless-metadata-overlay.png :alt: Host metadata -:class: screenshot +:screenshot: ::: ::::: @@ -188,7 +188,7 @@ The **Metrics** tab shows host metrics organized by type and is more complete th :::{image} ../../../images/serverless-metrics-overlay.png :alt: Metrics -:class: screenshot +:screenshot: ::: ::::: @@ -219,7 +219,7 @@ The processes listed in the **Top processes** table are based on an aggregation :::{image} ../../../images/serverless-processes-overlay.png :alt: Host processes -:class: screenshot +:screenshot: ::: ::::: @@ -238,7 +238,7 @@ For more on Universal Profiling, refer to the [Universal Profiling](../../../sol :::{image} ../../../images/observability-universal-profiling-overlay.png :alt: Host Universal Profiling -:class: screenshot +:screenshot: ::: ::::: @@ -256,7 +256,7 @@ To view the logs in the {{logs-app}} for a detailed analysis, click **Open in Lo :::{image} ../../../images/serverless-logs-overlay.png :alt: Host logs -:class: screenshot +:screenshot: ::: ::::: @@ -271,7 +271,7 @@ To drill down and analyze the metric anomaly, select **Actions** → **Open in A :::{image} ../../../images/serverless-anomalies-overlay.png :alt: Anomalies -:class: screenshot +:screenshot: ::: ::::: @@ -311,7 +311,7 @@ Other options include: :::{image} ../../../images/serverless-osquery-overlay.png :alt: Osquery -:class: screenshot +:screenshot: ::: ::::: @@ -339,7 +339,7 @@ In this example, the data emission rate is lower than the Lens chart interval. A :::{image} ../../../images/serverless-hosts-dashed.png :alt: Screenshot showing dashed chart -:class: screenshot +:screenshot: ::: The chart interval is automatically set depending on the selected time duration. To fix this problem, change the selected time range at the top of the page. @@ -357,7 +357,7 @@ A solid line indicates that the chart interval is set appropriately for the data :::{image} ../../../images/serverless-hosts-missing-data.png :alt: Screenshot showing missing data -:class: screenshot +:screenshot: ::: @@ -369,7 +369,7 @@ This missing data can be hard to spot at first glance. The green boxes outline r :::{image} ../../../images/serverless-hosts-dashed-and-missing.png :alt: Screenshot showing dashed lines and missing data -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/detect-metric-anomalies.md b/solutions/observability/infra-and-hosts/detect-metric-anomalies.md index 23a895601..42e7c4c70 100644 --- a/solutions/observability/infra-and-hosts/detect-metric-anomalies.md +++ b/solutions/observability/infra-and-hosts/detect-metric-anomalies.md @@ -71,7 +71,7 @@ After creating {{ml}} jobs, you cannot change the settings. You can recreate the :::{image} ../../../images/serverless-metrics-ml-jobs.png :alt: Infrastructure {{ml-app}} anomalies -:class: screenshot +:screenshot: ::: The **Anomalies** table displays a list of each single metric {{anomaly-detect}} job for the specific host or Kubernetes pod. By default, anomaly jobs are sorted by time to show the most recent job. 
@@ -93,5 +93,5 @@ On the **Infrastructure inventory** page, click **Show history** to view the met :::{image} ../../../images/serverless-metrics-history-chart.png :alt: History -:class: screenshot +:screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md index 7bc1dba26..3106a2ac6 100644 --- a/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md +++ b/solutions/observability/infra-and-hosts/explore-infrastructure-metrics-over-time.md @@ -13,7 +13,7 @@ To open **Metrics Explorer**, find **Infrastructure** in the main menu or use th :::{image} ../../../images/observability-metrics-explorer.png :alt: Metrics Explorer -:class: screenshot +:screenshot: ::: To learn more about the metrics shown on this page, refer to the [Metrics reference](/reference/data-analysis/observability/index.md) documentation. @@ -40,7 +40,7 @@ As an example, let’s view the system load metrics for hosts we’re currently :::{image} ../../../images/observability-metrics-explorer-filter.png :alt: Metrics Explorer query - :class: screenshot + :screenshot: ::: 3. Select **Actions** in the top right-hand corner of one of the graphs and then click **Add filter**. @@ -59,7 +59,7 @@ As an example, let’s view the system load metrics for hosts we’re currently :::{image} ../../../images/observability-metrics-time-series.png :alt: Time series chart - :class: screenshot + :screenshot: ::: The `derivative` aggregation is used to calculate the difference between each bucket. By default, the value of units is automatically set to `1s`, along with the `positive only` aggregation. diff --git a/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md b/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md index b783bf9e5..4b46870dc 100644 --- a/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md +++ b/solutions/observability/infra-and-hosts/get-started-with-system-metrics.md @@ -62,14 +62,14 @@ In this step, add the System integration to monitor host logs and metrics. :::{image} ../../../images/observability-kibana-agent-add-log-path.png :alt: Configuration page for adding log paths to the {{agent}} System integration - :class: screenshot + :screenshot: ::: 6. Click **Save and continue**. This step takes a minute or two to complete. When it’s done, you’ll have an agent policy that contains a system integration policy for the configuration you just specified. :::{image} ../../../images/observability-kibana-system-policy.png :alt: Configuration page for adding the {{agent}} System integration - :class: screenshot + :screenshot: ::: 7. In the popup, click **Add {{agent}} to your hosts** to open the **Add agent** flyout. @@ -106,7 +106,7 @@ The **Add agent** flyout has two options: **Enroll in {{fleet}}** and **Run stan :::{image} ../../../images/observability-kibana-agent-flyout.png :alt: Add agent flyout in {{kib}} - :class: screenshot + :screenshot: ::: It takes about a minute for {{agent}} to enroll in {{fleet}}, download the configuration specified in the policy you just created, and start collecting data. 
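The flyout generates the exact commands for your operating system. In broad strokes, enrolling a Linux host means downloading and extracting the {{agent}} archive and then running `sudo ./elastic-agent install` with the `--url` and `--enrollment-token` values shown in the flyout.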
diff --git a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md index 2e443504d..ee95f638a 100644 --- a/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md +++ b/solutions/observability/infra-and-hosts/get-started-with-universal-profiling.md @@ -70,7 +70,7 @@ After enabling Universal Profiling on your deployment for the first time, select :::{image} ../../../images/observability-profiling-setup-popup.png :alt: profiling setup popup -:class: screenshot +:screenshot: ::: Click **Set up Universal Profiling** to configure data ingestion. @@ -120,7 +120,7 @@ To install the Universal Profiling Agent using the {{agent}} and the Universal P :::{image} ../../../images/observability-profiling-elastic-agent.png :alt: profiling elastic agent - :class: screenshot + :screenshot: ::: 2. Click `Manage Universal Profiling Agent in Fleet` to complete the integration. @@ -132,7 +132,7 @@ To install the Universal Profiling Agent using the {{agent}} and the Universal P :::{image} ../../../images/observability-profililing-elastic-agent-creds.png :alt: profililing elastic agent creds - :class: screenshot + :screenshot: ::: 5. Click **Save and continue**. @@ -148,7 +148,7 @@ The following is an example of the provided instructions for {{k8s}}: :::{image} ../../../images/observability-profiling-k8s-hostagent.png :alt: profiling k8s hostagent -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/run-universal-profiling-on-self-hosted-elastic-stack.md b/solutions/observability/infra-and-hosts/run-universal-profiling-on-self-hosted-elastic-stack.md index a2a47a026..3b3ef1a16 100644 --- a/solutions/observability/infra-and-hosts/run-universal-profiling-on-self-hosted-elastic-stack.md +++ b/solutions/observability/infra-and-hosts/run-universal-profiling-on-self-hosted-elastic-stack.md @@ -59,7 +59,7 @@ The backend is made up of two services: the collector and the symbolizer. :::{image} ../../../images/observability-profiling-self-managed-ingestion-architecture.png :alt: profiling self managed ingestion architecture -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md index 23645f304..20c4d0dca 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-kubernetes-deployments.md @@ -457,7 +457,7 @@ After configuring your integration, you need to download and update your manifes :::{image} ../../../images/observability-run-standalone-option.png :alt: Select run standalone under Enroll in Fleet - :class: screenshot + :screenshot: ::: 4. Under **Configure the agent**, select **Download Manifest**. @@ -500,7 +500,7 @@ On the **Infrastructure inventory** page, you can switch between different views :::{image} ../../../images/observability-metrics-inventory.png :alt: Inventory page that shows Kubernetes pods -:class: screenshot +:screenshot: ::: For more on using the Inventory page, refer to [View infrastructure metrics by resource type](view-infrastructure-metrics-by-resource-type.md). 
@@ -509,7 +509,7 @@ On the **Metrics Explorer** page, you can group and analyze metrics for the reso :::{image} ../../../images/observability-monitor-k8s-metrics-explorer.png :alt: Metrics dashboard that shows CPU usage for Kubernetes pods -:class: screenshot +:screenshot: ::: For more on using the **Metrics Explorer** page, refer to [Explore infrastructure metrics over time](explore-infrastructure-metrics-over-time.md). @@ -1156,7 +1156,7 @@ The **Applications** app allows you to monitor your software services and applic :::{image} ../../../images/observability-apm-app-landing.png :alt: Applications UI Kubernetes -:class: screenshot +:screenshot: ::: Having access to application-level insights with just a few clicks can drastically decrease the time you spend debugging errors, slow response times, and crashes. @@ -1165,7 +1165,7 @@ Best of all, because Kubernetes environment variables have been mapped to APM me :::{image} ../../../images/observability-apm-app-kubernetes-filter.png :alt: Applications UI Kubernetes -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md index 7d170e553..9081474f4 100644 --- a/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md +++ b/solutions/observability/infra-and-hosts/tutorial-observe-nginx-instances.md @@ -189,7 +189,7 @@ The **Metrics Nginx overview** shows visual representations of total requests, p :::{image} ../../../images/observability-nginx-metrics-dashboard.png :alt: nginx metrics dashboard -:class: screenshot +:screenshot: ::: @@ -223,14 +223,14 @@ The **Nginx logs overview** dashboard shows visual representations of geographic :::{image} ../../../images/observability-nginx-logs-overview-dashboard.png :alt: nginx logs overview dashboard -:class: screenshot +:screenshot: ::: The **Nginx access and error logs** dashboard shows your access logs over time, and lists your access and error logs. 
:::{image} ../../../images/observability-nginx-logs-access-error-dashboard.png :alt: nginx access and error logs dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/understanding-no-results-found-message.md b/solutions/observability/infra-and-hosts/understanding-no-results-found-message.md index ad7bd4750..ca8732aac 100644 --- a/solutions/observability/infra-and-hosts/understanding-no-results-found-message.md +++ b/solutions/observability/infra-and-hosts/understanding-no-results-found-message.md @@ -37,5 +37,5 @@ This could be for any of these reasons: :::{image} ../../../images/serverless-turn-on-system-metrics.png :alt: Screenshot showing system cpu and diskio metrics selected for collection - :class: screenshot + :screenshot: ::: \ No newline at end of file diff --git a/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md b/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md index 304fb1ce0..dbefdb8bf 100644 --- a/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md +++ b/solutions/observability/infra-and-hosts/universal-profiling-index-life-cycle-management.md @@ -84,7 +84,7 @@ To apply a custom {{ilm-init}} policy, you must name the component template `pro :::{image} ../../../images/observability-profiling-create-component-template.png :alt: Create component template - :class: screenshot + :screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/universal-profiling.md b/solutions/observability/infra-and-hosts/universal-profiling.md index 6fa6fa0b0..3c52f8754 100644 --- a/solutions/observability/infra-and-hosts/universal-profiling.md +++ b/solutions/observability/infra-and-hosts/universal-profiling.md @@ -44,7 +44,7 @@ Adding symbols for unsymbolized frames is currently a manual operation. See [Add :::{image} ../../../images/observability-profiling-stacktraces-unsymbolized.png :alt: profiling stacktraces unsymbolized -:class: screenshot +:screenshot: ::: @@ -54,7 +54,7 @@ The stacktraces view shows graphs of stacktraces grouped by threads, traces, hos :::{image} ../../../images/observability-profiling-stacktraces-default-view.png :alt: profiling stacktraces default view -:class: screenshot +:screenshot: ::: @@ -85,7 +85,7 @@ Below the top graph, there are individual graphs that show the individual trend- :::{image} ../../../images/observability-profiling-stacktraces-smaller-graphs.png :alt: profiling stacktraces smaller graphs -:class: screenshot +:screenshot: ::: The percentage displayed in the top-right corner of every individual graph is the relative number of occurrences of every time over the total of samples in the group. 
@@ -101,7 +101,7 @@ In the **Traces** tab, clicking **Show more** at the bottom of one of the indivi :::{image} ../../../images/observability-profiling-stacktraces-show-more.png :alt: profiling stacktraces show more -:class: screenshot +:screenshot: ::: @@ -111,7 +111,7 @@ The flamegraph view groups hierarchical data (stacktraces) into rectangles stack :::{image} ../../../images/observability-profiling-flamegraph-view.png :alt: profiling flamegraph view -:class: screenshot +:screenshot: ::: @@ -143,7 +143,7 @@ Hovering your mouse over a rectangle in the flamegraph displays the frame’s de :::{image} ../../../images/observability-profiling-flamegraph-detailed-view.png :alt: profiling flamegraph detailed view -:class: screenshot +:screenshot: ::: Below the graph area, you can use the search bar to find specific text in the flamegraph; here you can search binaries, function or file names, and move over the occurrences. @@ -155,7 +155,7 @@ The functions view presents an ordered list of functions that Universal Profilin :::{image} ../../../images/observability-profiling-functions-default-view.png :alt: profiling functions default view -:class: screenshot +:screenshot: ::: @@ -184,14 +184,14 @@ In differential functions, the right-most column of functions has green or orang :::{image} ../../../images/observability-profiling-functions-differential-view.png :alt: profiling functions differential view -:class: screenshot +:screenshot: ::: In differential flamegraphs, the difference with the baseline is highlighted with color and hue. A vivid green colored rectangle indicates that a frame has been seen in *less* samples compared to the baseline, which means an improvement. A vivid red colored rectangle indicates a frame has been seen in more samples being recorded on CPU, indicating a potential performance regression. :::{image} ../../../images/observability-profiling-flamegraph-differential-view.png :alt: profiling flamegraph differential view -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md index f21e0d9eb..95727b6bd 100644 --- a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md +++ b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md @@ -12,7 +12,7 @@ To open **Infrastructure inventory**, find **Infrastructure** in the main menu o :::{image} ../../../images/observability-metrics-app.png :alt: Infrastructure UI in {kib} -:class: screenshot +:screenshot: ::: To learn more about the metrics shown on this page, refer to the [Metrics reference](https://www.elastic.co/guide/en/serverless/current/observability-metrics-reference.html). @@ -42,7 +42,7 @@ You can sort by resource, group the resource by specific fields related to it, a :::{image} ../../../images/observability-kubernetes-filter.png :alt: Kubernetes pod filtering -:class: screenshot +:screenshot: ::: You can also use the search bar to create structured queries using [{{kib}} Query Language](../../../explore-analyze/query-filter/languages/kql.md). For example, enter `host.hostname : "host1"` to view only the information for `host1`. @@ -77,7 +77,7 @@ Click **Show all** to drill down into related data. 
:::{image} ../../../images/observability-overview-overlay.png :alt: Host overview -:class: screenshot +:screenshot: ::: ::::: @@ -90,7 +90,7 @@ This information can help when investigating events—for example, when filterin :::{image} ../../../images/observability-metadata-overlay.png :alt: Host metadata -:class: screenshot +:screenshot: ::: ::::: @@ -101,7 +101,7 @@ The **Metrics** tab shows host metrics organized by type and is more complete th :::{image} ../../../images/serverless-metrics-overlay.png :alt: Metrics -:class: screenshot +:screenshot: ::: ::::: @@ -132,7 +132,7 @@ The processes listed in the **Top processes** table are based on an aggregation :::{image} ../../../images/serverless-processes-overlay.png :alt: Host processes -:class: screenshot +:screenshot: ::: % Stateful only for Profiling @@ -151,7 +151,7 @@ For more on Universal Profiling, refer to the [Universal Profiling](../../../sol :::{image} ../../../images/observability-universal-profiling-overlay.png :alt: Host Universal Profiling -:class: screenshot +:screenshot: ::: ::::: @@ -169,7 +169,7 @@ To view the logs in the {{logs-app}} for a detailed analysis, click **Open in Lo :::{image} ../../../images/observability-logs-overlay.png :alt: Host logs -:class: screenshot +:screenshot: ::: ::::: @@ -184,7 +184,7 @@ To drill down and analyze the metric anomaly, select **Actions** → **Open in A :::{image} ../../../images/serverless-anomalies-overlay.png :alt: Anomalies -:class: screenshot +:screenshot: ::: ::::: @@ -225,7 +225,7 @@ Other options include: :::{image} ../../../images/observability-osquery-overlay.png :alt: Osquery -:class: screenshot +:screenshot: ::: ::::: @@ -275,7 +275,7 @@ Click **Show all** to drill down into related data. :::{image} ../../../images/observability-overview-overlay-containers.png :alt: Container overview -:class: screenshot +:screenshot: ::: ::::: @@ -292,7 +292,7 @@ All of this information can help when investigating events—for example, filter :::{image} ../../../images/observability-metadata-overlay-containers.png :alt: Container metadata -:class: screenshot +:screenshot: ::: ::::: @@ -303,7 +303,7 @@ The **Metrics** tab shows container metrics organized by type. :::{image} ../../../images/observability-metrics-overlay-containers.png :alt: Metrics -:class: screenshot +:screenshot: ::: ::::: @@ -321,7 +321,7 @@ To view the logs in the {{logs-app}} for a detailed analysis, click **Open in Lo :::{image} ../../../images/observability-logs-overlay-containers.png :alt: Container logs -:class: screenshot +:screenshot: ::: ::::: @@ -334,7 +334,7 @@ When you have searched and filtered for a specific resource, you can drill down :::{image} ../../../images/observability-pod-metrics.png :alt: Kubernetes pod metrics -:class: screenshot +:screenshot: ::: @@ -346,7 +346,7 @@ Select your resource, and from the **Metric** filter menu, click **Add metric**. 
:::{image} ../../../images/serverless-add-custom-metric.png :alt: Add custom metrics -:class: screenshot +:screenshot: ::: diff --git a/solutions/observability/logs/add-service-name-to-logs.md b/solutions/observability/logs/add-service-name-to-logs.md index 4371d0d6a..de52885cb 100644 --- a/solutions/observability/logs/add-service-name-to-logs.md +++ b/solutions/observability/logs/add-service-name-to-logs.md @@ -35,7 +35,7 @@ Adding the `add_fields` processor to an integration’s settings would add `your :::{image} ../../../images/serverless-add-field-processor.png :alt: Add the add_fields processor to an integration -:class: screenshot +:screenshot: ::: For more on defining processors, refer to [define processors](/reference/ingestion-tools/fleet/agent-processors.md). diff --git a/solutions/observability/logs/categorize-log-entries.md b/solutions/observability/logs/categorize-log-entries.md index 1c706421a..cee84e433 100644 --- a/solutions/observability/logs/categorize-log-entries.md +++ b/solutions/observability/logs/categorize-log-entries.md @@ -33,7 +33,7 @@ The **Categories** page lists all the log categories from the selected indices. :::{image} ../../../images/observability-log-categories.jpg :alt: Log categories -:class: screenshot +:screenshot: ::: The category row contains the following information: @@ -48,7 +48,7 @@ To view a log message under a particular category, click the arrow at the end of :::{image} ../../../images/observability-log-opened.png :alt: Opened log category -:class: screenshot +:screenshot: ::: For more information about categorization, go to [Detecting anomalous categories of data](../../../explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md). diff --git a/solutions/observability/logs/filter-aggregate-logs.md b/solutions/observability/logs/filter-aggregate-logs.md index 587a50f41..d5d9508b7 100644 --- a/solutions/observability/logs/filter-aggregate-logs.md +++ b/solutions/observability/logs/filter-aggregate-logs.md @@ -120,7 +120,7 @@ Under the **Documents** tab, you’ll see the filtered log data matching your qu :::{image} ../../../images/serverless-logs-kql-filter.png :alt: logs kql filter -:class: screenshot +:screenshot: ::: For more on using Discover, refer to the [Discover](../../../explore-analyze/discover.md) documentation. diff --git a/solutions/observability/logs/inspect-log-anomalies.md b/solutions/observability/logs/inspect-log-anomalies.md index d04ca76b4..7b04dfc48 100644 --- a/solutions/observability/logs/inspect-log-anomalies.md +++ b/solutions/observability/logs/inspect-log-anomalies.md @@ -44,7 +44,7 @@ If you have a lot of log partitions, use the following to filter your data: :::{image} ../../../images/observability-anomalies-chart.png :alt: Anomalies chart -:class: screenshot +:screenshot: ::: The chart shows the time range where anomalies were detected. The typical rate values are shown in gray, while the anomalous regions are color-coded and superimposed on top. diff --git a/solutions/observability/logs/run-pattern-analysis-on-log-data.md b/solutions/observability/logs/run-pattern-analysis-on-log-data.md index 98737632f..a80fa84d7 100644 --- a/solutions/observability/logs/run-pattern-analysis-on-log-data.md +++ b/solutions/observability/logs/run-pattern-analysis-on-log-data.md @@ -29,7 +29,7 @@ To run a log pattern analysis: :::{image} ../../../images/serverless-log-pattern-analysis.png :alt: Log pattern analysis of the message field - :class: screenshot + :screenshot: ::: 5. 
(Optional) Select one or more patterns, then choose to filter for (or filter out) documents that match the selected patterns. Discover only displays documents that match (or don’t match) the selected patterns. The filter options enable you to remove unimportant messages and focus on the more important, actionable data during troubleshooting. diff --git a/solutions/observability/observability-ai-assistant.md b/solutions/observability/observability-ai-assistant.md index bfeca9252..dfce8061d 100644 --- a/solutions/observability/observability-ai-assistant.md +++ b/solutions/observability/observability-ai-assistant.md @@ -17,7 +17,7 @@ The AI Assistant uses generative AI to provide: :::{image} ../../images/observability-obs-assistant2.gif :alt: Observability AI assistant preview -:class: screenshot +:screenshot: ::: The AI Assistant integrates with your large language model (LLM) provider through our supported {{stack}} connectors: @@ -241,7 +241,7 @@ This opens the AI Assistant flyout, where you can ask the assistant questions ab :::{image} ../../images/observability-obs-ai-chat.png :alt: Observability AI assistant chat -:class: screenshot +:screenshot: ::: ::::{important} @@ -314,14 +314,14 @@ For example, in the log details, you’ll see prompts for **What’s this messag :::{image} ../../images/observability-obs-ai-logs-prompts.png :alt: Observability AI assistant logs prompts -:class: screenshot +:screenshot: ::: Clicking a prompt generates a message specific to that log entry: :::{image} ../../images/observability-obs-ai-logs.gif :alt: Observability AI assistant example -:class: screenshot +:screenshot: ::: Continue a conversation from a contextual prompt by clicking **Start chat** to open the AI Assistant chat. @@ -338,7 +338,7 @@ Use the [Observability AI Assistant connector](kibana://reference/connectors-kib :::{image} ../../images/observability-obs-ai-assistant-action-high-cpu.png :alt: Add an Observability AI assistant action while creating a rule in the Observability UI - :class: screenshot + :screenshot: ::: @@ -353,7 +353,7 @@ When the alert fires, contextual details about the event—such as when the aler :::{image} ../../images/observability-obs-ai-assistant-output.png :alt: AI Assistant conversation created in response to an alert -:class: screenshot +:screenshot: ::: ::::{important} @@ -374,7 +374,7 @@ The `server.publicBaseUrl` setting must be correctly specified under {{kib}} set :::{image} ../../images/observability-obs-ai-assistant-slack-message.png :alt: Message sent by Slack by the AI Assistant includes a link to the conversation -:class: screenshot +:screenshot: ::: The Observability AI Assistant connector is called when the alert fires and when it recovers. diff --git a/solutions/search/rag/playground-query.md b/solutions/search/rag/playground-query.md index a40e3ab5d..fadd5fc31 100644 --- a/solutions/search/rag/playground-query.md +++ b/solutions/search/rag/playground-query.md @@ -28,7 +28,7 @@ The following screenshot shows the query editor in the Playground UI. In this si :::{image} ../../../images/kibana-query-interface.png :alt: View and modify queries -:class: screenshot +:screenshot: ::: Certain fields in your documents may be hidden. Learn more about [hidden fields](#playground-hidden-fields). 
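The query editor shows the {{es}} request body that Playground runs against your selected indices, and you can modify it directly. As a rough, hypothetical sketch only (the actual query depends on your indices and the fields you select; the index name, field names, and the `{query}` placeholder below are assumptions, not output copied from Playground), an editable lexical query might look like this:

```console
GET my-index/_search
{
  "retriever": {
    "standard": {
      "query": {
        "multi_match": {
          "query": "{query}",
          "fields": ["title", "content"]
        }
      }
    }
  }
}
```

Editing the field list in a query like this is the quickest way to control which document fields are searched when retrieving context.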
diff --git a/solutions/search/rag/playground.md b/solutions/search/rag/playground.md index 7de09dc4d..3bb26eb88 100644 --- a/solutions/search/rag/playground.md +++ b/solutions/search/rag/playground.md @@ -116,7 +116,7 @@ You can also use locally hosted LLMs that are compatible with the OpenAI SDK. On :::{image} ../../../images/kibana-get-started.png :alt: get started -:class: screenshot +:screenshot: ::: @@ -187,7 +187,7 @@ You can always add or remove indices later by selecting the **Data** button from :::{image} ../../../images/kibana-data-button.png :alt: data button -:class: screenshot +:screenshot: :width: 150px ::: @@ -206,14 +206,14 @@ The **chat mode** is selected when you first set up your Playground instance. :::{image} ../../../images/kibana-chat-interface.png :alt: chat interface -:class: screenshot +:screenshot: ::: To switch to **query mode**, select **Query** from the main UI. :::{image} ../../../images/kibana-query-interface.png :alt: query interface -:class: screenshot +:screenshot: ::: ::::{tip} @@ -255,7 +255,7 @@ Use the **View code** button to see the Python code that powers the chat interfa :::{image} ../../../images/kibana-view-code-button.png :alt: view code button -:class: screenshot +:screenshot: :width: 150px ::: diff --git a/solutions/search/retrievers-overview.md b/solutions/search/retrievers-overview.md index e1e7de820..dd50912a8 100644 --- a/solutions/search/retrievers-overview.md +++ b/solutions/search/retrievers-overview.md @@ -29,7 +29,7 @@ Retrievers come in various types, each tailored for different search operations. * [**Linear Retriever**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-retriever). Combines the top results from multiple sub-retrievers using a weighted sum of their scores. Allows to specify different weights for each retriever, as well as independently normalize the scores from each result set. * [**RRF Retriever**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-retriever). Combines and ranks multiple first-stage retrievers using the reciprocal rank fusion (RRF) algorithm. Allows you to combine multiple result sets with different relevance indicators into a single result set. An RRF retriever is a **compound retriever**, where its `filter` element is propagated to its sub retrievers. * [**Rule Retriever**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-retriever). Applies [query rules](elasticsearch://reference/elasticsearch/rest-apis/searching-with-query-rules.md#query-rules) to the query before returning results. -* [**Text Similarity Re-ranker Retriever**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-retriever). Used for [semantic reranking](ranking/semantic-reranking.md). Requires first creating a `rerank` task using the [{{es}} Inference API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put). +* [**Text Similarity Re-ranker Retriever**](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search#operation-search-body-application-json-retriever). Used for [semantic reranking](ranking/semantic-reranking.md). Requires first creating a `rerank` task using the [{{es}} Inference API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference). ## What makes retrievers useful? 
[retrievers-overview-why-are-they-useful] diff --git a/solutions/search/search-connection-details.md b/solutions/search/search-connection-details.md index 0d8863580..850417690 100644 --- a/solutions/search/search-connection-details.md +++ b/solutions/search/search-connection-details.md @@ -28,14 +28,14 @@ To connect to your {{es}} deployment, you need either a Cloud ID or an {{es}} en :::{image} ../../images/kibana-manage-deployment.png :alt: manage deployment - :class: screenshot + :screenshot: ::: 3. The Cloud ID is displayed on the right side of the page. :::{image} ../../images/kibana-cloud-id.png :alt: cloud id - :class: screenshot + :screenshot: ::: @@ -46,14 +46,14 @@ To connect to your {{es}} deployment, you need either a Cloud ID or an {{es}} en :::{image} ../../images/kibana-api-keys-search-bar.png :alt: api keys search bar - :class: screenshot + :screenshot: ::: 2. Click **Create API key**. :::{image} ../../images/kibana-click-create-api-key.png :alt: click create api key - :class: screenshot + :screenshot: ::: 3. Enter the API key details, and click **Create API key**. @@ -69,7 +69,7 @@ To connect to your {{es}} deployment, you need either a Cloud ID or an {{es}} en :::{image} ../../images/kibana-serverless-connection-details.png :alt: serverless connection details - :class: screenshot + :screenshot: ::: @@ -86,7 +86,7 @@ The **Cloud ID** is also displayed in the Copy your connection details section, :::{image} ../../images/kibana-serverless-create-an-api-key.png :alt: serverless create an api key - :class: screenshot + :screenshot: ::: 3. Enter the API key details, and click **Create API key**. diff --git a/solutions/search/search-pipelines.md b/solutions/search/search-pipelines.md index aca7f4049..f597d41db 100644 --- a/solutions/search/search-pipelines.md +++ b/solutions/search/search-pipelines.md @@ -24,7 +24,7 @@ The tab is highlighted in this screenshot: :::{image} /images/elasticsearch-reference-ingest-pipeline-ent-search-ui.png :alt: ingest pipeline ent search ui -:class: screenshot +:screenshot: ::: ## Overview [ingest-pipeline-search-in-enterprise-search] diff --git a/solutions/search/semantic-search.md b/solutions/search/semantic-search.md index 97818b1e6..e6f4cd4ed 100644 --- a/solutions/search/semantic-search.md +++ b/solutions/search/semantic-search.md @@ -35,7 +35,7 @@ This diagram summarizes the relative complexity of each workflow: ### Option 1: `semantic_text` [_semantic_text_workflow] -The simplest way to use NLP models in the {{stack}} is through the [`semantic_text` workflow](semantic-search/semantic-search-semantic-text.md). We recommend using this approach because it abstracts away a lot of manual work. All you need to do is create an {{infer}} endpoint and an index mapping to start ingesting, embedding, and querying data. There is no need to define model-related settings and parameters, or to create {{infer}} ingest pipelines. Refer to the [Create an {{infer}} endpoint API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) documentation for a list of supported services. +The simplest way to use NLP models in the {{stack}} is through the [`semantic_text` workflow](semantic-search/semantic-search-semantic-text.md). We recommend using this approach because it abstracts away a lot of manual work. All you need to do is create an {{infer}} endpoint and an index mapping to start ingesting, embedding, and querying data. There is no need to define model-related settings and parameters, or to create {{infer}} ingest pipelines. 
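To make the `semantic_text` workflow concrete, here is a minimal sketch, assuming you have already created an {{infer}} endpoint with the ID `my-elser-endpoint` (the index name, field name, and endpoint ID are placeholders for illustration, not required values):

```console
PUT my-semantic-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}
```

Documents indexed into the `content` field are then chunked and embedded automatically, and you can search the field with a `semantic` query or through a retriever.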
For more information about the supported services, refer to [](/explore-analyze/elastic-inference/inference-api.md) and the [{{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) documentation. For an end-to-end tutorial, refer to [Semantic search with `semantic_text`](semantic-search/semantic-search-semantic-text.md). diff --git a/solutions/search/semantic-search/semantic-search-semantic-text.md index b6a4580fe..f00f157ad 100644 --- a/solutions/search/semantic-search/semantic-search-semantic-text.md +++ b/solutions/search/semantic-search/semantic-search-semantic-text.md @@ -132,4 +132,4 @@ As a result, you receive the top 10 documents that are closest in meaning to the * If you want to use `semantic_text` in hybrid search, refer to [this notebook](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/notebooks/search/09-semantic-text.ipynb) for a step-by-step guide. * For more information on how to optimize your ELSER endpoints, refer to [the ELSER recommendations](/explore-analyze/machine-learning/nlp/ml-nlp-elser.md#elser-recommendations) section in the model documentation. -* To learn more about model autoscaling, refer to the [trained model autoscaling](/explore-analyze/machine-learning/nlp/ml-nlp-auto-scale.md) page. +* To learn more about model autoscaling, refer to the [trained model autoscaling](../../../deploy-manage/autoscaling/trained-model-autoscaling.md) page. diff --git a/solutions/security/advanced-entity-analytics/anomaly-detection.md index ebed36760..0d7b6fd08 100644 --- a/solutions/security/advanced-entity-analytics/anomaly-detection.md +++ b/solutions/security/advanced-entity-analytics/anomaly-detection.md @@ -18,7 +18,7 @@ If you have the appropriate role, you can use the **ML job settings** interface :::{image} ../../../images/security-ml-ui.png :alt: ML job settings UI on the Alerts page -:class: screenshot +:screenshot: ::: @@ -30,14 +30,14 @@ You can also check the status of {{ml}} detection rules, and start or stop their :::{image} ../../../images/security-rules-table-ml-job-error.png :alt: Rules table {{ml}} job error - :class: screenshot + :screenshot: ::: * On a rule’s details page, check the **Definition** section to confirm whether the required {{ml}} jobs are running. Switch the toggles on or off to run or stop each job. :::{image} ../../../images/security-rules-ts-ml-job-stopped.png :alt: Rule details page with ML job stopped - :class: screenshot + :screenshot: ::: @@ -56,7 +56,7 @@ Or * You install one or more of the [Advanced Analytics integrations](/solutions/security/advanced-entity-analytics/behavioral-detection-use-cases.md#ml-integrations). -[Prebuilt job reference](security-docs://reference/prebuilt-jobs.md) describes all available {{ml}} jobs and lists which ECS fields are required on your hosts when you are not using {{beats}} or the {{agent}} to ship your data. +[Prebuilt anomaly detection jobs](/reference/security/prebuilt-anomaly-detection-jobs.md) describes all available {{ml}} jobs and lists which ECS fields are required on your hosts when you are not using {{beats}} or the {{agent}} to ship your data.
For information on tuning anomaly results to reduce the number of false positives, see [Optimizing anomaly results](/solutions/security/advanced-entity-analytics/optimizing-anomaly-results.md). ::::{note} Machine learning jobs look back and analyze two weeks of historical data prior to the time they are enabled. After jobs are enabled, they continuously analyze incoming data. When jobs are stopped and restarted within the two-week time frame, previously analyzed data is not processed again. diff --git a/solutions/security/advanced-entity-analytics/asset-criticality.md b/solutions/security/advanced-entity-analytics/asset-criticality.md index 7571c4f8d..e35359f66 100644 --- a/solutions/security/advanced-entity-analytics/asset-criticality.md +++ b/solutions/security/advanced-entity-analytics/asset-criticality.md @@ -41,21 +41,21 @@ You can view, assign, change, or unassign asset criticality from the following p :::{image} ../../../images/security-assign-asset-criticality-host-details.png :alt: Assign asset criticality from the host details page - :class: screenshot + :screenshot: ::: * The [host details flyout](../explore/hosts-page.md#host-details-flyout) and [user details flyout](../explore/users-page.md#user-details-flyout): :::{image} ../../../images/security-assign-asset-criticality-host-flyout.png :alt: Assign asset criticality from the host details flyout - :class: screenshot + :screenshot: ::: * The host details flyout and user details flyout in [Timeline](../investigate/timeline.md): :::{image} ../../../images/security-assign-asset-criticality-timeline.png :alt: Assign asset criticality from the host details flyout in Timeline - :class: screenshot + :screenshot: ::: @@ -63,7 +63,7 @@ If you have enabled the [entity store](entity-store.md), you can also view asset :::{image} ../../../images/security-entities-section.png :alt: Entities section -:class: screenshot +:screenshot: ::: @@ -136,6 +136,6 @@ To view the impact of asset criticality on an entity’s risk score, follow thes :::{image} ../../../images/security-asset-criticality-impact.png :alt: View asset criticality impact on host risk score -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/advanced-entity-analytics/behavioral-detection-use-cases.md b/solutions/security/advanced-entity-analytics/behavioral-detection-use-cases.md index 9bcbdbbb1..5a6ae4b26 100644 --- a/solutions/security/advanced-entity-analytics/behavioral-detection-use-cases.md +++ b/solutions/security/advanced-entity-analytics/behavioral-detection-use-cases.md @@ -32,5 +32,5 @@ Here’s a list of integrations for various behavioral detection use cases: * [Living off the Land Attack Detection](https://docs.elastic.co/en/integrations/problemchild) * [Network Beaconing Identification](https://docs.elastic.co/en/integrations/beaconing) -To learn more about {{ml}} jobs enabled by these integrations, refer to the [Prebuilt jobs page](security-docs://reference/prebuilt-jobs.md). +To learn more about {{ml}} jobs enabled by these integrations, refer to the [Prebuilt anomaly detection jobs page](/reference/security/prebuilt-anomaly-detection-jobs.md). 
diff --git a/solutions/security/advanced-entity-analytics/optimizing-anomaly-results.md b/solutions/security/advanced-entity-analytics/optimizing-anomaly-results.md index 8adbc0013..6b4b67ad1 100644 --- a/solutions/security/advanced-entity-analytics/optimizing-anomaly-results.md +++ b/solutions/security/advanced-entity-analytics/optimizing-anomaly-results.md @@ -38,7 +38,7 @@ For example, to filter out results from a housekeeping process, named `maintenan :::{image} ../../../images/security-filter-add-item.png :alt: filter add item - :class: screenshot + :screenshot: ::: 8. Click **Add** and then **Save**. @@ -58,7 +58,7 @@ For example, to filter out results from a housekeeping process, named `maintenan :::{image} ../../../images/security-rule-scope.png :alt: rule scope - :class: screenshot + :screenshot: ::: 5. Select: @@ -103,7 +103,7 @@ Running the cloned job can take some time. Only run the job after you have compl :::{image} ../../../images/security-cloned-job-details.png :alt: cloned job details - :class: screenshot + :screenshot: ::: 9. Click **Next** and check the job validates without errors. You can ignore warnings about multiple influencers. @@ -113,7 +113,7 @@ Running the cloned job can take some time. Only run the job after you have compl :::{image} ../../../images/security-start-job-window.png :alt: start job window - :class: screenshot + :screenshot: ::: 11. Select the point of time from which the job will analyze anomalies. @@ -138,7 +138,7 @@ Depending on your anomaly detection results, you may want to set a minimum event :::{image} ../../../images/security-ml-rule-threshold.png :alt: ml rule threshold - :class: screenshot + :screenshot: ::: 5. Select *Add numeric conditions for when the rule applies* and the following `when` statement: diff --git a/solutions/security/advanced-entity-analytics/turn-on-risk-scoring-engine.md b/solutions/security/advanced-entity-analytics/turn-on-risk-scoring-engine.md index 6a777fc6d..f73ef6f34 100644 --- a/solutions/security/advanced-entity-analytics/turn-on-risk-scoring-engine.md +++ b/solutions/security/advanced-entity-analytics/turn-on-risk-scoring-engine.md @@ -25,7 +25,7 @@ To preview risky entities, find **Entity Risk Score** in the navigation menu or :::{image} ../../../images/security-preview-risky-entities.png :alt: Preview of risky entities -:class: screenshot +:screenshot: ::: @@ -47,7 +47,7 @@ You can also choose to include `Closed` alerts in risk scoring calculations and :::{image} ../../../images/security-turn-on-risk-engine.png :alt: Turn on entity risk scoring -:class: screenshot +:screenshot: ::: @@ -66,7 +66,7 @@ If you upgraded to 8.11 from an earlier {{stack}} version, and you have the orig :::{image} ../../../images/security-risk-engine-upgrade-prompt.png :alt: Prompt to upgrade to the latest risk engine -:class: screenshot +:screenshot: ::: 1. Click **Manage** in the upgrade prompt, or find **Entity Risk Score** in the navigation menu. @@ -74,7 +74,7 @@ If you upgraded to 8.11 from an earlier {{stack}} version, and you have the orig :::{image} ../../../images/security-risk-score-start-update.png :alt: Start the risk engine upgrade - :class: screenshot + :screenshot: ::: 3. On the confirmation message, click **Yes, update now**. The old transform is removed and the latest risk engine is installed. 
@@ -82,7 +82,7 @@ If you upgraded to 8.11 from an earlier {{stack}} version, and you have the orig :::{image} ../../../images/security-turn-on-risk-engine.png :alt: Turn on entity risk scoring - :class: screenshot + :screenshot: ::: diff --git a/solutions/security/advanced-entity-analytics/view-analyze-risk-score-data.md b/solutions/security/advanced-entity-analytics/view-analyze-risk-score-data.md index 0f82eec35..f52f0e4b0 100644 --- a/solutions/security/advanced-entity-analytics/view-analyze-risk-score-data.md +++ b/solutions/security/advanced-entity-analytics/view-analyze-risk-score-data.md @@ -29,7 +29,7 @@ If you have enabled the [entity store](entity-store.md), the dashboard also disp :::{image} ../../../images/security-entity-dashboard.png :alt: Entity Analytics dashboard -:class: screenshot +:screenshot: ::: @@ -56,7 +56,7 @@ Learn more about [customizing the Alerts table](../detect-and-alert/manage-detec :::{image} ../../../images/security-alerts-table-rs.png :alt: Risk scores in the Alerts table -:class: screenshot +:screenshot: ::: @@ -75,14 +75,14 @@ If you change the entity’s criticality level after an alert is generated, that :::{image} ../../../images/security-filter-by-host-risk-level.png :alt: Alerts filtered by high host risk level - :class: screenshot + :screenshot: ::: * `user.asset.criticality` or `host.asset.criticality` for asset criticality level: :::{image} ../../../images/security-filter-by-asset-criticality.png :alt: Filter alerts by asset criticality level - :class: screenshot + :screenshot: ::: * To group alerts by entity risk level or asset criticality level, select **Group alerts by**, then select **Custom field** and search for: @@ -91,14 +91,14 @@ If you change the entity’s criticality level after an alert is generated, that :::{image} ../../../images/security-group-by-host-risk-level.png :alt: Alerts grouped by host risk levels - :class: screenshot + :screenshot: ::: * `host.asset.criticality` or `user.asset.criticality` for asset criticality level: :::{image} ../../../images/security-group-by-asset-criticality.png :alt: Alerts grouped by entity asset criticality levels - :class: screenshot + :screenshot: ::: * You can further sort the grouped alerts by highest entity risk score: @@ -114,7 +114,7 @@ If you change the entity’s criticality level after an alert is generated, that :::{image} ../../../images/security-hrl-sort-by-host-risk-score.png :alt: High-risk alerts sorted by host risk score - :class: screenshot + :screenshot: ::: @@ -125,7 +125,7 @@ To access risk score data in the alert details flyout, select **Insights** → * :::{image} ../../../images/security-alerts-flyout-rs.png :alt: Risk scores in the Alerts flyout -:class: screenshot +:screenshot: ::: @@ -137,14 +137,14 @@ On the Hosts and Users pages, you can access the risk score data: :::{image} ../../../images/security-hosts-hr-level.png :alt: Host risk level data on the All hosts tab of the Hosts page - :class: screenshot + :screenshot: ::: * On the **Host risk** or **User risk** tab: :::{image} ../../../images/security-hosts-hr-data.png :alt: Host risk data on the Host risk tab of the Hosts page - :class: screenshot + :screenshot: ::: @@ -157,14 +157,14 @@ On the host details and user details pages, you can access the risk score data: :::{image} ../../../images/security-host-details-overview.png :alt: Host risk data in the Overview section of the host details page - :class: screenshot + :screenshot: ::: * On the **Host risk** or **User risk** tab: :::{image} 
../../../images/security-host-details-hr-tab.png :alt: Host risk data on the Host risk tab of the host details page - :class: screenshot + :screenshot: ::: @@ -175,5 +175,5 @@ In the host details and user details flyouts, you can access the risk score data :::{image} ../../../images/security-risk-summary.png :alt: Host risk data in the Host risk summary section -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/ai/ai-assistant.md b/solutions/security/ai/ai-assistant.md index cd26aaadc..be200bd10 100644 --- a/solutions/security/ai/ai-assistant.md +++ b/solutions/security/ai/ai-assistant.md @@ -23,7 +23,7 @@ The Elastic AI Assistant utilizes generative AI to bolster your cybersecurity op :::{image} ../../../images/security-assistant-basic-view.png :alt: Image of AI Assistant chat window -:class: screenshot +:screenshot: ::: ::::{warning} @@ -71,7 +71,7 @@ To open AI Assistant, select the **AI Assistant** button in the top toolbar from :::{image} ../../../images/security-ai-assistant-button.png :alt: AI Assistant button -:class: screenshot +:screenshot: ::: This opens the **Welcome** chat interface, where you can ask general questions about {{elastic-sec}}. @@ -98,7 +98,7 @@ Use these features to adjust and act on your conversations with AI Assistant: :::{image} ../../../images/security-quick-prompts.png :alt: Quick Prompts highlighted below a conversation - :class: screenshot + :screenshot: ::: * System Prompts and Quick Prompts can also be configured from the corresponding tabs on the **Security AI settings** page. @@ -155,7 +155,7 @@ You can access anonymization settings directly from the **Attack Discovery** pag :::{image} ../../../images/security-assistant-anonymization-menu.png :alt: AI Assistant's settings menu -:class: screenshot +:screenshot: ::: The fields on this list are among those most likely to provide relevant context to AI Assistant. Fields with **Allowed** toggled on are included. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated. 
diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md index 3abbb5904..c1e532aa8 100644 --- a/solutions/security/ai/triage-alerts.md +++ b/solutions/security/ai/triage-alerts.md @@ -58,5 +58,5 @@ After you review the report, click **Add to existing case** at the top of AI Ass :::{image} ../../../images/security-ai-triage-add-to-case.png :alt: An AI Assistant dialogue with the add to existing case button highlighted -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/cloud/benchmarks.md b/solutions/security/cloud/benchmarks.md index 67c56e829..832c8defd 100644 --- a/solutions/security/cloud/benchmarks.md +++ b/solutions/security/cloud/benchmarks.md @@ -19,7 +19,7 @@ The Benchmarks page lets you view the cloud security posture (CSP) benchmark rul :::{image} ../../../images/security-benchmark-rules.png :alt: Benchmarks page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/cloud/findings-page-2.md b/solutions/security/cloud/findings-page-2.md index cfd9d324c..daa84a8d2 100644 --- a/solutions/security/cloud/findings-page-2.md +++ b/solutions/security/cloud/findings-page-2.md @@ -11,7 +11,7 @@ The **Misconfigurations** tab on the Findings page displays the configuration ri :::{image} ../../../images/security-findings-page.png :alt: Findings page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/cloud/findings-page-3.md b/solutions/security/cloud/findings-page-3.md index a0ec1fcac..b72ca1d7a 100644 --- a/solutions/security/cloud/findings-page-3.md +++ b/solutions/security/cloud/findings-page-3.md @@ -10,7 +10,7 @@ The **Vulnerabilities** tab on the Findings page displays the vulnerabilities de :::{image} ../../../images/serverless--cloud-native-security-cnvm-findings-page.png :alt: The Vulnerabilities tab of the Findings page -:class: screenshot +:screenshot: ::: @@ -41,7 +41,7 @@ Multiple groupings apply to your data in the order you selected them. For exampl :::{image} ../../../images/serverless--cloud-native-security-cnvm-findings-grouped.png :alt: The Vulnerabilities tab of the Findings page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/cloud/findings-page.md b/solutions/security/cloud/findings-page.md index c97a3f907..898f99b86 100644 --- a/solutions/security/cloud/findings-page.md +++ b/solutions/security/cloud/findings-page.md @@ -14,7 +14,7 @@ The **Misconfigurations** tab on the Findings page displays the configuration ri :::{image} ../../../images/security-findings-page.png :alt: Findings page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/cloud/get-started-with-cspm-for-aws.md b/solutions/security/cloud/get-started-with-cspm-for-aws.md index 3a0f30fdf..3df4e07dc 100644 --- a/solutions/security/cloud/get-started-with-cspm-for-aws.md +++ b/solutions/security/cloud/get-started-with-cspm-for-aws.md @@ -28,11 +28,6 @@ You can set up CSPM for AWS either by enrolling a single cloud account, or by en ## Agentless deployment [cspm-aws-agentless] -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. -:::: - - 1. Find **Integrations** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Search for `CSPM`, then click on the result. 3. Click **Add Cloud Security Posture Management (CSPM)**. 
diff --git a/solutions/security/cloud/get-started-with-cspm-for-azure.md b/solutions/security/cloud/get-started-with-cspm-for-azure.md index a90605c81..5b7cd3bf3 100644 --- a/solutions/security/cloud/get-started-with-cspm-for-azure.md +++ b/solutions/security/cloud/get-started-with-cspm-for-azure.md @@ -28,11 +28,6 @@ You can set up CSPM for Azure by by enrolling an Azure organization (management ## Agentless deployment [cspm-azure-agentless] -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. -:::: - - 1. Find **Integrations** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Search for `CSPM`, then click on the result. 3. Click **Add Cloud Security Posture Management (CSPM)**. diff --git a/solutions/security/cloud/get-started-with-cspm-for-gcp.md b/solutions/security/cloud/get-started-with-cspm-for-gcp.md index 62e03b2b4..046b4f8cd 100644 --- a/solutions/security/cloud/get-started-with-cspm-for-gcp.md +++ b/solutions/security/cloud/get-started-with-cspm-for-gcp.md @@ -6,20 +6,6 @@ mapped_urls: # Get started with CSPM for GCP -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/cspm-get-started-gcp.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-cspm-get-started-gcp.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$cspm-gcp-agent-based$$$ - -$$$cspm-gcp-agentless$$$ - - ## Overview [cspm-overview-gcp] This page explains how to get started monitoring the security posture of your GCP cloud assets using the Cloud Security Posture Management (CSPM) feature. @@ -42,11 +28,6 @@ You can set up CSPM for GCP either by enrolling a single project, or by enrollin ## Agentless deployment [cspm-gcp-agentless] -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. -:::: - - 1. Find **Integrations** in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Search for `CSPM`, then click on the result. 3. Click **Add Cloud Security Posture Management (CSPM)**. 
diff --git a/solutions/security/cloud/get-started-with-kspm.md b/solutions/security/cloud/get-started-with-kspm.md index 10e8471a2..ba65b0031 100644 --- a/solutions/security/cloud/get-started-with-kspm.md +++ b/solutions/security/cloud/get-started-with-kspm.md @@ -252,7 +252,7 @@ To install the integration on unmanaged clusters: :::{image} ../../../images/security-kspm-add-agent-wizard.png :alt: The KSPM integration's Add agent wizard -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md b/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md index f1ae6f838..254d592ef 100644 --- a/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md +++ b/solutions/security/configure-elastic-defend/configure-an-integration-policy-for-elastic-defend.md @@ -82,7 +82,7 @@ If you have the appropriate license or project feature, you can customize these :::{image} ../../../images/security-malware-protection.png :alt: Detail of malware protection section. -:class: screenshot +:screenshot: ::: @@ -134,7 +134,7 @@ If you have the appropriate license or project feature, you can customize these :::{image} ../../../images/security-ransomware-protection.png :alt: Detail of ransomware protection section. -:class: screenshot +:screenshot: ::: @@ -163,7 +163,7 @@ If you have the appropriate license or project feature, you can customize these :::{image} ../../../images/security-memory-protection.png :alt: Detail of memory protection section. -:class: screenshot +:screenshot: ::: @@ -199,7 +199,7 @@ If you have the appropriate license or project feature, you can customize these :::{image} ../../../images/security-behavior-protection.png :alt: Detail of behavior protection section. -:class: screenshot +:screenshot: ::: @@ -216,7 +216,7 @@ In {{serverless-short}}, attack surface reduction requires the Endpoint Protecti :::{image} ../../../images/security-attack-surface-reduction.png :alt: Detail of attack surface reduction section. -:class: screenshot +:screenshot: ::: @@ -226,7 +226,7 @@ In the **Settings** section, select which categories of events to collect on eac :::{image} ../../../images/security-event-collection.png :alt: Detail of event collection section. -:class: screenshot +:screenshot: ::: @@ -245,7 +245,7 @@ If you don’t want to sync antivirus registration, you can set it manually with :::{image} ../../../images/security-register-as-antivirus.png :alt: Detail of Register as antivirus option. 
-:class: screenshot +:screenshot: ::: diff --git a/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md b/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md index e220ebd27..860059ad3 100644 --- a/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md +++ b/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md @@ -89,7 +89,7 @@ Set the `advanced.artifacts.global.base_url` advanced setting for each [{{elasti :::{image} ../../../images/security-offline-adv-settings.png :alt: Integration policy advanced settings -:class: screenshot +:screenshot: ::: @@ -171,7 +171,7 @@ Set the `advanced.artifacts.global.base_url` advanced setting for each [{{elasti :::{image} ../../../images/security-offline-adv-settings.png :alt: Integration policy advanced settings -:class: screenshot +:screenshot: ::: @@ -206,6 +206,6 @@ After updating the {{elastic-endpoint}} configuration to read from the mirror se :::{image} ../../../images/security-offline-endpoint-version-discover.png :alt: Searching for `endpoint.policy` in Discover -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/configure-elastic-defend/deploy-on-macos-with-mdm.md b/solutions/security/configure-elastic-defend/deploy-on-macos-with-mdm.md index bfbdd5b75..556d70e8a 100644 --- a/solutions/security/configure-elastic-defend/deploy-on-macos-with-mdm.md +++ b/solutions/security/configure-elastic-defend/deploy-on-macos-with-mdm.md @@ -38,7 +38,7 @@ In Jamf, create a configuration profile for {{elastic-endpoint}}. Follow these s :::{image} ../../../images/security-system-extension-jamf.png :alt: system extension jamf -:class: screenshot +:screenshot: ::: @@ -69,7 +69,7 @@ In Jamf, create a configuration profile for {{elastic-endpoint}}. Follow these s :::{image} ../../../images/security-content-filtering-jamf.png :alt: content filtering jamf -:class: screenshot +:screenshot: ::: @@ -92,7 +92,7 @@ In Jamf, create a configuration profile for {{elastic-endpoint}}. Follow these s :::{image} ../../../images/security-notifications-jamf.png :alt: notifications jamf -:class: screenshot +:screenshot: ::: @@ -139,7 +139,7 @@ In Jamf, create a configuration profile for {{elastic-endpoint}}. Follow these s :::{image} ../../../images/security-fda-jamf.png :alt: fda jamf -:class: screenshot +:screenshot: ::: After you complete these steps, generate the mobile configuration profile and install it onto the macOS machines. Once the profile is installed, {{elastic-defend}} can be deployed without the need for user interaction. diff --git a/solutions/security/configure-elastic-defend/enable-access-for-macos-monterey.md b/solutions/security/configure-elastic-defend/enable-access-for-macos-monterey.md index dfdebb9c5..f9a119b0b 100644 --- a/solutions/security/configure-elastic-defend/enable-access-for-macos-monterey.md +++ b/solutions/security/configure-elastic-defend/enable-access-for-macos-monterey.md @@ -68,7 +68,7 @@ The following instructions apply only to {{elastic-endpoint}} running version 8. :::{image} ../../../images/security-sec-privacy-pane.png :alt: sec privacy pane - :class: screenshot + :screenshot: ::: 3. On the **Security and Privacy** pane, select the **Privacy** tab. @@ -76,7 +76,7 @@ The following instructions apply only to {{elastic-endpoint}} running version 8. 
:::{image} ../../../images/security-select-fda.png :alt: Select Full Disk Access - :class: screenshot + :screenshot: ::: 5. In the lower-left corner of the pane, click the **Lock button**, then enter your credentials to authenticate. diff --git a/solutions/security/configure-elastic-defend/enable-access-for-macos-ventura-higher.md b/solutions/security/configure-elastic-defend/enable-access-for-macos-ventura-higher.md index af0a0633a..8f3945f37 100644 --- a/solutions/security/configure-elastic-defend/enable-access-for-macos-ventura-higher.md +++ b/solutions/security/configure-elastic-defend/enable-access-for-macos-ventura-higher.md @@ -26,7 +26,7 @@ The following message appears during installation: :::{image} ../../../images/security-system_extension_blocked_warning_ven.png :alt: system extension blocked warning ven -:class: screenshot +:screenshot: ::: 1. Click **Open System Settings**. @@ -34,21 +34,21 @@ The following message appears during installation: :::{image} ../../../images/security-privacy_security_ven.png :alt: privacy security ven - :class: screenshot + :screenshot: ::: 3. On the right pane, scroll down to the Security section. Click **Allow** to allow the ElasticEndpoint system extension to load. :::{image} ../../../images/security-allow_system_extension_ven.png :alt: allow system extension ven - :class: screenshot + :screenshot: ::: 4. Enter your username and password and click **Modify Settings** to save your changes. :::{image} ../../../images/security-enter_login_details_to_confirm_ven.png :alt: enter login details to confirm ven - :class: screenshot + :screenshot: ::: @@ -59,7 +59,7 @@ After successfully loading the ElasticEndpoint system extension, an additional m :::{image} ../../../images/security-allow_network_filter_ven.png :alt: allow network filter ven -:class: screenshot +:screenshot: ::: Click **Allow** to enable content filtering for the ElasticEndpoint system extension. Without this approval, {{elastic-endpoint}} cannot receive network events and, therefore, cannot enable network-related features such as [host isolation](../endpoint-response-actions/isolate-host.md). @@ -73,7 +73,7 @@ If you have not granted Full Disk Access, the following notification prompt will :::{image} ../../../images/security-allow_full_disk_access_notification_ven.png :alt: allow full disk access notification ven -:class: screenshot +:screenshot: ::: To enable Full Disk Access, you must manually approve {{elastic-endpoint}}. @@ -88,21 +88,21 @@ The following instructions apply only to {{elastic-endpoint}} version 8.0.0 and :::{image} ../../../images/security-privacy_security_ven.png :alt: privacy security ven - :class: screenshot + :screenshot: ::: 3. From the right pane, select **Full Disk Access**. :::{image} ../../../images/security-select_fda_ven.png :alt: Select Full Disk Access - :class: screenshot + :screenshot: ::: 4. Enable `ElasticEndpoint` and `co.elastic` to properly enable Full Disk Access. :::{image} ../../../images/security-allow_fda_ven.png :alt: allow fda ven - :class: screenshot + :screenshot: ::: @@ -113,7 +113,7 @@ If the endpoint is running {{elastic-endpoint}} version 7.17.0 or earlier: :::{image} ../../../images/security-enter_login_details_to_confirm_ven.png :alt: enter login details to confirm ven - :class: screenshot + :screenshot: ::: 3. Navigate to `/Library/Elastic/Endpoint`, then select the `elastic-endpoint` file. 
@@ -122,6 +122,6 @@ If the endpoint is running {{elastic-endpoint}} version 7.17.0 or earlier: :::{image} ../../../images/security-verify_fed_granted_ven.png :alt: Select Full Disk Access - :class: screenshot + :screenshot: ::: diff --git a/solutions/security/configure-elastic-defend/install-elastic-defend.md b/solutions/security/configure-elastic-defend/install-elastic-defend.md index 35aa62b0d..96b44ccfc 100644 --- a/solutions/security/configure-elastic-defend/install-elastic-defend.md +++ b/solutions/security/configure-elastic-defend/install-elastic-defend.md @@ -34,7 +34,7 @@ If you’re using macOS, some versions may require you to grant Full Disk Access :::{image} ../../../images/security-endpoint-cloud-sec-integrations-page.png :alt: Search result for "{{elastic-defend}}" on the Integrations page. - :class: screenshot + :screenshot: ::: 2. Search for and select **{{elastic-defend}}**, then select **Add {{elastic-defend}}**. The integration configuration page appears. @@ -46,7 +46,7 @@ If you’re using macOS, some versions may require you to grant Full Disk Access :::{image} ../../../images/security-endpoint-cloud-security-configuration.png :alt: Add {{elastic-defend}} integration page - :class: screenshot + :screenshot: ::: 3. Configure the {{elastic-defend}} integration with an **Integration name** and optional **Description**. @@ -97,7 +97,7 @@ If you have upgraded to an {{stack}} version that includes {{fleet-server}} 7.13 :::{image} ../../../images/security-endpoint-cloud-sec-add-agent.png :alt: Add agent flyout on the Fleet page. - :class: screenshot + :screenshot: ::: 2. Select an agent policy for the {{agent}}. You can select an existing policy, or select **Create new agent policy** to create a new one. For more details on {{agent}} configuration settings, refer to [{{agent}} policies](/reference/ingestion-tools/fleet/agent-policy.md). @@ -106,7 +106,7 @@ If you have upgraded to an {{stack}} version that includes {{fleet-server}} 7.13 :::{image} ../../../images/security-endpoint-cloud-sec-add-agent-detail.png :alt: Add agent flyout with {{elastic-defend}} integration highlighted. - :class: screenshot + :screenshot: ::: 3. Ensure that the **Enroll in {{fleet}}** option is selected. {{elastic-defend}} cannot be integrated with {{agent}} in standalone mode. 
diff --git a/solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation.md index dce3bbe25..c17c7e8f0 100644 --- a/solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation.md +++ b/solutions/security/configure-elastic-defend/prevent-elastic-agent-uninstallation.md @@ -22,7 +22,7 @@ When enabled, {{agent}} and {{elastic-endpoint}} can only be uninstalled on the :::{image} ../../../images/security-agent-tamper-protection.png :alt: Agent tamper protection setting highlighted on Agent policy settings page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/dashboards/cloud-security-posture-dashboard.md index efddd0ad6..e3ad4698b 100644 --- a/solutions/security/dashboards/cloud-security-posture-dashboard.md +++ b/solutions/security/dashboards/cloud-security-posture-dashboard.md @@ -14,7 +14,7 @@ The Cloud Security Posture dashboard summarizes your cloud infrastructure’s ov :::{image} ../../../images/security-cloud-sec-dashboard.png :alt: The cloud Security dashboard -:class: screenshot +:screenshot: ::: The Cloud Security Posture dashboard shows: @@ -42,7 +42,7 @@ Below the summary section, each row shows the CSP for a benchmark that applies t :::{image} ../../../images/security-cloud-sec-dashboard-individual-row.png :alt: A row representing a single cluster in the Cloud Security Posture dashboard -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/dashboards/data-quality-dashboard.md index 181f620e5..01550f54e 100644 --- a/solutions/security/dashboards/data-quality-dashboard.md +++ b/solutions/security/dashboards/data-quality-dashboard.md @@ -10,7 +10,7 @@ The Data Quality dashboard shows you whether your data is correctly mapped to th :::{image} ../../../images/security-data-qual-dash.png :alt: The Data Quality dashboard -:class: screenshot +:screenshot: ::: Use the Data Quality dashboard to: @@ -77,7 +77,7 @@ After an index is checked, a **Pass** or **Fail** status appears. **Fail** indic :::{image} ../../../images/security-data-qual-dash-detail.png :alt: An expanded index with some failed results in the Data Quality dashboard -:class: screenshot +:screenshot: ::: The index check flyout provides more information about the status of fields in that index. Each of its tabs describes fields grouped by mapping status.
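If you want to spot-check how a specific field is mapped outside the dashboard, you can also inspect the mapping directly with the field mapping API. This is only an illustrative sketch; the index pattern and field name below are examples, not values tied to the dashboard:

```console
GET logs-*/_mapping/field/host.name
```

The response shows the concrete mapping for that field in each matching index, which you can compare against the ECS definition the dashboard checks against.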
diff --git a/solutions/security/dashboards/detection-response-dashboard.md b/solutions/security/dashboards/detection-response-dashboard.md index 9734aa172..dbdc8f360 100644 --- a/solutions/security/dashboards/detection-response-dashboard.md +++ b/solutions/security/dashboards/detection-response-dashboard.md @@ -10,7 +10,7 @@ The Detection & Response dashboard provides focused visibility into the day-to-d :::{image} ../../../images/security-detection-response-dashboard.png :alt: Overview of Detection & Response dashboard -:class: screenshot +:screenshot: ::: Interact with various dashboard elements: diff --git a/solutions/security/dashboards/detection-rule-monitoring-dashboard.md b/solutions/security/dashboards/detection-rule-monitoring-dashboard.md index 6eb393be7..fa2b65117 100644 --- a/solutions/security/dashboards/detection-rule-monitoring-dashboard.md +++ b/solutions/security/dashboards/detection-rule-monitoring-dashboard.md @@ -10,7 +10,7 @@ The Detection rule monitoring dashboard provides visualizations to help you moni :::{image} ../../../images/security-rule-monitoring-overview.png :alt: Overview of Detection rule monitoring dashboard -:class: screenshot +:screenshot: ::: ::::{admonition} Requirements diff --git a/solutions/security/dashboards/entity-analytics-dashboard.md b/solutions/security/dashboards/entity-analytics-dashboard.md index 42615de4c..667e6d5b0 100644 --- a/solutions/security/dashboards/entity-analytics-dashboard.md +++ b/solutions/security/dashboards/entity-analytics-dashboard.md @@ -24,7 +24,7 @@ The dashboard includes the following sections: :::{image} ../../../images/security-entity-dashboard.png :alt: Entity dashboard -:class: screenshot +:screenshot: ::: @@ -45,7 +45,7 @@ Displays user risk score data for your environment, including the total number o :::{image} ../../../images/security-user-score-data.png :alt: User risk table -:class: screenshot +:screenshot: ::: Interact with the table to filter data, view more details, and take action: @@ -70,7 +70,7 @@ Displays host risk score data for your environment, including the total number o :::{image} ../../../images/security-host-score-data.png :alt: Host risk scores table -:class: screenshot +:screenshot: ::: Interact with the table to filter data, view more details, and take action: @@ -105,7 +105,7 @@ The **Entities** table only shows a subset of the data available for each entity :::{image} ../../../images/security-entities-section.png :alt: Entities section -:class: screenshot +:screenshot: ::: Entity data from different sources appears in the **Entities** section based on the following timelines: @@ -128,13 +128,13 @@ Interact with the table to filter data and view more details: Anomaly detection jobs identify suspicious or irregular behavior patterns. The Anomalies table displays the total number of anomalies identified by these prebuilt {{ml}} jobs (named in the **Anomaly name** column). ::::{admonition} Requirements -To display anomaly results, you must [install and run](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md) one or more [prebuilt anomaly detection jobs](security-docs://reference/prebuilt-jobs.md). You cannot add custom anomaly detection jobs to the Entity Analytics dashboard. +To display anomaly results, you must [install and run](/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md) one or more [prebuilt anomaly detection jobs](/reference/security/prebuilt-anomaly-detection-jobs.md). 
You cannot add custom anomaly detection jobs to the Entity Analytics dashboard. :::: :::{image} ../../../images/security-anomalies-table.png :alt: Anomalies table -:class: screenshot +:screenshot: ::: Interact with the table to view more details: diff --git a/solutions/security/dashboards/overview-dashboard.md b/solutions/security/dashboards/overview-dashboard.md index 9f5f83e0c..bfea217a2 100644 --- a/solutions/security/dashboards/overview-dashboard.md +++ b/solutions/security/dashboards/overview-dashboard.md @@ -40,7 +40,7 @@ View event and host counts grouped by data source, such as **Auditbeat** or **{{ :::{image} ../../../images/security-events-count.png :alt: Host and network events on the Overview dashboard -:class: screenshot +:screenshot: ::: @@ -57,6 +57,6 @@ For more information about connecting to threat intelligence sources, visit [Ena :::{image} ../../../images/security-threat-intelligence-view.png :alt: threat intelligence view -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert.md b/solutions/security/detect-and-alert.md index 5b04bcaff..6c911180d 100644 --- a/solutions/security/detect-and-alert.md +++ b/solutions/security/detect-and-alert.md @@ -25,7 +25,7 @@ Use the detection engine to create and manage rules and view the alerts these ru :::{image} ../../images/security-alert-page.png :alt: Alerts page -:class: screenshot +:screenshot: ::: In addition to creating [your own rules](/solutions/security/detect-and-alert/create-detection-rule.md), enable [Elastic prebuilt rules](/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md#load-prebuilt-rules) to immediately start detecting suspicious activity. For detailed information on all the prebuilt rules, see the [*Prebuilt rule reference*](security-docs://reference/prebuilt-rules/index.md) section. Once the prebuilt rules are loaded and running, [*Tune detection rules*](/solutions/security/detect-and-alert/tune-detection-rules.md) and [Add and manage exceptions](/solutions/security/detect-and-alert/add-manage-exceptions.md) explain how to modify the rules to reduce false positives and get a better set of actionable alerts. You can also use exceptions and value lists when creating or modifying your own rules. 
diff --git a/solutions/security/detect-and-alert/about-building-block-rules.md b/solutions/security/detect-and-alert/about-building-block-rules.md index f8407f380..44cc26b35 100644 --- a/solutions/security/detect-and-alert/about-building-block-rules.md +++ b/solutions/security/detect-and-alert/about-building-block-rules.md @@ -25,7 +25,7 @@ To create a rule that searches alert indices, select **Index Patterns** as the r :::{image} ../../../images/security-alert-indices-ui.png :alt: alert indices ui -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/about-detection-rules.md b/solutions/security/detect-and-alert/about-detection-rules.md index 7b83fb60f..3253620f9 100644 --- a/solutions/security/detect-and-alert/about-detection-rules.md +++ b/solutions/security/detect-and-alert/about-detection-rules.md @@ -53,7 +53,7 @@ You can create the following types of rules: :::{image} ../../../images/security-all-rules.png :alt: Shows the Rules page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md index 448fbb79b..261513b34 100644 --- a/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md +++ b/solutions/security/detect-and-alert/add-detection-alerts-to-cases.md @@ -30,7 +30,7 @@ From the Alerts table, you can attach one or more alerts to a [new case](/soluti :::{image} ../../../images/security-add-alert-to-case.gif :alt: add alert to case -:class: screenshot +:screenshot: ::: @@ -56,7 +56,7 @@ To add alerts to a new case: :::{image} ../../../images/security-add-alert-to-new-case.png :alt: add alert to new case -:class: screenshot +:screenshot: ::: @@ -78,5 +78,5 @@ To add alerts to an existing case: :::{image} ../../../images/security-add-alert-to-existing-case.png :alt: Select case dialog listing existing cases -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/add-manage-exceptions.md b/solutions/security/detect-and-alert/add-manage-exceptions.md index ef54e54b5..2f7762e1c 100644 --- a/solutions/security/detect-and-alert/add-manage-exceptions.md +++ b/solutions/security/detect-and-alert/add-manage-exceptions.md @@ -40,7 +40,7 @@ You can add exceptions to a rule from the rule details page, the Alerts table, t :::{image} ../../../images/security-rule-exception-tab.png :alt: Detail of rule exceptions tab - :class: screenshot + :screenshot: ::: * To add an exception from the Alerts table: @@ -115,7 +115,7 @@ You can add exceptions to a rule from the rule details page, the Alerts table, t :::{image} ../../../images/security-add-exception-ui.png :alt: add exception ui - :class: screenshot + :screenshot: ::: 4. Click **AND** or **OR** to create multiple conditions and define their relationships. @@ -188,7 +188,7 @@ Additionally, to add an Endpoint exception to an endpoint protection rule, there :::{image} ../../../images/security-endpoint-add-exp.png :alt: endpoint add exp - :class: screenshot + :screenshot: ::: 2. If required, modify the conditions. 
@@ -270,7 +270,7 @@ Creates an exception that excludes all LFC-signed trusted processes: :::{image} ../../../images/security-nested-exp.png :alt: nested exp -:class: screenshot +:screenshot: ::: @@ -285,7 +285,7 @@ To view a rule’s exceptions: :::{image} ../../../images/security-manage-default-rule-list.png :alt: A default rule list - :class: screenshot + :screenshot: ::: @@ -301,5 +301,5 @@ Changes that you make to the exception also apply to other rules that use the ex :::{image} ../../../images/security-exception-affects-multiple-rules.png :alt: Exception that affects multiple rules -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/create-detection-rule.md b/solutions/security/detect-and-alert/create-detection-rule.md index 8419a64d1..a37dee0ca 100644 --- a/solutions/security/detect-and-alert/create-detection-rule.md +++ b/solutions/security/detect-and-alert/create-detection-rule.md @@ -137,7 +137,7 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :::{image} ../../../images/security-rule-query-example.png :alt: Rule query example - :class: screenshot + :screenshot: ::: 3. You can use {{kib}} saved queries (![Saved query menu](../../../images/security-saved-query-menu.png "")) and queries from saved Timelines (**Import query from saved Timeline**) as rule conditions. @@ -242,7 +242,7 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :::{image} ../../../images/security-eql-rule-query-example.png :alt: eql rule query example - :class: screenshot + :screenshot: ::: ::::{note} @@ -317,7 +317,7 @@ To create or edit {{ml}} rules, you must have the [appropriate license](https:// :::{image} ../../../images/security-indicator-rule-example.png :alt: Indicator match rule settings - :class: screenshot + :screenshot: ::: ::::{tip} @@ -363,7 +363,7 @@ You uploaded a value list of known ransomware domains, and you want to be notifi :::{image} ../../../images/security-indicator_value_list.png :alt: indicator value list -:class: screenshot +:screenshot: ::: @@ -565,7 +565,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s :::{image} ../../../images/security-severity-mapping-ui.png :alt: severity mapping ui - :class: screenshot + :screenshot: ::: ::::{note} @@ -583,7 +583,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s :::{image} ../../../images/security-risk-source-field-ui.png :alt: risk source field ui - :class: screenshot + :screenshot: ::: ::::{note} @@ -651,7 +651,7 @@ When configuring an {{esql}} rule’s **[Custom highlighted fields](/solutions/s :::{image} ../../../images/security-schedule-rule.png :alt: schedule rule - :class: screenshot + :screenshot: ::: 3. Continue with [setting the rule’s schedule](/solutions/security/detect-and-alert/create-detection-rule.md#rule-schedule). @@ -716,7 +716,7 @@ To use {{kib}} actions for alert notifications, you need the [appropriate licens :::{image} ../../../images/security-selected-action-type.png :alt: selected action type - :class: screenshot + :screenshot: ::: 5. Use the default notification message or customize it. You can add more context to the message by clicking the icon above the message text box and selecting from a list of available [alert notification variables](/solutions/security/detect-and-alert/create-detection-rule.md#rule-action-variables). @@ -852,7 +852,7 @@ Click the **Rule preview** button while creating or editing a rule. 
The preview :::{image} ../../../images/security-preview-rule.png :alt: Rule preview -:class: screenshot +:screenshot: ::: The preview also includes the effects of rule exceptions and override fields. In the histogram, alerts are stacked by `event.category` (or `host.name` for machine learning rules), and alerts with multiple values are counted more than once. diff --git a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md index 35604bd8f..5b01fb3e9 100644 --- a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md +++ b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md @@ -17,7 +17,7 @@ Shared exception lists allow you to group exceptions together and then apply the :::{image} ../../../images/security-rule-exceptions-page.png :alt: Shared Exception Lists page -:class: screenshot +:screenshot: ::: @@ -107,7 +107,7 @@ Apply shared exception lists to rules: :::{image} ../../../images/security-associated-shared-exception-list.png :alt: Associated shared exceptions - :class: screenshot + :screenshot: ::: @@ -126,7 +126,7 @@ To view the details of an exception item within a shared exception list, expand :::{image} ../../../images/security-view-filter-shared-exception.png :alt: Associated shared exceptions -:class: screenshot +:screenshot: ::: To filter exception lists by a specific value, enter a value in the search bar. You can search the following attributes: @@ -158,5 +158,5 @@ To export or delete an exception list, select the required action button on the :::{image} ../../../images/security-actions-exception-list.png :alt: Detail of Exception lists table with export and delete buttons highlighted -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/create-manage-value-lists.md b/solutions/security/detect-and-alert/create-manage-value-lists.md index 647fd3cdf..e4384b0f9 100644 --- a/solutions/security/detect-and-alert/create-manage-value-lists.md +++ b/solutions/security/detect-and-alert/create-manage-value-lists.md @@ -46,7 +46,7 @@ To create a value list: :::{image} ../../../images/security-upload-lists-ui.png :alt: Manage value lists flyout - :class: screenshot + :screenshot: ::: 4. Select the list type (**Keywords**, **IP addresses**, **IP ranges**, or **Text**) from the **Type of value list** drop-down. @@ -80,7 +80,7 @@ You can edit, remove, or export existing value lists. :::{image} ../../../images/security-edit-value-lists.png :alt: Manage items in a value lists -:class: screenshot +:screenshot: ::: ::::{tip} @@ -98,5 +98,5 @@ You can also edit value lists while creating and managing exceptions that use va :::{image} ../../../images/security-manage-value-list.png :alt: Import value list flyout with action buttons highlighted - :class: screenshot + :screenshot: ::: diff --git a/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md b/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md index 118adcb57..854cf12ab 100644 --- a/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md +++ b/solutions/security/detect-and-alert/cross-cluster-search-detection-rules.md @@ -25,14 +25,14 @@ This section explains the general process for setting up cross-cluster search in :::{image} ../../../images/security-ccs-local-role.png :alt: Local cluster role configuration - :class: screenshot + :screenshot: ::: 2. 
**Remote cluster role**: Assign the `read` and `read_cross_cluster` privileges to the indices you want to search. You don’t need to include the remote cluster’s name here. :::{image} ../../../images/security-ccs-remote-role.png :alt: Remote cluster role configuration - :class: screenshot + :screenshot: ::: 3. On the local cluster: @@ -49,7 +49,7 @@ This section explains the general process for setting up cross-cluster search in :::{image} ../../../images/security-ccs-rule-source.png :alt: Rule source configuration - :class: screenshot + :screenshot: ::: ::::{note} diff --git a/solutions/security/detect-and-alert/detections-requirements.md b/solutions/security/detect-and-alert/detections-requirements.md index 95165600f..fa9026d32 100644 --- a/solutions/security/detect-and-alert/detections-requirements.md +++ b/solutions/security/detect-and-alert/detections-requirements.md @@ -50,7 +50,7 @@ After changing the `xpack.encryptedSavedObjects.encryptionKey` value and restart ## Enable and access detections [enable-detections-ui] -To use the Detections feature, it must be enabled, your role must have access to rules and alerts, and your {{kib}} space must have **Data View Management** [feature visibility](/deploy-manage/manage-spaces.md#spaces-control-feature-visibility). If your role doesn’t have the cluster and index privileges needed to enable this feature, you can request someone who has these privileges to visit your {{kib}} space, which will turn it on for you. +To use the Detections feature, it must be enabled, your role must have access to rules and alerts, and your {{kib}} space must have **Data View Management** [feature visibility](/deploy-manage/manage-spaces.md). If your role doesn’t have the cluster and index privileges needed to enable this feature, you can request someone who has these privileges to visit your {{kib}} space, which will turn it on for you. ::::{note} For instructions about using {{ml}} jobs and rules, refer to [Machine learning job and rule requirements](/solutions/security/advanced-entity-analytics/machine-learning-job-rule-requirements.md). @@ -77,7 +77,7 @@ Here is an example of a user who has the Detections feature enabled in all {{kib :::{image} ../../../images/security-sec-admin-user.png :alt: Shows user with the Detections feature enabled in all Kibana spaces -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md b/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md index 6582de178..92fba203a 100644 --- a/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md +++ b/solutions/security/detect-and-alert/install-manage-elastic-prebuilt-rules.md @@ -48,7 +48,7 @@ Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](s :::{image} ../../../images/security-prebuilt-rules-add-badge.png :alt: The Add Elastic Rules page - :class: screenshot + :screenshot: ::: 2. Click **Add Elastic rules**. @@ -70,7 +70,7 @@ Follow these guidelines to start using the {{security-app}}'s [prebuilt rules](s :::{image} ../../../images/security-prebuilt-rules-add.png :alt: The Add Elastic Rules page - :class: screenshot + :screenshot: ::: 4. 
For any rules you haven’t already enabled, go back to the **Rules** page, search or filter for the rules you want to run, and do either of the following: @@ -132,7 +132,7 @@ Elastic regularly updates prebuilt rules to optimize their performance and ensur :::{image} ../../../images/security-prebuilt-rules-update.png :alt: The Rule Updates tab on the Rules page - :class: screenshot + :screenshot: ::: 3. (Optional) To examine the details of a rule’s latest version before you update it, select the rule name. This opens the rule details flyout. @@ -143,7 +143,7 @@ Elastic regularly updates prebuilt rules to optimize their performance and ensur :::{image} ../../../images/security-prebuilt-rules-update-diff.png :alt: Prebuilt rule comparison - :class: screenshot + :screenshot: ::: 4. Do one of the following to update prebuilt rules on the **Rules** page: diff --git a/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md b/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md index 8ef0b7883..05a3da637 100644 --- a/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md +++ b/solutions/security/detect-and-alert/launch-timeline-from-investigation-guides.md @@ -28,21 +28,21 @@ Interactive investigation guides are compatible between {{stack}} versions 8.7.0 :::{image} ../../../images/security-ig-alert-flyout.png :alt: Alert details flyout with interactive investigation guide -:class: screenshot +:screenshot: ::: Under the Investigation section, click **Show investigation guide** to open the **Investigation** tab in the left panel of the alert details flyout. :::{image} ../../../images/security-ig-alert-flyout-invest-tab.png :alt: Alert details flyout with interactive investigation guide -:class: screenshot +:screenshot: ::: The **Investigation** tab displays query buttons, and each query button displays the number of event documents found. Click the query button to automatically load the query in Timeline, based on configuration settings in the investigation guide. :::{image} ../../../images/security-ig-timeline.png :alt: Timeline with query pre-loaded from investigation guide action -:class: screenshot +:screenshot: ::: @@ -59,14 +59,14 @@ You can configure an interactive investigation guide when you [create a new rule :::{image} ../../../images/security-ig-investigation-guide-editor.png :alt: Investigation guide editor field - :class: screenshot + :screenshot: ::: 2. Place the editor cursor where you want to add the query button in the investigation guide, then select the Investigate icon (![Investigate icon](../../../images/security-ig-investigate-icon.png "")) in the toolbar. The **Add investigation query** builder form appears. :::{image} ../../../images/security-ig-investigation-query-builder.png :alt: Add investigation guide UI - :class: screenshot + :screenshot: ::: 3. Complete the query builder form to create an investigation query: @@ -79,7 +79,7 @@ You can configure an interactive investigation guide when you [create a new rule :::{image} ../../../images/security-ig-filters-field-custom-value.png :alt: Add investigation guide UI - :class: screenshot + :screenshot: ::: 4. **Relative time range**: (Optional) Select a time range to limit the query, relative to the alert’s creation time. 
@@ -137,7 +137,7 @@ This example creates the following Timeline query, as illustrated below: :::{image} ../../../images/security-ig-timeline-query.png :alt: Timeline query -:class: screenshot +:screenshot: ::: @@ -147,5 +147,5 @@ When viewing an interactive investigation guide in contexts unconnected to a spe :::{image} ../../../images/security-ig-timeline-template-fields.png :alt: Timeline template -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/manage-detection-alerts.md b/solutions/security/detect-and-alert/manage-detection-alerts.md index 822b4f0be..f451410e4 100644 --- a/solutions/security/detect-and-alert/manage-detection-alerts.md +++ b/solutions/security/detect-and-alert/manage-detection-alerts.md @@ -37,7 +37,7 @@ The Alerts page displays all detection alerts. From the Alerts page, you can fil :::{image} ../../../images/security-alert-page.png :alt: Alerts page overview -:class: screenshot +:screenshot: ::: @@ -49,7 +49,7 @@ The Alerts page offers various ways for you to organize and triage detection ale :::{image} ../../../images/security-view-alert-details.png :alt: View details button - :class: screenshot + :screenshot: ::: * View the rule that created an alert. Click a name in the **Rule** column to open the rule’s details. @@ -62,7 +62,7 @@ The Alerts page offers various ways for you to organize and triage detection ale :::{image} ../../../images/security-inline-actions-menu.png :alt: Inline additional actions menu - :class: screenshot + :screenshot: ::: * Filter alert results to include building block alerts or to only show alerts from indicator match rules by selecting the **Additional filters** drop-down. By default, [building block alerts](/solutions/security/detect-and-alert/about-building-block-rules.md) are excluded from the Overview and Alerts pages. You can choose to include building block alerts on the Alerts page, which expands the number of alerts. @@ -74,7 +74,7 @@ The Alerts page offers various ways for you to organize and triage detection ale :::{image} ../../../images/security-additional-filters.png :alt: Alerts table with Additional filters menu highlighted - :class: screenshot + :screenshot: ::: * View detection alerts generated by a specific rule. Go to **Rules** → **Detection rules (SIEM)**, then select a rule name in the table. The rule details page displays a comprehensive view of the rule’s settings, and the Alerts table under the Trend histogram displays the alerts associated with the rule, including alerts from any previous or deleted revision of that rule. @@ -86,7 +86,7 @@ By default, the drop-down controls on the Alerts page filter alerts by **Status* :::{image} ../../../images/security-alert-page-dropdown-controls.png :alt: Alerts page with drop-down controls highlighted -:class: screenshot +:screenshot: ::: ::::{note} @@ -123,7 +123,7 @@ Select up to three fields for grouping alerts. The groups will nest in the order :::{image} ../../../images/security-group-alerts.png :alt: Alerts table with Group alerts by drop-down -:class: screenshot +:screenshot: ::: Each group displays information such as the alerts' severity and how many users, hosts, and alerts are in the group. The information displayed varies depending on the selected fields. 
@@ -135,7 +135,7 @@ To interact with grouped alerts: :::{image} ../../../images/security-group-alerts-expand.png :alt: Expanded alert group with alerts table - :class: screenshot + :screenshot: ::: @@ -152,7 +152,7 @@ Click the **Full screen** button in the upper-right to view the table in full-sc :::{image} ../../../images/security-alert-table-toolbar-buttons.png :alt: Alerts table with toolbar buttons highlighted -:class: screenshot +:screenshot: ::: Use the view options drop-down in the upper-right of the Alerts table to control how alerts are displayed: @@ -162,7 +162,7 @@ Use the view options drop-down in the upper-right of the Alerts table to control :::{image} ../../../images/security-event-rendered-view.png :alt: Alerts table with the Event rendered view enabled -:class: screenshot +:screenshot: ::: ::::{tip} @@ -201,7 +201,7 @@ To change an alert’s status, do one of the following: :::{image} ../../../images/security-alert-change-status.png :alt: Bulk action menu with multiple alerts selected - :class: screenshot + :screenshot: ::: * [beta] To bulk-change the status of [grouped alerts](/solutions/security/detect-and-alert/manage-detection-alerts.md#group-alerts), select the **Take actions** menu for the group, then select a status. @@ -231,7 +231,7 @@ To apply or remove alert tags on multiple alerts, select the alerts you want to :::{image} ../../../images/security-bulk-apply-alert-tag.png :alt: Bulk action menu with multiple alerts selected -:class: screenshot +:screenshot: ::: @@ -255,14 +255,14 @@ Show users that have been assigned to alerts by adding the **Assignees** column :::{image} ../../../images/security-alert-assigned-alerts.png :alt: Alert assignees in the Alerts table -:class: screenshot +:screenshot: ::: Assigned users are automatically displayed in the alert details flyout. Up to two assigned users can be shown in the flyout. If an alert is assigned to three or more users, a numbered badge displays instead. :::{image} ../../../images/security-alert-flyout-assignees.png :alt: Alert assignees in the alert details flyout -:class: screenshot +:screenshot: ::: @@ -272,7 +272,7 @@ Click the **Assignees** filter above the Alerts table, then select the users you :::{image} ../../../images/security-alert-filter-assigned-alerts.png :alt: Filtering assigned alerts -:class: screenshot +:screenshot: ::: @@ -291,7 +291,7 @@ For information about exceptions and how to use them, refer to [Add and manage e :::{image} ../../../images/security-timeline-button.png :alt: Investigate in timeline button - :class: screenshot + :screenshot: ::: * To view multiple alerts in Timeline (up to 2,000), select the checkboxes next to the alerts, then click **Selected *x* alerts** → **Investigate in timeline**. 
diff --git a/solutions/security/detect-and-alert/manage-detection-rules.md b/solutions/security/detect-and-alert/manage-detection-rules.md index 6753c5724..bfc8d04cf 100644 --- a/solutions/security/detect-and-alert/manage-detection-rules.md +++ b/solutions/security/detect-and-alert/manage-detection-rules.md @@ -35,7 +35,7 @@ The Rules page allows you to view and manage all prebuilt and custom detection r :::{image} ../../../images/security-all-rules.png :alt: The Rules page -:class: screenshot +:screenshot: ::: On the Rules page, you can: @@ -198,7 +198,7 @@ You can snooze rule notifications from the **Installed Rules** tab, the rule det :::{image} ../../../images/security-rule-snoozing.png :alt: Rules snooze options -:class: screenshot +:screenshot: ::: @@ -266,14 +266,14 @@ Additionally, the **Setup guide** section provides guidance on setting up the ru :::{image} ../../../images/security-rule-details-prerequisites.png :alt: Rule details page with Related integrations -:class: screenshot +:screenshot: ::: You can also check rules' related integrations in the **Installed Rules** and **Rule Monitoring** tables. Click the **integrations** badge to display the related integrations in a popup. :::{image} ../../../images/security-rules-table-related-integrations.png :alt: Rules table with related integrations popup -:class: screenshot +:screenshot: ::: ::::{tip} diff --git a/solutions/security/detect-and-alert/mitre-attandckr-coverage.md b/solutions/security/detect-and-alert/mitre-attandckr-coverage.md index 3e9e13857..f261e1dbb 100644 --- a/solutions/security/detect-and-alert/mitre-attandckr-coverage.md +++ b/solutions/security/detect-and-alert/mitre-attandckr-coverage.md @@ -29,7 +29,7 @@ You can map custom rules to tactics in **Advanced settings** when creating or ed :::{image} ../../../images/security-rules-coverage.png :alt: MITRE ATT&CK® coverage page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/monitor-rule-executions.md b/solutions/security/detect-and-alert/monitor-rule-executions.md index 77dfe0c3b..68c79de68 100644 --- a/solutions/security/detect-and-alert/monitor-rule-executions.md +++ b/solutions/security/detect-and-alert/monitor-rule-executions.md @@ -21,7 +21,7 @@ To view a summary of all rule executions, including the most recent failures and :::{image} ../../../images/security-monitor-table.png :alt: monitor table -:class: screenshot +:screenshot: ::: On the **Rule Monitoring** tab, you can [sort and filter rules](../detect-and-alert/manage-detection-rules.md#sort-filter-rules) just like you can on the **Installed Rules** tab. @@ -42,7 +42,7 @@ To access a rule’s execution log, click the rule’s name to open its details, :::{image} ../../../images/security-rule-execution-logs.png :alt: Execution log table on the rule execution results tab -:class: screenshot +:screenshot: ::: You can hover over each column heading to display a tooltip about that column’s data. Click a column heading to sort the table by that column. 
@@ -81,7 +81,7 @@ To stop an active run, go to the appropriate row and click **Stop run** in the * :::{image} ../../../images/security-manual-rule-run-table.png :alt: Manual rule runs table on the rule execution results tab -:class: screenshot +:screenshot: ::: The Manual runs table displays important details such as: diff --git a/solutions/security/detect-and-alert/rule-exceptions.md b/solutions/security/detect-and-alert/rule-exceptions.md index 027d126ff..3bfc92f3c 100644 --- a/solutions/security/detect-and-alert/rule-exceptions.md +++ b/solutions/security/detect-and-alert/rule-exceptions.md @@ -19,7 +19,7 @@ You can create exceptions that apply exclusively to a single rule. These types o :::{image} ../../../images/security-exception-item-example.png :alt: An exception item -:class: screenshot +:screenshot: ::: ::::{note} @@ -34,7 +34,7 @@ If you want an exception to apply to multiple rules, you can add an exception to :::{image} ../../../images/security-rule-exceptions-page.png :alt: Shared Exception Lists page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/suppress-detection-alerts.md b/solutions/security/detect-and-alert/suppress-detection-alerts.md index 6b0bbdcf4..4db14ff8a 100644 --- a/solutions/security/detect-and-alert/suppress-detection-alerts.md +++ b/solutions/security/detect-and-alert/suppress-detection-alerts.md @@ -106,21 +106,21 @@ After an alert is moved to the `Closed` status, it will no longer suppress new a :::{image} ../../../images/security-suppressed-alerts-table.png :alt: Suppressed alerts icon and tooltip in Alerts table - :class: screenshot + :screenshot: ::: * **Alerts** table — Column for suppressed alerts count. Select **Fields** to open the fields browser, then add `kibana.alert.suppression.docs_count` to the table. :::{image} ../../../images/security-suppressed-alerts-table-column.png :alt: Suppressed alerts count field column in Alerts table - :class: screenshot + :screenshot: ::: * Alert details flyout — **Insights** → **Correlations** section: :::{image} ../../../images/security-suppressed-alerts-details.png :alt: Suppressed alerts in the Correlations section within the alert details flyout - :class: screenshot + :screenshot: ::: @@ -133,7 +133,7 @@ With alert suppression, detection alerts aren’t created for the grouped source :::{image} ../../../images/security-timeline-button.png :alt: Investigate in timeline button - :class: screenshot + :screenshot: ::: * Alert details flyout — Select **Take action** → **Investigate in timeline**. diff --git a/solutions/security/detect-and-alert/tune-detection-rules.md b/solutions/security/detect-and-alert/tune-detection-rules.md index 55fd00d61..7ff21a33b 100644 --- a/solutions/security/detect-and-alert/tune-detection-rules.md +++ b/solutions/security/detect-and-alert/tune-detection-rules.md @@ -59,7 +59,7 @@ For example, to prevent the [Unusual Process Execution Path - Alternate Data Str :::{image} ../../../images/security-rule-details-page.png :alt: Rule details page - :class: screenshot + :screenshot: ::: 3. Select the **Rule exceptions** tab, then click **Add rule exception**. @@ -71,7 +71,7 @@ For example, to prevent the [Unusual Process Execution Path - Alternate Data Str :::{image} ../../../images/security-process-exception.png :alt: Add Rule Exception UI - :class: screenshot + :screenshot: ::: 5. Click **Add rule exception**. 
@@ -104,7 +104,7 @@ Another useful technique is to assign lower risk scores to rules triggered by au :::{image} ../../../images/security-process-specific-exception.png :alt: Example of `is not` exception in the Add Rule Exception UI - :class: screenshot + :screenshot: ::: 4. Click **Add rule exception**. diff --git a/solutions/security/detect-and-alert/view-detection-alert-details.md b/solutions/security/detect-and-alert/view-detection-alert-details.md index f8e785617..30cf36cf8 100644 --- a/solutions/security/detect-and-alert/view-detection-alert-details.md +++ b/solutions/security/detect-and-alert/view-detection-alert-details.md @@ -37,7 +37,7 @@ To learn more about an alert, click the **View details** button from the Alerts :::{image} ../../../images/security-open-alert-details-flyout.gif :alt: Expandable flyout -:class: screenshot +:screenshot: ::: Use the alert details flyout to begin an investigation, open a case, or plan a response. Click **Take action** at the bottom of the flyout to find more options for interacting with the alert. @@ -54,7 +54,7 @@ The right panel provides an overview of the alert. Expand any of the collapsed s :::{image} ../../../images/security-alert-details-flyout-right-panel.png :alt: Right panel of the alert details flyout -:class: screenshot +:screenshot: ::: From the right panel, you can also: @@ -90,7 +90,7 @@ Some areas in the flyout provide previews when you click on them. For example, c :::{image} ../../../images/security-alert-details-flyout-preview-panel.gif :alt: Preview panel of the alert details flyout -:class: screenshot +:screenshot: ::: @@ -102,14 +102,14 @@ The left panel provides an expanded view of what’s shown in the right panel. T :::{image} ../../../images/security-expand-details-button.png :alt: Expand details button at the top of the alert details flyout - :class: screenshot + :screenshot: ::: * Click one of the section titles on the **Overview** tab within the right panel. :::{image} ../../../images/security-alert-details-flyout-left-panel.png :alt: Left panel of the alert details flyout - :class: screenshot + :screenshot: ::: @@ -120,7 +120,7 @@ The About section is located on the **Overview** tab in the right panel. It prov :::{image} ../../../images/security-about-section-rp.png :alt: About section of the Overview tab -:class: screenshot +:screenshot: ::: The About section has the following information: @@ -141,7 +141,7 @@ The Investigation section is located on the **Overview** tab in the right panel. :::{image} ../../../images/security-investigation-section-rp.png :alt: Investigation section of the Overview tab -:class: screenshot +:screenshot: ::: The Investigation section provides the following information: @@ -161,7 +161,7 @@ The Visualizations section is located on the **Overview** tab in the right panel :::{image} ../../../images/security-visualizations-section-rp.png :alt: Visualizations section of the Overview tab -:class: screenshot +:screenshot: ::: Click **Visualizations** to display the following previews: @@ -187,14 +187,14 @@ The **Visualize** tab allows you to maintain the context of the Alerts table, wh :::{image} ../../../images/security-visualize-tab-lp.png :alt: Expanded view of visualization details -:class: screenshot +:screenshot: ::: As you examine the alert’s related processes, you can also preview the alerts and events which are associated with those processes. Then, if you want to learn more about a particular alert or event, you can click **Show full alert details** to open the full details flyout. 
:::{image} ../../../images/security-visualize-tab-lp-alert-details.gif :alt: Examine alert details from event analyzer -:class: screenshot +:screenshot: ::: @@ -204,7 +204,7 @@ The Insights section is located on the **Overview** tab in the right panel. It o :::{image} ../../../images/security-insights-section-rp.png :alt: Insights section of the Overview tab -:class: screenshot +:screenshot: ::: @@ -214,7 +214,7 @@ The Entities overview provides high-level details about the user and host that a :::{image} ../../../images/security-entities-overview.png :alt: Overview of the entity details section in the right panel -:class: screenshot +:screenshot: ::: @@ -224,7 +224,7 @@ From the right panel, click **Entities** to open a detailed view of the host and :::{image} ../../../images/security-expanded-entities-view.png :alt: Expanded view of entity details -:class: screenshot +:screenshot: ::: @@ -234,7 +234,7 @@ The Threat intelligence overview shows matched indicators, which provide threat :::{image} ../../../images/security-threat-intelligence-overview.png :alt: Overview of threat intelligence on the alert -:class: screenshot +:screenshot: ::: The Threat intelligence overview provides the following information: @@ -254,7 +254,7 @@ The expanded threat intelligence view queries indices specified in the `security :::{image} ../../../images/security-expanded-threat-intelligence-view.png :alt: Expanded view of threat intelligence on the alert -:class: screenshot +:screenshot: ::: The expanded Threat intelligence view shows individual indicators within the alert document. You can expand and collapse indicator details by clicking the arrow button at the end of the indicator label. Each indicator is labeled with values from the `matched.field` and `matched.atomic` fields and displays the threat intelligence provider. @@ -294,7 +294,7 @@ The Correlations overview shows how an alert is related to other alerts and offe :::{image} ../../../images/security-correlations-overview.png :alt: Overview of available correlation data -:class: screenshot +:screenshot: ::: The Correlations overview provides the following information: @@ -317,7 +317,7 @@ From the right panel, click **Correlations** to open the expanded Correlations v :::{image} ../../../images/security-expanded-correlations-view.png :alt: Expanded view of correlation data -:class: screenshot +:screenshot: ::: In the expanded view, corelation data is organized into several tables: @@ -350,7 +350,7 @@ Update the date time picker for the table to show data from a different time ran :::{image} ../../../images/security-expanded-prevalence-view.png :alt: Expanded view of prevalence data -:class: screenshot +:screenshot: ::: The expanded Prevalence view provides the following details: @@ -372,7 +372,7 @@ The **Response** section is located on the **Overview** tab in the right panel. 
:::{image} ../../../images/security-response-action-rp.png :alt: Response section of the Overview tab -:class: screenshot +:screenshot: ::: @@ -387,5 +387,5 @@ Go to the **Notes** [page](/solutions/security/investigate/notes.md#manage-notes :::{image} ../../../images/security-notes-tab-lp.png :alt: Notes tab in the left panel -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/detect-and-alert/visualize-detection-alerts.md b/solutions/security/detect-and-alert/visualize-detection-alerts.md index c41b99f14..7f4e09dee 100644 --- a/solutions/security/detect-and-alert/visualize-detection-alerts.md +++ b/solutions/security/detect-and-alert/visualize-detection-alerts.md @@ -17,7 +17,7 @@ Visualize and group detection alerts by specific parameters in the visualization :::{image} ../../../images/security-alert-page-visualizations.png :alt: Alerts page with visualizations section highlighted -:class: screenshot +:screenshot: ::: Use the left buttons to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the right menus to select the ECS fields to use for grouping: @@ -43,7 +43,7 @@ Click the collapse icon (![Collapse icon](../../../images/security-collapse-icon :::{image} ../../../images/security-alert-page-viz-collapsed.png :alt: Alerts page with visualizations section collapsed -:class: screenshot +:screenshot: ::: @@ -59,7 +59,7 @@ You can hover and click on elements within the summary — such as severity leve :::{image} ../../../images/security-alerts-viz-summary.png :alt: Summary visualization for alerts -:class: screenshot +:screenshot: ::: @@ -74,7 +74,7 @@ The **Group by top** menu is unavailable for the trend view. :::{image} ../../../images/security-alerts-viz-trend.png :alt: Trend visualization for alerts -:class: screenshot +:screenshot: ::: @@ -84,7 +84,7 @@ The counts view shows the count of alerts in each group. By default, it groups a :::{image} ../../../images/security-alerts-viz-counts.png :alt: Counts visualization for alerts -:class: screenshot +:screenshot: ::: @@ -94,7 +94,7 @@ The treemap view shows the distribution of alerts as nested, proportionally-size :::{image} ../../../images/security-alerts-viz-treemap.png :alt: Treemap visualization for alerts -:class: screenshot +:screenshot: ::: Larger tiles represent more frequent alerts, and each tile’s color is based on the alerts' risk score: @@ -115,6 +115,6 @@ You can click on the treemap to narrow down the alerts displayed in both the tre :::{image} ../../../images/security-treemap-click.gif :alt: Animation of clicking the treemap -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/endpoint-response-actions.md b/solutions/security/endpoint-response-actions.md index 2601f6e1a..d6062d130 100644 --- a/solutions/security/endpoint-response-actions.md +++ b/solutions/security/endpoint-response-actions.md @@ -25,7 +25,7 @@ Response actions are supported on all endpoint platforms (Linux, macOS, and Wind :::{image} ../../images/security-response-console.png :alt: Response console UI :width: 90% -:class: screenshot +:screenshot: ::: Launch the response console from any of the following places in {{elastic-sec}}: @@ -302,7 +302,7 @@ This panel displays only the response actions that you have the user role or pri :::{image} ../../images/security-response-console-help-panel.png :alt: Help panel :width: 65% -:class: screenshot +:screenshot: ::: You can use this panel to build commands with less typing. 
Click the add icon (![Add icon](../../images/security-add-command-icon.png "")) to add a command to the input area, enter any additional parameters or a comment, then press **Return** to run the command. @@ -312,7 +312,7 @@ If the endpoint is running an older version of {{agent}}, some response actions :::{image} ../../images/security-response-console-unsupported-command.png :alt: Unsupported response action with tooltip :width: 65% -:class: screenshot +:screenshot: ::: @@ -323,5 +323,5 @@ Click **Response actions history** to display a log of the response actions perf :::{image} ../../images/security-response-actions-history-console.png :alt: Response actions history with a few past actions :width: 85% -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/endpoint-response-actions/isolate-host.md b/solutions/security/endpoint-response-actions/isolate-host.md index 54aa36b55..609c1de3b 100644 --- a/solutions/security/endpoint-response-actions/isolate-host.md +++ b/solutions/security/endpoint-response-actions/isolate-host.md @@ -27,7 +27,7 @@ Isolated hosts, however, can still send data to {{elastic-sec}}. You can also cr :::{image} ../../../images/security-isolated-host.png :alt: Endpoint page highlighting a host that's been isolated -:class: screenshot +:screenshot: ::: You can isolate a host from a detection alert’s details flyout, from the Endpoints page, or from the endpoint response console. Once a host is successfully isolated, an `Isolated` status displays next to the `Agent status` field, which you can view on the alert details flyout or Endpoints list table. @@ -107,7 +107,7 @@ After the host is successfully isolated, an **Isolated** status is added to the :::{image} ../../../images/security-host-isolated-notif.png :alt: Host isolated notification message :width: 50% -:class: screenshot +:screenshot: ::: @@ -156,7 +156,7 @@ After the host is successfully released, the **Isolated** status is removed from :::{image} ../../../images/security-host-released-notif.png :alt: Host released notification message :width: 50% -:class: screenshot +:screenshot: ::: @@ -169,5 +169,5 @@ Go to the **Endpoints** page, click an endpoint’s name, then click the **Respo :::{image} ../../../images/security-response-actions-history-endpoint-details.png :alt: Response actions history page UI :width: 90% -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/endpoint-response-actions/response-actions-history.md b/solutions/security/endpoint-response-actions/response-actions-history.md index 4522f26c5..b48c3d4f1 100644 --- a/solutions/security/endpoint-response-actions/response-actions-history.md +++ b/solutions/security/endpoint-response-actions/response-actions-history.md @@ -23,7 +23,7 @@ All of these contexts contain the same information and features. 
The following i :::{image} ../../../images/security-response-actions-history-page.png :alt: Response actions history page UI -:class: screenshot +:screenshot: ::: To filter and expand the information in the response actions history: diff --git a/solutions/security/explore/hosts-page.md b/solutions/security/explore/hosts-page.md index 078b455ac..bcf2c0ea4 100644 --- a/solutions/security/explore/hosts-page.md +++ b/solutions/security/explore/hosts-page.md @@ -10,7 +10,7 @@ The Hosts page provides a comprehensive overview of all hosts and host-related s :::{image} ../../../images/security-hosts-ov-pg.png :alt: Hosts page -:class: screenshot +:screenshot: ::: The Hosts page has the following sections: @@ -41,7 +41,7 @@ The tables within the **Events** and **Sessions** tabs include inline actions an :::{image} ../../../images/security-events-table.png :alt: Events table -:class: screenshot +:screenshot: ::: @@ -58,7 +58,7 @@ The host details page includes the following sections: :::{image} ../../../images/security-hosts-detail-pg.png :alt: Host's details page -:class: screenshot +:screenshot: ::: @@ -82,7 +82,7 @@ The host details flyout includes the following sections: :::{image} ../../../images/security-host-details-flyout.png :alt: Host details flyout -:class: screenshot +:screenshot: ::: @@ -109,7 +109,7 @@ If more than 10 alerts contributed to the risk scoring calculation, the remainin :::{image} ../../../images/security-host-risk-inputs.png :alt: Host risk inputs -:class: screenshot +:screenshot: ::: @@ -119,7 +119,7 @@ The **Asset Criticality** section displays the selected host’s [asset critical :::{image} ../../../images/security-host-asset-criticality.png :alt: Asset criticality -:class: screenshot +:screenshot: ::: Click **Assign** to assign a criticality level to the selected host, or **Change** to change the currently assigned criticality level. 
@@ -140,5 +140,5 @@ This section displays details such as the host ID, when the host was first and l :::{image} ../../../images/security-host-observed-data.png :alt: Host observed data -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/explore/network-page.md b/solutions/security/explore/network-page.md index 892f2c1eb..ef8617ad3 100644 --- a/solutions/security/explore/network-page.md +++ b/solutions/security/explore/network-page.md @@ -10,7 +10,7 @@ The Network page provides key network activity metrics in an interactive map, an :::{image} ../../../images/security-network-ui.png :alt: network ui -:class: screenshot +:screenshot: ::: @@ -79,7 +79,7 @@ The IP’s details page includes the following sections: :::{image} ../../../images/security-IP-detail-pg.png :alt: IP details page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/explore/users-page.md b/solutions/security/explore/users-page.md index c910a4952..2149cecad 100644 --- a/solutions/security/explore/users-page.md +++ b/solutions/security/explore/users-page.md @@ -10,7 +10,7 @@ The Users page provides a comprehensive overview of user data to help you unders :::{image} ../../../images/security-users-page.png :alt: User's page -:class: screenshot +:screenshot: ::: The Users page has the following sections: @@ -52,7 +52,7 @@ The user details page includes the following sections: :::{image} ../../../images/security-user-details-pg.png :alt: User details page -:class: screenshot +:screenshot: ::: @@ -76,7 +76,7 @@ The user details flyout includes the following sections: :::{image} ../../../images/security-user-details-flyout.png :alt: User details flyout -:class: screenshot +:screenshot: ::: @@ -103,7 +103,7 @@ If more than 10 alerts contributed to the risk scoring calculation, the remainin :::{image} ../../../images/security-user-risk-inputs.png :alt: User risk inputs -:class: screenshot +:screenshot: ::: @@ -113,7 +113,7 @@ The **Asset Criticality** section displays the selected user’s [asset critical :::{image} ../../../images/security-user-asset-criticality.png :alt: Asset criticality -:class: screenshot +:screenshot: ::: Click **Assign** to assign a criticality level to the selected user, or **Change** to change the currently assigned criticality level. @@ -130,5 +130,5 @@ This section displays details such as the user ID, when the user was first and l :::{image} ../../../images/security-user-observed-data.png :alt: User observed data -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/get-started/agentless-integrations.md b/solutions/security/get-started/agentless-integrations.md index d807cbc55..0b24d8c4c 100644 --- a/solutions/security/get-started/agentless-integrations.md +++ b/solutions/security/get-started/agentless-integrations.md @@ -6,10 +6,6 @@ mapped_urls: # Agentless integrations [agentless-integrations] -::::{warning} -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. -:::: - Agentless integrations provide a means to ingest data while avoiding the orchestration, management, and maintenance needs associated with standard ingest infrastructure. Using agentless integrations makes manual agent deployment unnecessary, allowing you to focus on your data instead of the agent that collects it. 
diff --git a/solutions/security/get-started/configure-advanced-settings.md b/solutions/security/get-started/configure-advanced-settings.md index 4b936fda4..82b8e1d44 100644 --- a/solutions/security/get-started/configure-advanced-settings.md +++ b/solutions/security/get-started/configure-advanced-settings.md @@ -38,7 +38,7 @@ To access advanced settings, go to **Stack Management** → **Advanced Settings* :::{image} ../../../images/security-solution-advanced-settings.png :alt: solution advanced settings -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/get-started/create-runtime-fields-in-elastic-security.md b/solutions/security/get-started/create-runtime-fields-in-elastic-security.md index 77e6b6fd3..a5d4eeb48 100644 --- a/solutions/security/get-started/create-runtime-fields-in-elastic-security.md +++ b/solutions/security/get-started/create-runtime-fields-in-elastic-security.md @@ -24,14 +24,14 @@ To create a runtime field: :::{image} ../../../images/security-fields-browser.png :alt: Fields browser - :class: screenshot + :screenshot: ::: * In Timeline, go to the bottom of the sidebar, then click **Add a field**. The **Create field** flyout opens. :::{image} ../../../images/security-create-runtime-fields-timeline.png :alt: Create runtime fields button in Timeline - :class: screenshot + :screenshot: ::: 3. Enter a **Name** for the new field. diff --git a/solutions/security/get-started/data-views-elastic-security.md b/solutions/security/get-started/data-views-elastic-security.md index bbe402275..092e3a541 100644 --- a/solutions/security/get-started/data-views-elastic-security.md +++ b/solutions/security/get-started/data-views-elastic-security.md @@ -50,7 +50,7 @@ The default {{data-source}} is defined by the `securitySolution:defaultIndex` se The first time a user visits {{elastic-sec}} within a given {{kib}} [space](/deploy-manage/manage-spaces.md), the default {{data-source}} generates in that space and becomes active. ::::{note} -In {{stack}}, your {{kib}} space must have the **Data View Management** [feature visibility](/deploy-manage/manage-spaces.md#spaces-control-feature-visibility) setting enabled for the default {{data-source}} to generate and become active in your space. +In {{stack}}, your {{kib}} space must have the **Data View Management** [feature visibility](/deploy-manage/manage-spaces.md) setting enabled for the default {{data-source}} to generate and become active in your space. :::: diff --git a/solutions/security/get-started/elastic-security-ui.md b/solutions/security/get-started/elastic-security-ui.md index 00b311f66..b95896af9 100644 --- a/solutions/security/get-started/elastic-security-ui.md +++ b/solutions/security/get-started/elastic-security-ui.md @@ -16,7 +16,7 @@ Filter for alerts, events, processes, and other important security data by enter :::{image} ../../../images/security-search-bar.png :alt: search bar -:class: screenshot +:screenshot: ::: * To refine your search results, select **Add Filter** (![Add filter icon](../../../images/security-add-filter-icon.png "")), then enter the field, operator (such as `is not` or `is between`), and value for your filter. 
@@ -39,7 +39,7 @@ Many {{elastic-sec}} histograms, graphs, and tables display an **Inspect** butto :::{image} ../../../images/security-inspect-icon-context.png :alt: Inspect icon :width: 400px -:class: screenshot +:screenshot: ::: Other visualizations display an options menu (![Three-dot menu icon](../../../images/security-three-dot-icon.png "")), which allows you to inspect the visualization’s queries, add it to a new or existing case, or open it in Lens for customization. @@ -47,7 +47,7 @@ Other visualizations display an options menu (![Three-dot menu icon](../../../im :::{image} ../../../images/security-viz-options-menu-open.png :alt: Options menu opened :width: 500px -:class: screenshot +:screenshot: ::: @@ -58,7 +58,7 @@ Throughout the {{security-app}}, you can hover over many data fields and values :::{image} ../../../images/security-inline-actions-menu.png :alt: Inline additional actions menu :width: 500px -:class: screenshot +:screenshot: ::: In some visualizations, these actions are available in the legend by clicking a value’s options icon (![Vertical three-dot icon](../../../images/security-three-dot-icon-vertical.png "")). @@ -66,7 +66,7 @@ In some visualizations, these actions are available in the legend by clicking a :::{image} ../../../images/security-inline-actions-legend.png :alt: Actions in a visualization legend :width: 650px -:class: screenshot +:screenshot: ::: Inline actions include the following (some actions are unavailable in some contexts): @@ -226,7 +226,7 @@ Use your keyboard to interact with draggable elements in the Elastic Security UI :::{image} ../../../images/security-timeline-accessiblity-keyboard-focus.gif :alt: timeline accessiblity keyboard focus :width: 650px -:class: screenshot +:screenshot: ::: * Press `Enter` on an element with keyboard focus to display its menu and press `Tab` to apply focus sequentially to menu options. The `f`, `o`, `a`, `t`, `c` hotkeys are automatically enabled during this process and offer an alternative way to interact with menu options. @@ -234,21 +234,21 @@ Use your keyboard to interact with draggable elements in the Elastic Security UI :::{image} ../../../images/security-timeline-accessiblity-keyboard-focus-hotkeys.gif :alt: timeline accessiblity keyboard focus hotkeys :width: 500px -:class: screenshot +:screenshot: ::: * Press the spacebar once to begin dragging an element to a different location and press it a second time to drop it. Use the directional arrows to move the element around the UI. :::{image} ../../../images/security-timeline-ui-accessiblity-drag-n-drop.gif :alt: timeline ui accessiblity drag n drop -:class: screenshot +:screenshot: ::: * If an event has an event renderer, press the `Shift` key and the down directional arrow to apply keyboard focus to the event renderer and `Tab` or `Shift` + `Tab` to navigate between fields. To return to the cells in the current row, press the up directional arrow. To move to the next row, press the down directional arrow. :::{image} ../../../images/security-timeline-accessiblity-event-renderers.gif :alt: timeline accessiblity event renderers -:class: screenshot +:screenshot: ::: @@ -261,7 +261,7 @@ Use your keyboard to navigate through rows, columns, and menu options in the Ela :::{image} ../../../images/security-timeline-accessiblity-directional-arrows.gif :alt: timeline accessiblity directional arrows :width: 500px -:class: screenshot +:screenshot: ::: * Press the `Tab` key to navigate through a table cell with multiple elements, such as buttons, field names, and menus. 
Pressing the `Tab` key will sequentially apply keyboard focus to each element in the table cell. @@ -269,19 +269,19 @@ Use your keyboard to navigate through rows, columns, and menu options in the Ela :::{image} ../../../images/security-timeline-accessiblity-tab-keys.gif :alt: timeline accessiblity tab keys :width: 400px -:class: screenshot +:screenshot: ::: * Use `CTRL + Home` to shift keyboard focus to the first cell in a row. Likewise, use `CTRL + End` to move keyboard focus to the last cell in the row. :::{image} ../../../images/security-timeline-accessiblity-shifting-keyboard-focus.gif :alt: timeline accessiblity shifting keyboard focus -:class: screenshot +:screenshot: ::: * Use the `Page Up` and `Page Down` keys to scroll through the page. :::{image} ../../../images/security-timeline-accessiblity-page-up-n-down.gif :alt: timeline accessiblity page up n down -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/get-started/enable-threat-intelligence-integrations.md b/solutions/security/get-started/enable-threat-intelligence-integrations.md index 7232ec6c9..d6208b832 100644 --- a/solutions/security/get-started/enable-threat-intelligence-integrations.md +++ b/solutions/security/get-started/enable-threat-intelligence-integrations.md @@ -20,7 +20,7 @@ You can connect to threat intelligence sources using an [{{agent}} integration]( :::{image} ../../../images/getting-started-threat-intelligence-view.png :alt: The Threat Intelligence view on the Overview dashboard -:class: screenshot +:screenshot: ::: There are a few scenarios when data won’t display in the Threat Intelligence view: diff --git a/solutions/security/get-started/ingest-data-to-elastic-security.md b/solutions/security/get-started/ingest-data-to-elastic-security.md index 04d209282..5e6d10dd3 100644 --- a/solutions/security/get-started/ingest-data-to-elastic-security.md +++ b/solutions/security/get-started/ingest-data-to-elastic-security.md @@ -56,7 +56,7 @@ On the Integrations page, you can select the **Beats only** filter to only view :::{image} ../../../images/security-add-integrations.png :alt: Shows button to add integrations -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/add-osquery-response-actions.md b/solutions/security/investigate/add-osquery-response-actions.md index 79ae2875d..1b0ef4803 100644 --- a/solutions/security/investigate/add-osquery-response-actions.md +++ b/solutions/security/investigate/add-osquery-response-actions.md @@ -25,7 +25,7 @@ Osquery Response Actions allow you to add live queries to custom query rules so :::{image} ../../../images/security-available-response-actions-osquery.png :alt: The Osquery response action -:class: screenshot +:screenshot: ::: @@ -64,7 +64,7 @@ You can add Osquery Response Actions to new or existing custom query rules. Quer :::{image} ../../../images/security-setup-single-query.png :alt: Shows how to set up a single query - :class: screenshot + :screenshot: ::: 3. Click the **Osquery** icon to add more live queries (optional). 
@@ -96,5 +96,5 @@ Refer to [Examine Osquery results](/solutions/security/investigate/examine-osque :::{image} ../../../images/security-osquery-results-tab.png :alt: Shows how to set up a single query -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/cases.md b/solutions/security/investigate/cases.md index 0ec50601d..2bd5f85fc 100644 --- a/solutions/security/investigate/cases.md +++ b/solutions/security/investigate/cases.md @@ -19,7 +19,7 @@ You can also send cases to these external systems by [configuring external conne :::{image} ../../../images/security-cases-home-page.png :alt: Case UI Home -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/solutions/security/investigate/configure-case-settings.md b/solutions/security/investigate/configure-case-settings.md index 1708e1a85..2c1c819ed 100644 --- a/solutions/security/investigate/configure-case-settings.md +++ b/solutions/security/investigate/configure-case-settings.md @@ -11,7 +11,7 @@ First, find **Cases** in the navigation menu or search for `Security/Cases` by u :::{image} ../../../images/security-cases-settings.png :alt: Shows the case settings page -:class: screenshot +:screenshot: ::: ::::{note} @@ -89,7 +89,7 @@ You can add optional and required fields for customized case collaboration. :::{image} ../../../images/security-cases-add-custom-field.png :alt: Add a custom field in case settings - :class: screenshot + :screenshot: ::: 2. You must provide a field label and type (text or toggle). You can optionally designate it as a required field and provide a default value. @@ -114,7 +114,7 @@ To create a template: :::{image} ../../../images/security-cases-add-template.png :alt: Add a template in case settings - :class: screenshot + :screenshot: ::: 2. You must provide a template name and case severity. You can optionally add template tags and a description, values for each case field, and a case connector. @@ -154,5 +154,5 @@ Deleting a custom observable type deletes all instances of it. :::{image} ../../../images/security-cases-observable-types.png :alt: Add an observable type in case settings -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/examine-osquery-results.md b/solutions/security/investigate/examine-osquery-results.md index ddebf9cbe..6e4420ac3 100644 --- a/solutions/security/investigate/examine-osquery-results.md +++ b/solutions/security/investigate/examine-osquery-results.md @@ -20,7 +20,7 @@ Results for single queries appear on the **Results** tab. When you run a query, :::{image} ../../../images/security-single-query-results.png :alt: Shows query results -:class: screenshot +:screenshot: ::: @@ -30,7 +30,7 @@ Results for each query in the pack appear in the **Results** tab. 
Click the expa :::{image} ../../../images/security-pack-query-results.png :alt: Shows query results -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/indicators-of-compromise.md b/solutions/security/investigate/indicators-of-compromise.md index 8a91f083a..d2eaa39ef 100644 --- a/solutions/security/investigate/indicators-of-compromise.md +++ b/solutions/security/investigate/indicators-of-compromise.md @@ -22,7 +22,7 @@ The Indicators page collects data from enabled threat intelligence feeds and pro :::{image} ../../../images/security-indicators-table.png :alt: Shows the Indicators page -:class: screenshot +:screenshot: ::: @@ -54,7 +54,7 @@ After you add indicators to the Indicators page, you can [examine](#examine-indi :::{image} ../../../images/security-interact-with-indicators-table.gif :alt: interact with indicators table -:class: screenshot +:screenshot: ::: @@ -73,7 +73,7 @@ Learn more about an indicator by clicking **View details**, then opening the Ind :::{image} ../../../images/security-indicator-details-flyout.png :alt: Shows the Indicator details flyout - :class: screenshot + :screenshot: ::: @@ -84,7 +84,7 @@ Investigate an indicator in [Timeline](/solutions/security/investigate/timeline. :::{image} ../../../images/security-indicator-query-timeline.png :alt: Shows the results of an indicator being investigated in Timeline -:class: screenshot +:screenshot: ::: When you add an indicator to Timeline, a new Timeline opens with an auto-generated KQL query. The query contains the indicator field-value pair that you selected plus the field-value pair of the automatically mapped source event. By default, the query’s time range is set to seven days before and after the indicator’s `timestamp`. @@ -98,7 +98,7 @@ The following image shows a file hash indictor being investigated in Timeline. T :::{image} ../../../images/security-indicator-in-timeline.png :alt: Shows the results of an indicator being investigated in Timeline -:class: screenshot +:screenshot: ::: The auto-generated query contains the indicator field-value pair (mentioned previously) and the auto-mapped source event field-value pair, which is: @@ -125,7 +125,7 @@ To add indicators to cases: :::{image} ../../../images/security-indicator-added-to-case.png :alt: An indicator attached to a case -:class: screenshot +:screenshot: ::: @@ -154,7 +154,7 @@ To remove an indicator attached to a case, click the **More actions** (**…​ :::{image} ../../../images/security-remove-indicator.png :alt: Removing an indicator from a case -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/notes.md b/solutions/security/investigate/notes.md index db4341b84..c442cbc2b 100644 --- a/solutions/security/investigate/notes.md +++ b/solutions/security/investigate/notes.md @@ -22,7 +22,7 @@ After notes are created, the **Add note** icon displays a notification dot. In t :::{image} ../../../images/security-new-note-alert-event.png :alt: New note added to an alert -:class: screenshot +:screenshot: ::: @@ -39,7 +39,7 @@ After notes are created, the **Notes** Timeline tab displays the total number of :::{image} ../../../images/security-new-note-timeline-tab.png :alt: New note added to a Timeline -:class: screenshot +:screenshot: ::: @@ -56,5 +56,5 @@ Use the **Notes** page to view and interact with all existing notes. 
To access t :::{image} ../../../images/security-notes-management-page.png :alt: Notes management page -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/open-manage-cases.md b/solutions/security/investigate/open-manage-cases.md index 4ba72c0f4..52504ef72 100644 --- a/solutions/security/investigate/open-manage-cases.md +++ b/solutions/security/investigate/open-manage-cases.md @@ -39,7 +39,7 @@ Open a new case to keep track of security issues and share their details with co :::{image} ../../../images/security-cases-ui-open.png :alt: Shows an open case -:class: screenshot +:screenshot: ::: % This wasn't in the Serverless docs. Might be an ESS-only feature. @@ -75,7 +75,7 @@ From the Cases page, you can search existing cases and filter them by attributes :::{image} ../../../images/security-cases-home-page.png :alt: Case UI Home -:class: screenshot +:screenshot: ::: To explore a case, click on its name. You can then: @@ -112,7 +112,7 @@ Click on an existing case to access its summary. The case summary, located under :::{image} ../../../images/security-cases-summary.png :alt: Shows you a summary of the case -:class: screenshot +:screenshot: ::: @@ -122,7 +122,7 @@ To edit, delete, or quote a comment, select the appropriate option from the **Mo :::{image} ../../../images/security-cases-manage-comments.png :alt: Shows you a summary of the case -:class: screenshot +:screenshot: ::: @@ -132,7 +132,7 @@ To explore the alerts attached to a case, click the **Alerts** tab. In the table :::{image} ../../../images/security-cases-alert-tab.png :alt: Shows you the Alerts tab -:class: screenshot +:screenshot: ::: ::::{note} @@ -147,7 +147,7 @@ To upload files to a case, click the **Files** tab: :::{image} ../../../images/security-cases-files.png :alt: A list of files attached to a case -:class: screenshot +:screenshot: ::: You can set file types and sizes by configuring your [{{kib}} case settings](kibana://reference/configuration-reference/cases-settings.md). @@ -175,7 +175,7 @@ Add a Lens visualization to your case to portray event and alert data through ch :::{image} ../../../images/security-add-vis-to-case.gif :alt: Shows how to add a visualization to a case -:class: screenshot +:screenshot: ::: To add a Lens visualization to a comment within your case: @@ -202,7 +202,7 @@ After a visualization has been added to a case, you can modify or interact with :::{image} ../../../images/security-cases-open-vis.png :alt: Shows where the Open Visualization option is -:class: screenshot +:screenshot: ::: @@ -241,7 +241,7 @@ Go to the **Similar cases** tab to access other cases with the same observables. 
:::{image} ../../../images/security-cases-add-observables.png :alt: Shows you where to add observables -:class: screenshot +:screenshot: ::: @@ -251,7 +251,7 @@ Each case has a universally unique identifier (UUID) that you can copy and share :::{image} ../../../images/security-cases-copy-case-id.png :alt: Copy Case ID option in More actions menu 30% -:class: screenshot +:screenshot: ::: @@ -298,7 +298,7 @@ To export a case: :::{image} ../../../images/security-cases-export-button.png :alt: Shows the export saved objects workflow -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/osquery.md b/solutions/security/investigate/osquery.md index af7ce7872..7bae16f14 100644 --- a/solutions/security/investigate/osquery.md +++ b/solutions/security/investigate/osquery.md @@ -57,7 +57,7 @@ To inspect hosts, run a query against one or more agents or policies, then view :::{image} ../../../images/kibana-enter-query.png :alt: Select saved query dropdown name showing query name and description - :class: screenshot + :screenshot: ::: 6. Click **Submit**. @@ -85,7 +85,7 @@ The **Live queries history** section on the **Live queries** tab shows a log of :::{image} ../../../images/kibana-live-query-check-results.png :alt: Results of OSquery - :class: screenshot + :screenshot: ::: @@ -140,7 +140,7 @@ You can run packs as live queries or schedule packs to run for one or more agent :::{image} ../../../images/kibana-scheduled-pack.png :alt: Shows queries in the pack and details about each query - :class: screenshot + :screenshot: ::: 3. View scheduled query results in [**Discover**](../../../explore-analyze/discover.md) or the drag-and-drop [**Lens**](../../../explore-analyze/visualize/lens.md) editor. diff --git a/solutions/security/investigate/run-osquery-from-alerts.md b/solutions/security/investigate/run-osquery-from-alerts.md index 8ef70809f..c25172fd2 100644 --- a/solutions/security/investigate/run-osquery-from-alerts.md +++ b/solutions/security/investigate/run-osquery-from-alerts.md @@ -52,7 +52,7 @@ To run Osquery from an alert: :::{image} ../../../images/security-setup-query.png :alt: Shows how to set up a single query - :class: screenshot + :screenshot: ::: 5. Click **Submit**. Query results will display within the flyout. diff --git a/solutions/security/investigate/run-osquery-from-investigation-guides.md b/solutions/security/investigate/run-osquery-from-investigation-guides.md index 154fb1557..a0afb1d90 100644 --- a/solutions/security/investigate/run-osquery-from-investigation-guides.md +++ b/solutions/security/investigate/run-osquery-from-investigation-guides.md @@ -26,7 +26,7 @@ Detection rule investigation guides suggest steps for triaging, analyzing, and r :::{image} ../../../images/security-osquery-investigation-guide.png :alt: Shows a live query in an investigation guide -:class: screenshot +:screenshot: ::: @@ -58,7 +58,7 @@ You can only add Osquery to investigation guides for custom rules because prebui :::{image} ../../../images/security-setup-osquery-investigation-guide.png :alt: Shows results from running a query from an investigation guide - :class: screenshot + :screenshot: ::: 5. Click **Save changes** to add the query to the rule’s investigation guide. 
@@ -89,5 +89,5 @@ You can only add Osquery to investigation guides for custom rules because prebui :::{image} ../../../images/security-run-query-investigation-guide.png :alt: Shows results from running a query from an investigation guide - :class: screenshot + :screenshot: ::: diff --git a/solutions/security/investigate/session-view.md b/solutions/security/investigate/session-view.md index b58a33631..e2ee26323 100644 --- a/solutions/security/investigate/session-view.md +++ b/solutions/security/investigate/session-view.md @@ -47,7 +47,7 @@ Session View is accessible from the **Hosts**, **Alerts**, and **Timelines** pag :::{image} ../../../images/security-session-view-action-icon-detail.png :alt: Detail of the Open Session View button - :class: screenshot + :screenshot: ::: * On the Hosts page (**Explore** → **Hosts**), select the **Sessions** or the **Events** tab. From either of these tabs, click the **Open Session View** button for an event or session. @@ -59,7 +59,7 @@ The Session View UI has the following features: :::{image} ../../../images/security-session-view-terminal-labeled.png :alt: Detail of Session view with labeled UI elements -:class: screenshot +:screenshot: ::: 1. The **Close Session** and **Full screen** buttons. @@ -78,21 +78,21 @@ Session View includes additional badges not pictured above: :::{image} ../../../images/security-session-view-alert-types-badge.png :alt: The alert badge for a command with all three alert types - :class: screenshot + :screenshot: ::: * The **Exec user change** badge highlights exec user changes, such as when a user escalates to root: :::{image} ../../../images/security-session-view-exec-user-change-badge.png :alt: The Exec user change badge - :class: screenshot + :screenshot: ::: * The **Output** badge appears next to commands that generated terminal output. Click it to view that command’s output in terminal output view. :::{image} ../../../images/security-session-view-output-badge.png :alt: The Output badge - :class: screenshot + :screenshot: ::: @@ -123,7 +123,7 @@ You can configure several additional settings by clicking **Advanced settings** :::{image} ../../../images/security-session-view-output-viewer.png :alt: Terminal output view -:class: screenshot +:screenshot: ::: 1. Search bar. Use to find and highlight search terms within the current session. The left and right arrows allow you to navigate through search results. diff --git a/solutions/security/investigate/timeline-templates.md b/solutions/security/investigate/timeline-templates.md index f8bac1846..de51ecdcf 100644 --- a/solutions/security/investigate/timeline-templates.md +++ b/solutions/security/investigate/timeline-templates.md @@ -48,14 +48,14 @@ Regular Timeline filter :::{image} ../../../images/security-template-filter-value.png :alt: Timeline template filter value - :class: screenshot + :screenshot: ::: Template filter :::{image} ../../../images/security-timeline-template-filter.png :alt: timeline template filter -:class: screenshot +:screenshot: ::: @@ -63,7 +63,7 @@ When you [convert a template to a Timeline](/solutions/security/investigate/time :::{image} ../../../images/security-invalid-filter.png :alt: Invalid events filter -:class: screenshot +:screenshot: ::: To enable the filter, either specify a value or change it to a field’s existing filter (refer to [Edit existing filters](/solutions/security/investigate/timeline.md#pivot)). 
@@ -84,7 +84,7 @@ To enable the filter, either specify a value or change it to a field’s existin :::{image} ../../../images/security-create-a-timeline-template-field.png :alt: Shows an example of a Timeline template - :class: screenshot + :screenshot: ::: ::::{tip} @@ -102,7 +102,7 @@ To create a template for process-related alerts on a specific host: :::{image} ../../../images/security-template-query-example.png :alt: template query example -:class: screenshot +:screenshot: ::: When alerts generated by rules associated with this template are investigated in Timeline, the host name is `Linux_stafordshire-061`, whereas the process name value is retrieved from the alert’s `process.name` field. @@ -116,7 +116,7 @@ You can view, duplicate, export, delete, and create templates from existing Time :::{image} ../../../images/security-all-actions-timeline-ui.png :alt: All actions Timeline UI - :class: screenshot + :screenshot: ::: 2. Click the **All actions** icon in the relevant row, and then select the action: diff --git a/solutions/security/investigate/timeline.md b/solutions/security/investigate/timeline.md index 4ed9cafae..59950dd6c 100644 --- a/solutions/security/investigate/timeline.md +++ b/solutions/security/investigate/timeline.md @@ -12,7 +12,7 @@ You can drag or send fields of interest to a Timeline to create the desired quer :::{image} ../../../images/security-timeline-ui-updated.png :alt: example Timeline with several events -:class: screenshot +:screenshot: ::: In addition to Timelines, you can create and attach Timeline templates to [detection rules](/solutions/security/detect-and-alert.md). Timeline templates allow you to define the source event fields used when you investigate alerts in Timeline. You can select whether the fields use predefined values or values retrieved from the alert. For more information, refer to [Timeline templates](/solutions/security/investigate/timeline-templates.md). @@ -56,7 +56,7 @@ Many types of events automatically appear in preconfigured views that provide re :::{image} ../../../images/security-timeline-ui-renderer.png :alt: example timeline with the event renderer highlighted -:class: screenshot +:screenshot: ::: The example above displays the Flow event renderer, which highlights the movement of data between its source and destination. If you see a particular part of the rendered event that interests you, you can drag it up to the drop zone below the query bar for further investigation. 
@@ -81,7 +81,7 @@ To add a field from the sidebar, hover over it, and click the **Add field as a c :::{image} ../../../images/security-timeline-sidebar.png :alt: Shows the sidebar that allows you to configure the columns that display in Timeline -:class: screenshot +:screenshot: ::: @@ -103,7 +103,7 @@ Click a filter to access additional operations such as **Add filter**, **Clear a :::{image} ../../../images/security-timeline-ui-filter-options.png :alt: timeline ui filter options -:class: screenshot +:screenshot: ::: Here are examples of various types of filters: @@ -113,7 +113,7 @@ Field with value :::{image} ../../../images/security-timeline-filter-value.png :alt: timeline filter value - :class: screenshot + :screenshot: ::: @@ -122,7 +122,7 @@ Field exists :::{image} ../../../images/security-timeline-field-exists.png :alt: timeline field exists - :class: screenshot + :screenshot: ::: @@ -131,7 +131,7 @@ Exclude results :::{image} ../../../images/security-timeline-filter-exclude.png :alt: timeline filter exclude - :class: screenshot + :screenshot: ::: @@ -140,7 +140,7 @@ Temporarily disable :::{image} ../../../images/security-timeline-disable-filter.png :alt: timeline disable filter - :class: screenshot + :screenshot: ::: @@ -210,7 +210,7 @@ The following image shows what matched ordered events look like in the Timeline :::{image} ../../../images/security-correlation-tab-eql-query.png :alt: a Timeline's correlation tab -:class: screenshot +:screenshot: ::: From the **Correlation** tab, you can also do the following: @@ -262,7 +262,7 @@ You can use {{esql}} in Timeline by opening the **{{esql}}** tab. From there, yo :::{image} ../../../images/security-esql-tab.png :alt: Example of the ES|QL tab in Timeline -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/investigate/visual-event-analyzer.md b/solutions/security/investigate/visual-event-analyzer.md index 3ad8f02f6..ccc04db81 100644 --- a/solutions/security/investigate/visual-event-analyzer.md +++ b/solutions/security/investigate/visual-event-analyzer.md @@ -47,7 +47,7 @@ To find events that can be visually analyzed: :::{image} ../../../images/security-analyze-event-button.png :alt: analyze event button - :class: screenshot + :screenshot: ::: ::::{note} @@ -57,7 +57,7 @@ To find events that can be visually analyzed: :::{image} ../../../images/security-analyze-event-timeline.png :alt: analyze event timeline - :class: screenshot + :screenshot: ::: @@ -79,42 +79,42 @@ To understand what fields were used to create the process, select the **Process :::{image} ../../../images/security-process-schema.png :alt: process schema -:class: screenshot +:screenshot: ::: Click the **Legend** to show the state of each process node. :::{image} ../../../images/security-node-legend.png :alt: node legend -:class: screenshot +:screenshot: ::: Use the date and time filter to analyze the event within a specific time range. By default, the selected time range matches that of the table from which you opened the alert. :::{image} ../../../images/security-date-range-selection.png :alt: date range selection -:class: screenshot +:screenshot: ::: Select a different data view to further filter the alert’s related events. :::{image} ../../../images/security-data-view-selection.png :alt: data view selection -:class: screenshot +:screenshot: ::: To expand the analyzer to a full screen, select the **Full Screen** icon above the left panel. 
:::{image} ../../../images/security-full-screen-analyzer.png :alt: full screen analyzer -:class: screenshot +:screenshot: ::: The left panel contains a list of all processes related to the event, starting with the event chain’s first process. **Analyzed Events** — the event you selected to analyze from the events list or Timeline — are highlighted with a light blue outline around the cube. :::{image} ../../../images/security-process-list.png :alt: process list -:class: screenshot +:screenshot: ::: In the graphical view, you can: @@ -127,7 +127,7 @@ In the graphical view, you can: :::{image} ../../../images/security-graphical-view.png :alt: graphical view -:class: screenshot +:screenshot: ::: @@ -145,7 +145,7 @@ To learn more about each related process, select the process in the left panel o :::{image} ../../../images/security-process-details.png :alt: process details -:class: screenshot +:screenshot: ::: When you first select a process, it appears in a loading state. If loading data for a given process fails, click **Reload `{{process-name}}`** beneath the process to reload the data. @@ -156,14 +156,14 @@ Events are categorized based on the `event.category` value. :::{image} ../../../images/security-event-type.png :alt: event type -:class: screenshot +:screenshot: ::: When you select an `event.category` pill, all the events within that category are listed in the left panel. To display more details about a specific event, select it from the list. :::{image} ../../../images/security-event-details.png :alt: event details -:class: screenshot +:screenshot: ::: ::::{note} @@ -178,5 +178,5 @@ In the example screenshot below, five alerts were generated by the analyzed even :::{image} ../../../images/security-alert-pill.png :alt: alert pill -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/manage-elastic-defend/blocklist.md b/solutions/security/manage-elastic-defend/blocklist.md index 3e97c6559..fded9ae73 100644 --- a/solutions/security/manage-elastic-defend/blocklist.md +++ b/solutions/security/manage-elastic-defend/blocklist.md @@ -69,7 +69,7 @@ The **Blocklist** page displays all the blocklist entries that have been added t :::{image} ../../../images/security-blocklist.png :alt: blocklist -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/manage-elastic-defend/endpoints.md b/solutions/security/manage-elastic-defend/endpoints.md index 4ae23cb96..66e710a40 100644 --- a/solutions/security/manage-elastic-defend/endpoints.md +++ b/solutions/security/manage-elastic-defend/endpoints.md @@ -21,7 +21,7 @@ The **Endpoints** list displays all hosts running {{elastic-defend}} and their r :::{image} ../../../images/security-endpoints-pg.png :alt: Endpoints page -:class: screenshot +:screenshot: ::: The Endpoints list provides the following data: @@ -62,7 +62,7 @@ Click any link in the **Endpoint** column to display host details in a flyout. 
Y :::{image} ../../../images/security-host-flyout.png :alt: Endpoint details flyout -:class: screenshot +:screenshot: ::: @@ -72,7 +72,7 @@ The endpoint details flyout also includes the **Response actions history** tab, :::{image} ../../../images/security-response-actions-history-endpoint-details.png :alt: Response actions history with a few past actions -:class: screenshot +:screenshot: ::: @@ -89,7 +89,7 @@ Users must have permission to read/write to {{fleet}} APIs to make changes to th :::{image} ../../../images/security-integration-pg.png :alt: Integration page -:class: screenshot +:screenshot: ::: Users who have unique configuration and security requirements can select **Show advanced settings** to configure the policy to support advanced use cases. Hover over each setting to view its description. @@ -101,7 +101,7 @@ Advanced settings are not recommended for most users. :::{image} ../../../images/security-integration-advanced-settings.png :alt: Integration page -:class: screenshot +:screenshot: ::: @@ -128,7 +128,7 @@ If you need help troubleshooting a configuration failure, refer to [](/troublesh :::{image} ../../../images/security-config-status.png :alt: Config status details -:class: screenshot +:screenshot: ::: @@ -138,7 +138,7 @@ To filter the Endpoints list, use the search bar to enter a query using [{{kib}} :::{image} ../../../images/security-filter-endpoints.png :alt: filter endpoints -:class: screenshot +:screenshot: ::: ::::{note} diff --git a/solutions/security/manage-elastic-defend/event-filters.md b/solutions/security/manage-elastic-defend/event-filters.md index 0c502a910..1a6f59104 100644 --- a/solutions/security/manage-elastic-defend/event-filters.md +++ b/solutions/security/manage-elastic-defend/event-filters.md @@ -41,7 +41,7 @@ Create event filters from the **Hosts** page or the **Event filters** page. :::{image} ../../../images/security-event-filter.png :alt: Add event filter flyout - :class: screenshot + :screenshot: ::: 2. Fill in these fields in the **Details** section: @@ -94,7 +94,7 @@ The **Event filters** page displays all the event filters that have been added t :::{image} ../../../images/security-event-filters-list.png :alt: event filters list -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/manage-elastic-defend/host-isolation-exceptions.md b/solutions/security/manage-elastic-defend/host-isolation-exceptions.md index 938eeefab..6357f42eb 100644 --- a/solutions/security/manage-elastic-defend/host-isolation-exceptions.md +++ b/solutions/security/manage-elastic-defend/host-isolation-exceptions.md @@ -51,7 +51,7 @@ The **Host isolation exceptions** page displays all the host isolation exception :::{image} ../../../images/security-host-isolation-exceptions-ui.png :alt: List of host isolation exceptions -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/manage-elastic-defend/policies.md b/solutions/security/manage-elastic-defend/policies.md index 6475b3200..3dc4cbb88 100644 --- a/solutions/security/manage-elastic-defend/policies.md +++ b/solutions/security/manage-elastic-defend/policies.md @@ -17,5 +17,5 @@ Click on an integration policy’s name to configure its settings. 
For more info :::{image} ../../../images/security-policy-list.png :alt: policy list -:class: screenshot +:screenshot: ::: diff --git a/solutions/security/manage-elastic-defend/trusted-applications.md b/solutions/security/manage-elastic-defend/trusted-applications.md index f604ab731..a23b1f598 100644 --- a/solutions/security/manage-elastic-defend/trusted-applications.md +++ b/solutions/security/manage-elastic-defend/trusted-applications.md @@ -74,7 +74,7 @@ The **Trusted applications** page displays all the trusted applications that hav :::{image} ../../../images/security-trusted-apps-list.png :alt: trusted apps list -:class: screenshot +:screenshot: ::: diff --git a/troubleshoot/deployments/esf/elastic-serverless-forwarder.md b/troubleshoot/deployments/esf/elastic-serverless-forwarder.md index f905b53f3..6e4339d20 100644 --- a/troubleshoot/deployments/esf/elastic-serverless-forwarder.md +++ b/troubleshoot/deployments/esf/elastic-serverless-forwarder.md @@ -25,7 +25,7 @@ For example, if you don’t increase the visibility timeout for an SQS queue as ## Prevent unexpected costs [preventing-unexpected-costs] -It is important to monitor the Elastic Serverless Forwarder Lambda function for timeouts to prevent unexpected costs. You can use the [AWS Lambda integration](https://docs.elastic.co/en/integrations/aws/lambda) for this. If the timeouts are constant, you should throttle the Lambda function to stop its execution before proceeding with any troubleshooting steps. In most cases, constant timeouts will cause the records and messages from the event triggers to go back to their sources and trigger the function again, which will cause further timeouts and force a loop that will incure unexpected high costs. For more information on throttling Lambda functions, refer to [AWS docs](https://docs.aws.amazon.com/lambda/latest/operatorguide/throttling.md). +It is important to monitor the Elastic Serverless Forwarder Lambda function for timeouts to prevent unexpected costs. You can use the [AWS Lambda integration](https://docs.elastic.co/en/integrations/aws/lambda) for this. If the timeouts are constant, you should throttle the Lambda function to stop its execution before proceeding with any troubleshooting steps. In most cases, constant timeouts will cause the records and messages from the event triggers to go back to their sources and trigger the function again, which will cause further timeouts and force a loop that will incur unexpected high costs. For more information on throttling Lambda functions, refer to [AWS docs](https://docs.aws.amazon.com/lambda/latest/operatorguide/throttling.html). ## Increase debug information [_increase_debug_information] diff --git a/troubleshoot/elasticsearch/add-tier.md b/troubleshoot/elasticsearch/add-tier.md index faa2e0c86..ef518f8fe 100644 --- a/troubleshoot/elasticsearch/add-tier.md +++ b/troubleshoot/elasticsearch/add-tier.md @@ -32,7 +32,7 @@ In order to get the shards assigned we need enable a new tier in the deployment. :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Determine which tier an index expects for assignment. 
[Retrieve](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) the configured value for the `index.routing.allocation.include._tier_preference` setting: diff --git a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md index be7e1b6d7..9756748a1 100644 --- a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md +++ b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md @@ -32,7 +32,7 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Inspect the `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): diff --git a/troubleshoot/elasticsearch/allow-all-index-allocation.md b/troubleshoot/elasticsearch/allow-all-index-allocation.md index 59bcdf043..adcf4a8f5 100644 --- a/troubleshoot/elasticsearch/allow-all-index-allocation.md +++ b/troubleshoot/elasticsearch/allow-all-index-allocation.md @@ -33,7 +33,7 @@ In order to get the shards assigned we’ll need to change the value of the [con :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Inspect the `index.routing.allocation.enable` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: diff --git a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md index 0d30caa5d..25382bf1c 100644 --- a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md +++ b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md @@ -36,7 +36,7 @@ Reducing the replicas of an index can potentially reduce search throughput and d :::{image} ../../images/elasticsearch-reference-reduce_replicas.png :alt: Reducing replicas - :class: screenshot + :screenshot: ::: 6. Continue this process until the cluster is healthy again. diff --git a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md index ed3ed03d9..4db72a6ae 100644 --- a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md +++ b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md @@ -30,7 +30,7 @@ In order to diagnose the unassigned shards, follow the next steps: :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. View the unassigned shards using the [cat shards API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards). diff --git a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md index 5d71a0abc..bc2bf3e0d 100644 --- a/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md +++ b/troubleshoot/elasticsearch/diagnosing-corrupted-repositories.md @@ -28,7 +28,7 @@ First mark the repository as read-only on the secondary deployments: :::{image} ../../images/elasticsearch-reference-repositories.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. The repositories table should now be visible. Click on the pencil icon at the right side of the repository to be marked as read-only. 
On the Edit page that opened scroll down and check "Read-only repository". Click "Save". Alternatively if deleting the repository altogether is preferable, select the checkbox at the left of the repository name in the repositories table and click the "Remove repository" red button at the top left of the table. @@ -41,7 +41,7 @@ Note that we’re now configuring the primary (current) deployment. :::{image} ../../images/elasticsearch-reference-repositories.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 2. Click on the pencil icon at the right side of the repository. On the Edit page that opened scroll down and click "Save", without making any changes to the existing settings. diff --git a/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md b/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md index ded112925..5d3be9b6a 100644 --- a/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md +++ b/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md @@ -5,7 +5,7 @@ mapped_pages: # Typed keys serialization [serialize-without-typed-keys] -{{es}} search requests accept a `typed_key` parameter that allow returning type information along with the name in aggregation and suggestion results (see the [aggregations documentation](https://www.elastic.co/guide/en/elasticsearch/reference/master/search-aggregations.html#return-agg-type) for additional details). +{{es}} search requests accept a `typed_key` parameter that allows returning type information along with the name in aggregation and suggestion results (see the [aggregations documentation](/explore-analyze/query-filter/aggregations.md#return-agg-type) for additional details). The Java API Client always adds this parameter to search requests, as type information is needed to know the concrete class that should be used to deserialize aggregation and suggestion results. diff --git a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md index 9272f5a90..c3e620c9f 100644 --- a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md +++ b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md @@ -19,7 +19,7 @@ mapped_pages: :::{image} ../../images/elasticsearch-reference-increase-disk-capacity-master-node.png :alt: Increase disk capacity of master nodes - :class: screenshot + :screenshot: ::: 4. Choose a larger than the pre-selected capacity configuration from the drop-down menu and click `save`. Wait for the plan to be applied and the problem should be resolved. diff --git a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md index 65bd7baa9..8cfd8d38b 100644 --- a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md +++ b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md @@ -19,7 +19,7 @@ mapped_pages: :::{image} ../../images/elasticsearch-reference-increase-disk-capacity-other-node.png :alt: Increase disk capacity of other nodes - :class: screenshot + :screenshot: ::: 4. Choose a larger than the pre-selected capacity configuration from the drop-down menu and click `save`. Wait for the plan to be applied and the problem should be resolved. 
diff --git a/troubleshoot/elasticsearch/increase-capacity-data-node.md b/troubleshoot/elasticsearch/increase-capacity-data-node.md index 90b628dca..c7b24c34b 100644 --- a/troubleshoot/elasticsearch/increase-capacity-data-node.md +++ b/troubleshoot/elasticsearch/increase-capacity-data-node.md @@ -17,28 +17,28 @@ In order to increase the disk capacity of the data nodes in your cluster: :::{image} ../../images/elasticsearch-reference-autoscaling_banner.png :alt: Autoscaling banner - :class: screenshot + :screenshot: ::: Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page. :::{image} ../../images/elasticsearch-reference-enable_autoscaling.png :alt: Enabling autoscaling - :class: screenshot + :screenshot: ::: 4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, please check if autoscaling has reached its limits. You will be notified about this by the following banner: :::{image} ../../images/elasticsearch-reference-autoscaling_limits_banner.png :alt: Autoscaling banner - :class: screenshot + :screenshot: ::: or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below: :::{image} ../../images/elasticsearch-reference-reached_autoscaling_limits.png :alt: Autoscaling limits reached - :class: screenshot + :screenshot: ::: If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page. diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md index 027e95642..4bc69f32e 100644 --- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md @@ -32,7 +32,7 @@ In order to get the shards assigned we’ll need to increase the number of shard :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index 271d67a37..20e8f68c5 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -32,7 +32,7 @@ In order to get the shards assigned we’ll need to increase the number of shard :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md index 05369882e..f142c4c52 100644 --- a/troubleshoot/elasticsearch/increase-tier-capacity.md +++ b/troubleshoot/elasticsearch/increase-tier-capacity.md @@ -30,7 +30,7 @@ One way to get the replica shards assigned is to add an availability zone. 
This :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: @@ -59,7 +59,7 @@ Now that you know the tier, you want to increase the number of nodes in that tie :::{image} ../../images/elasticsearch-reference-ess-advanced-config-data-tiers.png :alt: {{kib}} Console -:class: screenshot +:screenshot: ::: * Option 1: Increase the size per zone diff --git a/troubleshoot/elasticsearch/repeated-snapshot-failures.md b/troubleshoot/elasticsearch/repeated-snapshot-failures.md index 022bb615b..28709bacc 100644 --- a/troubleshoot/elasticsearch/repeated-snapshot-failures.md +++ b/troubleshoot/elasticsearch/repeated-snapshot-failures.md @@ -30,7 +30,7 @@ In order to check the status of failing {{slm}} policies we need to go to Kibana :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. [Retrieve](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-get-lifecycle) the {{slm}} policy: diff --git a/troubleshoot/elasticsearch/restore-from-snapshot.md b/troubleshoot/elasticsearch/restore-from-snapshot.md index 170f67afc..5b211ea1b 100644 --- a/troubleshoot/elasticsearch/restore-from-snapshot.md +++ b/troubleshoot/elasticsearch/restore-from-snapshot.md @@ -30,7 +30,7 @@ In order to restore the indices and data streams that are missing data: :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. To view the affected indices using the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). diff --git a/troubleshoot/elasticsearch/security/trb-security-kerberos.md b/troubleshoot/elasticsearch/security/trb-security-kerberos.md index 4ace23459..c74f1477b 100644 --- a/troubleshoot/elasticsearch/security/trb-security-kerberos.md +++ b/troubleshoot/elasticsearch/security/trb-security-kerberos.md @@ -36,7 +36,7 @@ Make sure that: * You have installed curl version 7.49 or above as older versions of curl have known Kerberos bugs. * The curl installed on your machine has `GSS-API`, `Kerberos` and `SPNEGO` features listed when you invoke command `curl -V`. If not, you will need to compile `curl` version with this support. -To download latest curl version visit [https://curl.haxx.se/download.html](https://curl.haxx.se/download.md) +To download latest curl version visit [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html) As Kerberos logs are often cryptic in nature and many things can go wrong as it depends on external services like DNS and NTP. You might have to enable additional debug logs to determine the root cause of the issue. diff --git a/troubleshoot/elasticsearch/security/trb-security-maccurl.md b/troubleshoot/elasticsearch/security/trb-security-maccurl.md index 05b815db4..aa58551d4 100644 --- a/troubleshoot/elasticsearch/security/trb-security-maccurl.md +++ b/troubleshoot/elasticsearch/security/trb-security-maccurl.md @@ -12,7 +12,7 @@ mapped_pages: **Resolution:** -Apple’s integration of `curl` with their keychain technology disables the `--cacert` option. See [http://curl.haxx.se/mail/archive-2013-10/0036.html](http://curl.haxx.se/mail/archive-2013-10/0036.md) for more information. +Apple’s integration of `curl` with their keychain technology disables the `--cacert` option. See [http://curl.haxx.se/mail/archive-2013-10/0036.html](http://curl.haxx.se/mail/archive-2013-10/0036.html) for more information. 
You can use another tool, such as `wget`, to test certificates. Alternately, you can add the certificate for the signing certificate authority MacOS system keychain, using a procedure similar to the one detailed at the [Apple knowledge base](http://support.apple.com/kb/PH14003). Be sure to add the signing CA’s certificate and not the server’s certificate. diff --git a/troubleshoot/elasticsearch/start-ilm.md b/troubleshoot/elasticsearch/start-ilm.md index 642dbac5d..f288ac500 100644 --- a/troubleshoot/elasticsearch/start-ilm.md +++ b/troubleshoot/elasticsearch/start-ilm.md @@ -41,7 +41,7 @@ In order to start {{ilm}} we need to go to Kibana and execute the [start command :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. [Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) {{ilm}}: @@ -129,7 +129,7 @@ In order to start {{slm}} we need to go to Kibana and execute the [start command :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. [Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-start) {{slm}}: diff --git a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md index 8c339e399..be66f2fa6 100644 --- a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md +++ b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md @@ -32,7 +32,7 @@ In order to get the shards assigned we need to call the [migrate to data tiers r :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. First, let’s [stop](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop) {{ilm}} diff --git a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md index 739b64469..617a8220b 100644 --- a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md +++ b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md @@ -33,7 +33,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp :::{image} ../../images/elasticsearch-reference-kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Check the current status of the cluster according the shards capacity indicator: @@ -243,7 +243,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp :::{image} ../../images/kibana-console.png :alt: {{kib}} Console - :class: screenshot + :screenshot: ::: 4. Check the current status of the cluster according the shards capacity indicator: diff --git a/troubleshoot/ingest/fleet/common-problems.md b/troubleshoot/ingest/fleet/common-problems.md index 108d78758..f5cbfe8af 100644 --- a/troubleshoot/ingest/fleet/common-problems.md +++ b/troubleshoot/ingest/fleet/common-problems.md @@ -373,14 +373,14 @@ If you want to omit the raw events from the diagnostic, add the flag `--exclude- :::{image} ../../../images/fleet-collect-agent-diagnostics1.png :alt: Collect agent diagnostics under agent details - :class: screenshot + :screenshot: ::: 4. In the **Request Diagnostics** pop-up, select **Collect additional CPU metrics** if you’d like detailed CPU data. 
:::{image} ../../../images/fleet-collect-agent-diagnostics2.png :alt: Collect agent diagnostics confirmation pop-up - :class: screenshot + :screenshot: ::: 5. Click the **Request diagnostics** button. diff --git a/troubleshoot/kibana/access.md b/troubleshoot/kibana/access.md index 703a9b2fd..6210addb8 100644 --- a/troubleshoot/kibana/access.md +++ b/troubleshoot/kibana/access.md @@ -45,7 +45,7 @@ To view the {{kib}} status page, use the status endpoint. For example, `localhos :::{image} ../../images/kibana-kibana-status-page-7_14_0.png :alt: Kibana server status page -:class: screenshot +:screenshot: ::: For JSON-formatted server status details, use the `localhost:5601/api/status` API endpoint. diff --git a/troubleshoot/kibana/alerts.md b/troubleshoot/kibana/alerts.md index 06af2f2ad..25ed4bb59 100644 --- a/troubleshoot/kibana/alerts.md +++ b/troubleshoot/kibana/alerts.md @@ -34,7 +34,7 @@ The following debugging tools are available: :::{image} ../../images/kibana-rule-details-alerts-inactive.png :alt: Alerting management details -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ When creating or editing an index threshold rule, you see a graph of the data th :::{image} ../../images/kibana-index-threshold-chart.png :alt: Index Threshold chart -:class: screenshot +:screenshot: ::: The end date is related to the check interval for the rule. You can use this view to see if the rule is getting the data you expect, and visually compare to the threshold value (a horizontal line in the graph). If the graph does not contain any lines except for the threshold line, then the rule has an issue, for example, no data is available given the specified index and fields or there is a permission error. Diagnosing these may be difficult - but there may be log messages for error conditions. @@ -83,7 +83,7 @@ The **{{stack-manage-app}}** > **{{rules-ui}}** page contains an error banner th :::{image} ../../images/kibana-rules-management-health.png :alt: Rule management page with the errors banner -:class: screenshot +:screenshot: ::: diff --git a/troubleshoot/kibana/maps.md b/troubleshoot/kibana/maps.md index 451e33598..61ff3f66e 100644 --- a/troubleshoot/kibana/maps.md +++ b/troubleshoot/kibana/maps.md @@ -18,12 +18,12 @@ Maps uses the [{{es}} vector tile search API](https://www.elastic.co/docs/api/do :::{image} ../../images/kibana-vector_tile_inspector.png :alt: vector tile inspector -:class: screenshot +:screenshot: ::: :::{image} ../../images/kibana-requests_inspector.png :alt: requests inspector -:class: screenshot +:screenshot: ::: diff --git a/troubleshoot/observability/amazon-data-firehose.md b/troubleshoot/observability/amazon-data-firehose.md index 6ccc62d68..33331def5 100644 --- a/troubleshoot/observability/amazon-data-firehose.md +++ b/troubleshoot/observability/amazon-data-firehose.md @@ -16,7 +16,7 @@ The backup settings in the delivery stream specify how failed delivery requests ## Scaling [aws-firehose-troubleshooting-scaling] -Firehose can [automatically scale](https://docs.aws.amazon.com/firehose/latest/dev/limits.md) to handle very high throughput. If your Elastic deployment is not properly configured for the data volume coming from Firehose, it could cause a bottleneck, which may lead to increased ingest times or indexing failures. +Firehose can [automatically scale](https://docs.aws.amazon.com/firehose/latest/dev/limits.html) to handle very high throughput. 
If your Elastic deployment is not properly configured for the data volume coming from Firehose, it could cause a bottleneck, which may lead to increased ingest times or indexing failures. There are several facets to optimizing the underlying Elasticsearch performance, but Elastic Cloud provides several ready-to-use hardware profiles which can provide a good starting point. Other factors which can impact performance are [shard sizing](../../deploy-manage/production-guidance/optimize-performance/size-shards.md), [indexing configuration](../../deploy-manage/production-guidance/optimize-performance/indexing-speed.md), and [index lifecycle management (ILM)](../../manage-data/lifecycle/index-lifecycle-management.md). diff --git a/troubleshoot/observability/apm-agent-python/apm-python-agent.md b/troubleshoot/observability/apm-agent-python/apm-python-agent.md index 1400b9fa8..b0f174407 100644 --- a/troubleshoot/observability/apm-agent-python/apm-python-agent.md +++ b/troubleshoot/observability/apm-agent-python/apm-python-agent.md @@ -14,12 +14,12 @@ Below are some resources and tips for troubleshooting and debugging the python a * [Disable the Agent](#disable-agent) -## Easy Fixes [easy-fixes] +## Easy Fixes [easy-fixes] Before you try anything else, go through the following sections to ensure that the agent is configured correctly. This is not an exhaustive list, but rather a list of common problems that users run into. -### Debug Mode [debug-mode] +### Debug Mode [debug-mode] Most frameworks support a debug mode. Generally, this mode is intended for non-production environments and provides detailed error messages and logging of potentially sensitive data. Because of these security issues, the agent will not collect traces if the app is in debug mode by default. @@ -34,7 +34,7 @@ apm = ElasticAPM(app, service_name="flask-app") ``` -### `psutil` for Metrics [psutil-metrics] +### `psutil` for Metrics [psutil-metrics] To get CPU and system metrics on non-Linux systems, `psutil` must be installed. The agent should automatically show a warning on start if it is not installed, but sometimes this warning can be suppressed. Install `psutil` and metrics should be collected by the agent and sent to the APM Server. @@ -43,19 +43,19 @@ python3 -m pip install psutil ``` -### Credential issues [apm-server-credentials] +### Credential issues [apm-server-credentials] In order for the agent to send data to the APM Server, it may need an [`API_KEY`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-api-key) or a [`SECRET_TOKEN`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-secret-token). Double check your APM Server settings and make sure that your credentials are configured correctly. Additionally, check that [`SERVER_URL`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-server-url) is correct. -## Django `check` and `test` [django-test] +## Django `check` and `test` [django-test] When used with Django, the agent provides two management commands to help debug common issues. Head over to the [Django troubleshooting section](asciidocalypse://docs/apm-agent-python/docs/reference/django-support.md#django-troubleshooting) for more information. -## Agent logging [agent-logging] +## Agent logging [agent-logging] -To get the agent to log more data, all that is needed is a [Handler](https://docs.python.org/3/library/logging.md#handler-objects) which is attached either to the `elasticapm` logger or to the root logger. 
+To get the agent to log more data, all that is needed is a [Handler](https://docs.python.org/3/library/logging.html#handler-objects) which is attached either to the `elasticapm` logger or to the root logger. Note that if you attach the handler to the root logger, you also need to explicitly set the log level of the `elasticapm` logger: @@ -66,7 +66,7 @@ apm_logger.setLevel(logging.DEBUG) ``` -### Django [django-agent-logging] +### Django [django-agent-logging] The simplest way to log more data from the agent is to add a console logging Handler to the `elasticapm` logger. Here’s a (very simplified) example: @@ -88,14 +88,14 @@ LOGGING = { ``` -### Flask [flask-agent-logging] +### Flask [flask-agent-logging] Flask [recommends using `dictConfig()`](https://flask.palletsprojects.com/en/1.1.x/logging/) to set up logging. If you’re using this format, adding logging for the agent will be very similar to the [instructions for Django above](#django-agent-logging). Otherwise, you can use the [generic instructions below](#generic-agent-logging). -### Generic instructions [generic-agent-logging] +### Generic instructions [generic-agent-logging] Creating a console Handler and adding it to the `elasticapm` logger is easy: @@ -119,10 +119,10 @@ console_handler.setLevel(logging.DEBUG) logger.addHandler(console_handler) ``` -See the [python logging docs](https://docs.python.org/3/library/logging.md) for more details about Handlers (and information on how to format your logs using Formatters). +See the [python logging docs](https://docs.python.org/3/library/logging.html) for more details about Handlers (and information on how to format your logs using Formatters). -## Disable the Agent [disable-agent] +## Disable the Agent [disable-agent] In the unlikely event the agent causes disruptions to a production application, you can disable the agent while you troubleshoot. diff --git a/troubleshoot/observability/explore-data.md b/troubleshoot/observability/explore-data.md index 98a6b513c..10b5a8ea8 100644 --- a/troubleshoot/observability/explore-data.md +++ b/troubleshoot/observability/explore-data.md @@ -11,7 +11,7 @@ Based on your synthetic monitoring, user experience, and mobile experience data, :::{image} ../../images/observability-exploratory-view.png :alt: Explore {{data-source}} for Monitor duration -:class: screenshot +:screenshot: ::: @@ -71,7 +71,7 @@ Based on the Uptime data you are sending to your deployment, you can create vari :::{image} ../../images/observability-exploratory-view-uptime.png :alt: Explore data for Uptime -:class: screenshot +:screenshot: ::: | | | @@ -86,7 +86,7 @@ Based on the {{user-experience}} data from your instrumented applications, you c :::{image} ../../images/observability-exploratory-view-ux-page-load-time.png :alt: Explore data for {{user-experience}} (page load time) -:class: screenshot +:screenshot: ::: | | | diff --git a/troubleshoot/observability/inspect.md b/troubleshoot/observability/inspect.md index 988ff0a5b..e606c0cae 100644 --- a/troubleshoot/observability/inspect.md +++ b/troubleshoot/observability/inspect.md @@ -9,7 +9,7 @@ The **Inspect** view in {{kib}} allows you to view information about all request :::{image} ../../images/observability-inspect-flyout.png :alt: Inspector flyout in the {{uptime-app}} -:class: screenshot +:screenshot: ::: Many requests go into building visualizations in {{kib}}. 
For example, to render visualizations in the {{uptime-app}}, {{kib}} needs to request a list of all your monitors, data about the availability of each monitor over time, and more. If something goes wrong, the Inspect view can help you report an issue and troubleshoot with Elastic support. @@ -32,7 +32,7 @@ To enable inspect across apps: :::{image} ../../images/observability-inspect-enable.png :alt: {{kib}} Advanced Settings {{observability}} section with Inspect ES queries enabled -:class: screenshot +:screenshot: ::: @@ -44,7 +44,7 @@ Click the **Request** dropdown to see all the requests used to make the current :::{image} ../../images/observability-inspect-flyout-dropdown.png :alt: Inspector flyout dropdown for selecting a request to inspect -:class: screenshot +:screenshot: ::: Toggle between the **Statistics**, **Request**, and **Response** tabs to see details for a single request. @@ -74,20 +74,20 @@ Request timestamp :::{image} ../../images/observability-inspect-flyout-statistics.png :alt: Inspector flyout Statistics tab -:class: screenshot +:screenshot: ::: The **Request** tab shows the exact syntax used in the request. You can click **Copy to clipboard** to copy the request or **Open in Console** to open it in the [{{kib}} console](../../explore-analyze/query-filter/tools/console.md). :::{image} ../../images/observability-inspect-flyout-request.png :alt: Inspector flyout Request tab with exact syntax -:class: screenshot +:screenshot: ::: The **Response** tab shows the exact response used in the visualizations on the page. You can click **Copy to clipboard** to copy the response. :::{image} ../../images/observability-inspect-flyout-response.png :alt: Inspector flyout Response tab with exact response -:class: screenshot +:screenshot: ::: diff --git a/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md b/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md index 593fdde85..deae1f8bf 100644 --- a/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md +++ b/troubleshoot/observability/troubleshooting-infrastructure-monitoring/understanding-no-results-found-message.md @@ -34,5 +34,5 @@ This could be for any of these reasons: :::{image} ../../../images/observability-turn-on-system-metrics.png :alt: Screenshot showing system cpu and diskio metrics selected for collection - :class: screenshot + :screenshot: ::: diff --git a/troubleshoot/security/detection-rules.md b/troubleshoot/security/detection-rules.md index 5194aa237..71fda9c07 100644 --- a/troubleshoot/security/detection-rules.md +++ b/troubleshoot/security/detection-rules.md @@ -24,7 +24,7 @@ If a {{ml}} rule is failing, check to make sure the required {{ml}} jobs are run :::{image} ../../images/security-rules-ts-ml-job-stopped.png :alt: Rule details page with ML job stopped - :class: screenshot + :screenshot: ::: 2. If a required {{ml}} job isn’t running, turn on the **Run job** toggle next to it. @@ -92,7 +92,7 @@ A field can have type conflicts *and* be unmapped in specified indices. 
:::{image} ../../images/security-warning-icon-message.png :alt: Shows the warning icon and message -:class: screenshot +:screenshot: ::: @@ -104,7 +104,7 @@ In the following example, the selected field has been defined as different types :::{image} ../../images/security-warning-type-conflicts.png :alt: Warning for fields with type conflicts -:class: screenshot +:screenshot: ::: @@ -116,7 +116,7 @@ In the following example, the selected field is unmapped across two indices. :::{image} ../../images/security-warning-unmapped-fields.png :alt: Warning for unmapped fields -:class: screenshot +:screenshot: ::: ::::: @@ -177,7 +177,7 @@ For example, say an event occurred at 10:00 but wasn’t ingested into {{es}} un :::{image} ../../images/security-timestamp-override.png :alt: timestamp override -:class: screenshot +:screenshot: ::: diff --git a/troubleshoot/security/elastic-defend.md b/troubleshoot/security/elastic-defend.md index cd7add62d..1b4bb22cc 100644 --- a/troubleshoot/security/elastic-defend.md +++ b/troubleshoot/security/elastic-defend.md @@ -28,7 +28,7 @@ Integration policy response information is also available from the **Endpoints** :::{image} ../../images/security-unhealthy-agent-fleet.png :alt: Agent details page in {{fleet}} with Unhealthy status and integration failures -:class: screenshot +:screenshot: ::: Common causes of failure in the {{elastic-defend}} integration policy include missing prerequisites or unexpected system configuration. Consult the following topics to resolve a specific error: @@ -79,7 +79,7 @@ If you encounter a `“Required transform failed”` notice on the Endpoints pag :::{image} ../../images/security-endpoints-transform-failed.png :alt: Endpoints page with Required transform failed notice -:class: screenshot +:screenshot: ::: To restart a transform that’s not running: @@ -93,7 +93,7 @@ To restart a transform that’s not running: :::{image} ../../images/security-transforms-start.png :alt: Transforms page with Start option selected - :class: screenshot + :screenshot: ::: 4. On the confirmation message that displays, click **Start** to restart the transform.