feat(aci milestone 3): anomaly detection condition handler #88647

Open · wants to merge 19 commits into base: master

Changes from 15 commits
1 change: 1 addition & 0 deletions src/sentry/incidents/grouptype.py
@@ -5,6 +5,7 @@
from typing import Any

from sentry import features
from sentry.incidents.handlers.condition import * # noqa
Contributor Author:
Imported so that the condition handler gets added to the registry. Maybe there's a better solution?

from sentry.incidents.metric_alert_detector import MetricAlertsDetectorValidator
from sentry.incidents.models.alert_rule import AlertRuleDetectionType, ComparisonDeltaChoices
from sentry.incidents.utils.types import MetricDetectorUpdate, QuerySubscriptionUpdate
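The wildcard import above works because `@condition_handler_registry.register(...)` runs at import time; a module that is never imported never registers its handler. A minimal stdlib-only sketch of that pattern (the names here are illustrative, not Sentry's actual registry API):

```python
# Illustrative registry: registration is a side effect of importing the
# module that defines the decorated class.
class Registry:
    def __init__(self):
        self._handlers = {}

    def register(self, key):
        def decorator(cls):
            self._handlers[key] = cls
            return cls
        return decorator

    def get(self, key):
        return self._handlers[key]


condition_handler_registry = Registry()


@condition_handler_registry.register("anomaly_detection")
class AnomalyDetectionHandler:
    """Stand-in for the real handler class."""


# The class is reachable through the registry as soon as this module runs.
assert condition_handler_registry.get("anomaly_detection") is AnomalyDetectionHandler
```

This is why a `# noqa`-suppressed star import (or an explicit import in an `apps.py`/`__init__.py`) is a common answer: something has to cause the defining module to execute.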
5 changes: 5 additions & 0 deletions src/sentry/incidents/handlers/condition/__init__.py
@@ -0,0 +1,5 @@
__all__ = [
"AnomalyDetectionHandler",
]

from .anomaly_detection_handler import AnomalyDetectionHandler
@@ -0,0 +1,89 @@
import logging
from typing import Any

from django.conf import settings

from sentry.net.http import connection_from_url
from sentry.seer.anomaly_detection.get_anomaly_data import get_anomaly_data_from_seer
from sentry.seer.anomaly_detection.types import (
AnomalyDetectionSeasonality,
AnomalyDetectionSensitivity,
AnomalyDetectionThresholdType,
AnomalyType,
)
from sentry.snuba.models import QuerySubscription
from sentry.workflow_engine.models import Condition, DataPacket
from sentry.workflow_engine.registry import condition_handler_registry
from sentry.workflow_engine.types import DataConditionHandler, DetectorPriorityLevel

logger = logging.getLogger(__name__)

SEER_ANOMALY_DETECTION_CONNECTION_POOL = connection_from_url(
settings.SEER_ANOMALY_DETECTION_URL,
timeout=settings.SEER_ANOMALY_DETECTION_TIMEOUT,
)

SEER_EVALUATION_TO_DETECTOR_PRIORITY = {
AnomalyType.HIGH_CONFIDENCE.value: DetectorPriorityLevel.HIGH,
AnomalyType.LOW_CONFIDENCE.value: DetectorPriorityLevel.MEDIUM,
AnomalyType.NONE.value: DetectorPriorityLevel.OK,
}


# placeholder until we create this in the workflow engine model
class DetectorError(Exception):
Contributor Author:
Calling this out as a TODO (need to add it to the workflow engine + build exception handling for it within process_detectors)

Member:
Maybe this could live in either sentry/workflow_engine/processors/detector.py or a new utils file in the same dir?

pass


@condition_handler_registry.register(Condition.ANOMALY_DETECTION)
class AnomalyDetectionHandler(DataConditionHandler[DataPacket]):
group = DataConditionHandler.Group.DETECTOR_TRIGGER
comparison_json_schema = {
"type": "object",
"properties": {
"sensitivity": {
"type": "string",
"enum": [*AnomalyDetectionSensitivity],
},
"seasonality": {
"type": "string",
"enum": [*AnomalyDetectionSeasonality],
},
"threshold_type": {
"type": "integer",
"enum": [*AnomalyDetectionThresholdType],
},
},
"required": ["sensitivity", "seasonality", "threshold_type"],
"additionalProperties": False,
}

@staticmethod
def evaluate_value(update: DataPacket, comparison: Any) -> DetectorPriorityLevel:
sensitivity = comparison["sensitivity"]
seasonality = comparison["seasonality"]
threshold_type = comparison["threshold_type"]

subscription: QuerySubscription = QuerySubscription.objects.get(id=update.source_id)
Member:
should we have handling around this in case it doesn't exist?

Contributor Author:
Let's let it fail loudly; I don't think this should happen.


subscription_update = update.packet

anomaly_data = get_anomaly_data_from_seer(
sensitivity=sensitivity,
seasonality=seasonality,
threshold_type=threshold_type,
subscription=subscription,
subscription_update=subscription_update,
)
# covers both None and []
if not anomaly_data:
# something went wrong during evaluation
raise DetectorError("Error during Seer data evaluation process.")
Member:
should this return False instead of raising an exception, if you're going to build exception handling for it? what's the outcome of the exception handling?

Contributor Author:
We want this to raise an error instead of returning False, because returning False indicates that we should set the detector priority level to OK.

I think we actually want to change this condition handler to emit multiple detector priority levels according to the anomaly detection result 🤔


anomaly_type = anomaly_data[0].get("anomaly", {}).get("anomaly_type")
if anomaly_type == AnomalyType.NO_DATA.value:
raise DetectorError("Project doesn't have enough data for detector to evaluate")
elif anomaly_type is None:
raise DetectorError("Seer response contained no evaluation data")

return SEER_EVALUATION_TO_DETECTOR_PRIORITY[anomaly_type]
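For reviewers unfamiliar with what `comparison_json_schema` enforces, here is a hedged, stdlib-only sketch of the same checks (Sentry validates with a JSON-schema library; `validate_comparison` and the inlined value sets below are illustrative, with the enum values written out for self-containment):

```python
# Stdlib-only approximation of comparison_json_schema: three required keys,
# closed value sets, and no additional properties.
REQUIRED = {"sensitivity", "seasonality", "threshold_type"}
SENSITIVITY_VALUES = {"low", "medium", "high"}
SEASONALITY_VALUES = {"auto", "hourly", "daily", "weekly", "hourly_daily",
                      "hourly_weekly", "hourly_daily_weekly", "daily_weekly"}
THRESHOLD_TYPE_VALUES = {0, 1, 2}


def validate_comparison(comparison: dict) -> list[str]:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    missing = REQUIRED - comparison.keys()
    if missing:
        errors.append(f"missing required keys: {sorted(missing)}")
    extra = comparison.keys() - REQUIRED
    if extra:
        errors.append(f"additionalProperties not allowed: {sorted(extra)}")
    if comparison.get("sensitivity") not in SENSITIVITY_VALUES:
        errors.append("sensitivity must be one of low/medium/high")
    if comparison.get("seasonality") not in SEASONALITY_VALUES:
        errors.append("seasonality is not a recognized combination")
    if comparison.get("threshold_type") not in THRESHOLD_TYPE_VALUES:
        errors.append("threshold_type must be 0 (above), 1 (below), or 2 (both)")
    return errors


assert validate_comparison(
    {"sensitivity": "medium", "seasonality": "auto", "threshold_type": 2}
) == []
assert validate_comparison({"sensitivity": "medium"})  # missing keys yield errors
```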
6 changes: 3 additions & 3 deletions src/sentry/incidents/subscription_processor.py
@@ -50,7 +50,7 @@
QuerySubscriptionUpdate,
)
from sentry.models.project import Project
from sentry.seer.anomaly_detection.get_anomaly_data import get_anomaly_data_from_seer
from sentry.seer.anomaly_detection.get_anomaly_data import get_anomaly_data_from_seer_legacy
from sentry.seer.anomaly_detection.utils import anomaly_has_confidence, has_anomaly
from sentry.snuba.dataset import Dataset
from sentry.snuba.models import QuerySubscription
@@ -382,9 +382,9 @@ def process_update(self, subscription_update: QuerySubscriptionUpdate) -> None:
},
)
with metrics.timer(
"incidents.subscription_processor.process_update.get_anomaly_data_from_seer"
"incidents.subscription_processor.process_update.get_anomaly_data_from_seer_legacy"
):
potential_anomalies = get_anomaly_data_from_seer(
potential_anomalies = get_anomaly_data_from_seer_legacy(
alert_rule=self.alert_rule,
subscription=self.subscription,
last_update=self.last_update.timestamp(),
@@ -0,0 +1,39 @@
# Generated by Django 5.2.1 on 2025-05-21 23:05

from django.db import migrations, models

import sentry.db.models.fields.citext
from sentry.new_migrations.migrations import CheckedMigration


class Migration(CheckedMigration):
# This flag is used to mark that a migration shouldn't be automatically run in production.
# This should only be used for operations where it's safe to run the migration after your
# code has deployed. So this should not be used for most operations that alter the schema
# of a table.
# Here are some things that make sense to mark as post deployment:
# - Large data migrations. Typically we want these to be run manually so that they can be
# monitored and not block the deploy for a long period of time while they run.
# - Adding indexes to large tables. Since this can take a long time, we'd generally prefer to
# run this outside deployments so that we don't block them. Note that while adding an index
# is a schema change, it's completely safe to run the operation after the code has deployed.
# Once deployed, run these manually via: https://develop.sentry.dev/database-migrations/#migration-deployment

is_post_deployment = False

dependencies = [
("sentry", "0908_increase_email_field_length"),
]

operations = [
migrations.AlterField(
model_name="email",
name="email",
field=sentry.db.models.fields.citext.CIEmailField(max_length=200, unique=True),
),
migrations.AlterField(
model_name="useremail",
name="email",
field=models.EmailField(max_length=200),
),
]
127 changes: 125 additions & 2 deletions src/sentry/seer/anomaly_detection/get_anomaly_data.py
@@ -5,17 +5,22 @@

from sentry.conf.server import SEER_ANOMALY_DETECTION_ENDPOINT_URL
from sentry.incidents.models.alert_rule import AlertRule
from sentry.incidents.utils.types import MetricDetectorUpdate
from sentry.net.http import connection_from_url
from sentry.seer.anomaly_detection.types import (
AlertInSeer,
AnomalyDetectionConfig,
AnomalyDetectionSeasonality,
AnomalyDetectionSensitivity,
AnomalyDetectionThresholdType,
DataSourceType,
DetectAnomaliesRequest,
DetectAnomaliesResponse,
TimeSeriesPoint,
)
from sentry.seer.anomaly_detection.utils import translate_direction
from sentry.seer.signed_seer_api import make_signed_seer_api_request
from sentry.snuba.models import QuerySubscription
from sentry.snuba.models import QuerySubscription, SnubaQuery
from sentry.utils import json, metrics
from sentry.utils.json import JSONDecodeError

@@ -27,7 +32,8 @@
)


def get_anomaly_data_from_seer(
# TODO: delete this once we deprecate the AlertRule model
def get_anomaly_data_from_seer_legacy(
alert_rule: AlertRule,
subscription: QuerySubscription,
last_update: float,
@@ -153,3 +159,120 @@ def get_anomaly_data_from_seer(
)
return None
return ts


def get_anomaly_data_from_seer(
sensitivity: AnomalyDetectionSensitivity,
seasonality: AnomalyDetectionSeasonality,
threshold_type: AnomalyDetectionThresholdType,
subscription: QuerySubscription,
subscription_update: MetricDetectorUpdate,
) -> list[TimeSeriesPoint] | None:
snuba_query: SnubaQuery = subscription.snuba_query
aggregation_value = subscription_update["values"].get("value")
source_id = subscription.id
Contributor Author:
Because we're passing in the query subscription, we don't need to pass source_id and source_type.

source_type = DataSourceType.SNUBA_QUERY_SUBSCRIPTION
if aggregation_value is None:
logger.error(
"Invalid aggregation value", extra={"source_id": source_id, "source_type": source_type}
)
return None

extra_data = {
"subscription_id": subscription.id,
"organization_id": subscription.project.organization.id,
"project_id": subscription.project_id,
"source_id": source_id,
"source_type": source_type,
}

anomaly_detection_config = AnomalyDetectionConfig(
time_period=int(snuba_query.time_window / 60),
sensitivity=sensitivity,
direction=translate_direction(threshold_type),
expected_seasonality=seasonality,
)
context = AlertInSeer(
source_id=source_id,
source_type=source_type,
cur_window=TimeSeriesPoint(
timestamp=subscription_update["timestamp"].timestamp(), value=aggregation_value
),
)
detect_anomalies_request = DetectAnomaliesRequest(
organization_id=subscription.project.organization.id,
project_id=subscription.project_id,
config=anomaly_detection_config,
context=context,
)
extra_data["dataset"] = snuba_query.dataset
try:
logger.info("Sending subscription update data to Seer", extra=extra_data)
response = make_signed_seer_api_request(
SEER_ANOMALY_DETECTION_CONNECTION_POOL,
SEER_ANOMALY_DETECTION_ENDPOINT_URL,
json.dumps(detect_anomalies_request).encode("utf-8"),
)
except (TimeoutError, MaxRetryError):
Contributor Author:
Everything below this line is error handling (copied from the legacy method)

logger.warning("Timeout error when hitting anomaly detection endpoint", extra=extra_data)
return None

if response.status > 400:
logger.error(
"Error when hitting Seer detect anomalies endpoint",
extra={
"response_data": response.data,
**extra_data,
},
)
return None
try:
decoded_data = response.data.decode("utf-8")
except AttributeError:
logger.exception(
"Failed to parse Seer anomaly detection response",
extra={
"ad_config": anomaly_detection_config,
"context": context,
"response_data": response.data,
"response_code": response.status,
},
)
return None

try:
results: DetectAnomaliesResponse = json.loads(decoded_data)
except JSONDecodeError:
logger.exception(
"Failed to parse Seer anomaly detection response",
extra={
"ad_config": anomaly_detection_config,
"context": context,
"response_data": decoded_data,
"response_code": response.status,
},
)
return None

if not results.get("success"):
logger.error(
"Error when hitting Seer detect anomalies endpoint",
extra={
"error_message": results.get("message", ""),
**extra_data,
},
)
return None

ts = results.get("timeseries")
if not ts:
logger.warning(
"Seer anomaly detection response returned no potential anomalies",
extra={
"ad_config": anomaly_detection_config,
"context": context,
"response_data": results.get("message"),
},
)
return None
return ts
Member:
Can you add tests for this method? Probably can mostly copy/paste from the existing ones, just don't wanna lose the coverage when we delete the old stuff.
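In the spirit of that request, a hedged sketch of the coverage: the response handling above, reduced to a standalone function so each error path can be asserted directly. `parse_seer_response` is an illustrative distillation, not the real method, which also builds the signed request and logs each branch.

```python
import json


def parse_seer_response(status: int, body: bytes):
    """Distilled error handling mirroring get_anomaly_data_from_seer."""
    if status > 400:
        return None  # Seer returned an HTTP error
    try:
        results = json.loads(body.decode("utf-8"))
    except (AttributeError, json.JSONDecodeError):
        return None  # undecodable or non-JSON payload
    if not results.get("success"):
        return None  # Seer reported a failed evaluation
    return results.get("timeseries") or None  # None for missing or empty


assert parse_seer_response(500, b"{}") is None
assert parse_seer_response(200, b"not json") is None
assert parse_seer_response(200, b'{"success": false}') is None
assert parse_seer_response(200, b'{"success": true, "timeseries": []}') is None
ok = b'{"success": true, "timeseries": [{"timestamp": 1, "value": 2.0}]}'
assert parse_seer_response(200, ok) == [{"timestamp": 1, "value": 2.0}]
```

Real tests for the PR method would mock `make_signed_seer_api_request` to return each of these response shapes and assert on the return value, much like the existing tests for the legacy function.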

41 changes: 38 additions & 3 deletions src/sentry/seer/anomaly_detection/types.py
@@ -1,4 +1,4 @@
from enum import Enum
from enum import IntEnum, StrEnum
from typing import NotRequired, TypedDict


@@ -13,8 +13,16 @@ class TimeSeriesPoint(TypedDict):
anomaly: NotRequired[Anomaly]


class DataSourceType(IntEnum):
SNUBA_QUERY_SUBSCRIPTION = 0


class AlertInSeer(TypedDict):
id: int
id: NotRequired[int]
source_id: NotRequired[
int
] # during our dual processing rollout, some requests will be sending ID and some will send source_id/source_type
source_type: NotRequired[DataSourceType]
cur_window: NotRequired[TimeSeriesPoint]


@@ -69,8 +77,35 @@ class DetectAnomaliesResponse(TypedDict):
timeseries: list[TimeSeriesPoint]


class AnomalyType(Enum):
class AnomalyType(StrEnum):
HIGH_CONFIDENCE = "anomaly_higher_confidence"
LOW_CONFIDENCE = "anomaly_lower_confidence"
NONE = "none"
NO_DATA = "no_data"


class AnomalyDetectionSensitivity(StrEnum):
LOW = "low"
MEDIUM = "medium"
HIGH = "high"


class AnomalyDetectionSeasonality(StrEnum):
"""All combinations of multi select fields for anomaly detection alerts
We do not anticipate adding more
"""

AUTO = "auto"
HOURLY = "hourly"
DAILY = "daily"
WEEKLY = "weekly"
HOURLY_DAILY = "hourly_daily"
HOURLY_WEEKLY = "hourly_weekly"
HOURLY_DAILY_WEEKLY = "hourly_daily_weekly"
DAILY_WEEKLY = "daily_weekly"


class AnomalyDetectionThresholdType(IntEnum):
ABOVE = 0
BELOW = 1
ABOVE_AND_BELOW = 2
Contributor Author:
Moved this file to the incidents directory.

This file was deleted.
