[Bug] Some prebuilt rules match source events in elastic-cloud-logs-* #3225

Closed
banderror opened this issue Oct 25, 2023 · 5 comments

Comments


banderror commented Oct 25, 2023

FYI: this bug is SDH-linked.

Summary

One user reported that the Cobalt Strike Command and Control Beacon rule generates alerts based on source events from the elastic-cloud-logs-* index in their ESS environment.

This rule is configured with only the following index patterns, none of which should match elastic-cloud-logs-*: [packetbeat-*, auditbeat-*, filebeat-*, logs-network_traffic.*]. Despite that, in ESS Cloud the logs-network_traffic.* or filebeat-* patterns appear to resolve to concrete indices of elastic-cloud-logs-*, presumably via aliases. Specifically, in the user's environment, alerts are generated from source events in indices like this one:

 "signal.ancestors.index": [
      ".ds-elastic-cloud-logs-8-2023.10.17-000376"
    ],
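
One way to confirm the alias/data stream theory in the affected deployment would be to check what the rule's patterns actually expand to. A minimal sketch using the resolve index API (the expressions below are just the patterns from the rule above; the exact output depends on the cluster):

    GET _resolve/index/filebeat-*,logs-network_traffic.*

If the response lists .ds-elastic-cloud-logs-* backing indices under any of the matched aliases or data streams, that would explain how the rule ends up reading those events.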

Suggestion

In the Security UI, when users create a custom rule, we explicitly exclude elastic-cloud-logs-* indices by default by adding -*elastic-cloud-logs-* to the list of index patterns:

(Screenshot: the default index pattern list in the custom rule creation form, including -*elastic-cloud-logs-*.)

Probably we should add -*elastic-cloud-logs-* (see the sketch below) to:

  • either all of our prebuilt rules,
  • or only the prebuilt rules whose index patterns contain logs- or filebeat-?
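
For illustration, this is roughly what the rule's index patterns could look like with the exclusion added (a sketch only; the pattern list is taken from the rule above, and the exact field layout depends on how the rule asset is defined):

    "index": [
      "packetbeat-*",
      "auditbeat-*",
      "filebeat-*",
      "logs-network_traffic.*",
      "-*elastic-cloud-logs-*"
    ]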
banderror added the bug and discuss labels on Oct 25, 2023
w0rk3r self-assigned this on Oct 25, 2023
@SHolzhauer (Contributor)

I believe logs-* somehow points to the elastic-cloud-logs indices; we have seen the same behavior in our cluster, and excluding it did fix it.
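
A quick way to check this in a given cluster is to list the aliases attached to the elastic-cloud-logs backing indices (a sketch; concrete index and alias names will differ per deployment):

    GET elastic-cloud-logs-*/_alias

If filebeat-* or logs-* style aliases show up in the response, that would explain why rules targeting those patterns pick up these indices.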


w0rk3r commented Dec 5, 2023

@banderror, I'm closing this issue as resolved. The bug, which seems to be an edge case, only affected one rule (confirmed by telemetry) and was made possible by a syntax error in a Lucene query, which was fixed in #3196. (Lucene queries are used in 5 rules, and that number will be reduced to 1 after #3194 merges.) Let me know if you want to discuss it further.
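
For context, and purely as a generic illustration of how a small Lucene syntax mistake can broaden what a rule matches (not necessarily the exact error fixed in #3196): in Lucene query syntax, an unquoted multi-term value splits into separate clauses that are OR-ed against the default field, so a query like

    destination.domain:foo bar

is parsed as destination.domain:foo OR <default field>:bar rather than as a match on the literal value "foo bar" (the field name here is made up for the example). A clause running against the default field can match events that have nothing to do with the rule's intent.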

Related Internal Slack thread

w0rk3r closed this as completed on Dec 5, 2023
@banderror (Author)

Hey @w0rk3r, thank you for getting back with this update, and sorry that I didn't reply earlier.

That sounds good to me. I checked the links and the changes in those PRs. What I still don't understand is how a rule's query being incorrect or inaccurate can cause the rule to query the wrong indices - in our case, the elastic-cloud-logs indices, which I assume should never be queried by detection rules. Could you please elaborate on that part?

Thus my suggestion is to just exclude these indices explicitly, even if we fix all queries in prebuilt rules.


w0rk3r commented Dec 7, 2023

Hey @banderror, this sounds like a Lucene/detection engine bug to me.

IMO, we don't need to add the exclusion explicitly, because this rule was the only occurrence across all of our rules (900+), and the issue was caused by a syntax bug in a query language that we are phasing out rather than adopting.

Adding the exclusion may not be the best approach either: if the rule didn't respect the indices defined in the first place, it could ignore the exclusion too. Perhaps we should open an issue to investigate the root cause that made the rule query indices other than those specified. Wdyt?

@banderror (Author)

If the rule didn't respect the indices defined in the first place, it could ignore the exclusion too. Perhaps we should open an issue to investigate the root cause that made the rule query indices other than those specified. Wdyt?

@w0rk3r That's a good point 👍 I agree that understanding what's happening is important for making informed decisions about fixes. I'll bring this up with our tech leads and see what they think about investigating the exact cause. Thank you for your feedback.
