
# [Security Solution] [GenAi] Give the security AI assistant access to the current time (#211200)

## Summary

This PR adds a new tool that gives the LLM access to the current time.
The tool returns the time in the timezone configured in Kibana as well
as the UTC time.
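
As a rough illustration only (none of these names come from this PR, and the
real tool registration in Kibana's assistant framework differs), such a tool
could be sketched with LangChain like this:

```ts
import { z } from '@kbn/zod';
import { DynamicStructuredTool } from '@langchain/core/tools';
import moment from 'moment-timezone';

// Hypothetical sketch of a "current time" tool: it takes no arguments and
// returns the current time in the given timezone plus the UTC equivalent.
export const getCurrentTimeTool = (timezone: string) =>
  new DynamicStructuredTool({
    name: 'get_current_time', // assumed name, not taken from the PR
    description: 'Returns the current time in the configured timezone and in UTC.',
    schema: z.object({}), // no inputs
    func: async () => {
      const now = moment();
      const local = now.clone().tz(timezone).format('ddd, MMM D, YYYY h:mm A [UTC]Z');
      const utc = now.clone().utc().format('h:mm A [UTC]');
      return `Current time: ${local} (${utc})`;
    },
  });
```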

Changes:
- Add time tool
- Also increased the speed of the assistant stream, making the assistant
feel snappier
([here](https://github.com/elastic/kibana/pull/211200/files#diff-d4dd2f3b250247285fee3300a6d38cf622f2724daa87947fe58111bae9d3d655R12)).
The reason for keeping a small delay (10 ms) is that it helps
smooth out the stream; a minimal sketch of this pattern is shown below.
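
A minimal sketch of the smoothing pattern, in TypeScript (not the actual
Kibana streaming code; only the 10 ms value comes from this PR):

```ts
// Re-emit chunks from an upstream token stream with a small fixed delay
// between them, so the UI renders a steady stream instead of bursts.
const DELAY_MS = 10;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

export async function* smoothStream(source: AsyncIterable<string>): AsyncGenerator<string> {
  for await (const chunk of source) {
    yield chunk; // pass the chunk through unchanged
    await sleep(DELAY_MS); // brief pause between chunks smooths out bursty arrival
  }
}
```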

<img width="500" alt="image"
src="https://github.com/user-attachments/assets/e613f9fb-a0f5-4559-88df-6d8ea0e5d042"
/>

## How to test
- Check that Stack Management > Advanced Settings > "Time zone" is set to
"Browser"
- Open the security assistant
- Ask "What is the current time". You should get back the time in your
local timezone plus the equivalent GMT time (UTC and GMT are
equivalent)
- Go to Stack Management > Advanced Settings and set "Time zone" to a
different timezone (one with a different UTC offset).
- Go to the assistant and ask again, "What is the current time". You
should get back the time in the timezone that you just configured and
the UTC equivalent.
- Other questions to try out:
  - "What was the time exactly one week ago? Rounded to the nearest
hour." The result should be correct for the timezone you have
configured in Advanced Settings.
  - "Write an ES|QL query that gets 100 records from the .logs index from
the last week. Use the absolute time in the query." (You may need to prompt
again for the query to include the absolute time.)
  - "When is my birthday?" The assistant responds with "I don't know but
you can tell me". Reply with "It was exactly 3 weeks ago". The
assistant should create a knowledge base (KB) document with the correct date.
 

## Considerations:
- When asked "Which security labs content was published in the last 2
months", gemini-1-5-pro-002 often responds incorrectly
([trace](https://smith.langchain.com/o/b739bf24-7ba4-4994-b632-65dd677ac74e/projects/p/6bfddf7b-1225-4e97-ac9f-6cdf9158ac35?timeModel=%7B%22duration%22%3A%227d%22%7D&peek=4f5244a3-68fd-45e3-b1df-6c80e739377f)).
GPT4o performs better and does not return an incorrect result when asked
this question
([trace](https://smith.langchain.com/o/b739bf24-7ba4-4994-b632-65dd677ac74e/projects/p/6bfddf7b-1225-4e97-ac9f-6cdf9158ac35?timeModel=%7B%22duration%22%3A%227d%22%7D&peek=61bc4c12-d5ea-48be-8460-3e891d2e243b)).
- You will notice that the formatted time string contains the time in
the user's timezone and in UTC (e.g. `Current time: 14/02/2025,
00:33:12 UTC-07:00 (14/02/2025, 07:33:12 UTC+00:00)`). The reason for
this is that weaker LLMs sometimes make mistakes when converting
from one timezone to another, so both are included in the
formatted message. If the user's timezone is already UTC, the UTC time is
not repeated.

## How is the current time string formatted?

The formatted time string is added directly into the system prompt.
Below is the logic for how the string is formatted, followed by a hedged
TypeScript sketch of that logic.

- If the user's Kibana timezone setting is "Browser":
  1. and their browser timezone is not UTC, then the format is `Current
time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM UTC)` (where the
first timezone is the client timezone, the one from the browser)
  2. and their browser timezone is UTC, then the format is `Current time: Thu,
Feb 13, 2025 11:33 PM UTC+00:00`
- If the user's Kibana timezone is set to something other than "Browser":
  1. and the Kibana timezone setting is not UTC-equivalent, then the
format is `Current time: Thu, Feb 13, 2025 11:33 PM UTC-08:00 (7:33 AM
UTC)` (where the first timezone is the one from the Kibana timezone
setting)
  2. and the Kibana timezone setting is UTC-equivalent, then the format is
`Current time: Thu, Feb 13, 2025 11:33 PM UTC+00:00`
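
A hedged TypeScript sketch of that logic, assuming `moment-timezone` and
reusing the parameter names visible in the diff (`screenContextTimezone`,
`uiSettingsDateFormatTimezone`); it is not the exact helper shipped in this PR:

```ts
import moment from 'moment-timezone';

interface GetFormattedTimeArgs {
  // Timezone guessed by the browser and sent to the server as part of screenContext.
  screenContextTimezone?: string;
  // Value of the Kibana advanced setting `dateFormat:tz`, e.g. 'Browser' or 'America/New_York'.
  uiSettingsDateFormatTimezone?: string;
}

export const getFormattedTime = ({
  screenContextTimezone,
  uiSettingsDateFormatTimezone,
}: GetFormattedTimeArgs): string => {
  // Prefer the Kibana setting unless it is 'Browser'; in that case fall back
  // to the timezone reported by the user's browser, and finally to UTC.
  const timezone =
    uiSettingsDateFormatTimezone && uiSettingsDateFormatTimezone !== 'Browser'
      ? uiSettingsDateFormatTimezone
      : screenContextTimezone ?? 'UTC';

  const now = moment();
  const local = now.clone().tz(timezone);
  const localString = local.format('ddd, MMM D, YYYY h:mm A [UTC]Z');

  // If the resolved timezone is already UTC-equivalent, don't repeat the UTC time.
  if (local.utcOffset() === 0) {
    return `Current time: ${localString}`;
  }

  return `Current time: ${localString} (${now.clone().utc().format('h:mm A [UTC]')})`;
};
```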

### Checklist

Check that the PR satisfies the following conditions.

Reviewers should verify this PR satisfies this list as well.

- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/src/platform/packages/shared/kbn-i18n/README.md)
- [x]
[Documentation](https://www.elastic.co/guide/en/kibana/master/development-documentation.html)
was added for features that require explanation or tutorials
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
- [x] If a plugin configuration key changed, check if it needs to be
allowlisted in the cloud and added to the [docker
list](https://github.com/elastic/kibana/blob/main/src/dev/build/tasks/os_packages/docker_generator/resources/base/bin/kibana-docker)
- [x] This was checked for breaking HTTP API changes, and any breaking
changes have been approved by the breaking-change committee. The
`release_note:breaking` label should be applied in these situations.
- [x] [Flaky Test
Runner](https://ci-stats.kibana.dev/trigger_flaky_test_runner/1) was
used on any tests changed
- [x] The PR description includes the appropriate Release Notes section,
and the correct `release_note:*` label is applied per the
[guidelines](https://www.elastic.co/guide/en/kibana/master/contributing.html#kibana-release-notes-process)

### Identify risks

Does this PR introduce any risks? For example, consider risks like
hard-to-test bugs, performance regressions, or potential data loss.

Describe the risk, its severity, and mitigation for each identified
risk. Invite stakeholders and evaluate how to proceed before merging.

- [ ] [See some risk
examples](https://github.com/elastic/kibana/blob/main/RISK_MATRIX.mdx)
- [ ] ...

---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
3 people authored Feb 28, 2025
1 parent bbc3b45 commit 7dce6e6
Showing 26 changed files with 339 additions and 20 deletions.
@@ -16,7 +16,7 @@

import { z } from '@kbn/zod';

import { NonEmptyString } from '../common_attributes.gen';
import { NonEmptyString, ScreenContext } from '../common_attributes.gen';
import { Replacements } from '../conversations/common_attributes.gen';

export type ExecuteConnectorRequestParams = z.infer<typeof ExecuteConnectorRequestParams>;
@@ -42,6 +42,7 @@ export const ExecuteConnectorRequestBody = z.object({
size: z.number().optional(),
langSmithProject: z.string().optional(),
langSmithApiKey: z.string().optional(),
screenContext: ScreenContext.optional(),
});
export type ExecuteConnectorRequestBodyInput = z.input<typeof ExecuteConnectorRequestBody>;

@@ -62,6 +62,8 @@ paths:
type: string
langSmithApiKey:
type: string
screenContext:
$ref: '../common_attributes.schema.yaml#/components/schemas/ScreenContext'
responses:
'200':
description: Successful static response
@@ -48,3 +48,14 @@ export type SortOrder = z.infer<typeof SortOrder>;
export const SortOrder = z.enum(['asc', 'desc']);
export type SortOrderEnum = typeof SortOrder.enum;
export const SortOrderEnum = SortOrder.enum;

/**
* User screen context
*/
export type ScreenContext = z.infer<typeof ScreenContext>;
export const ScreenContext = z.object({
/**
* The local timezone of the user
*/
timeZone: z.string().optional(),
});
@@ -33,3 +33,11 @@ components:
enum:
- 'asc'
- 'desc'

ScreenContext:
description: User screen context
type: object
properties:
timeZone:
description: The local timezone of the user
type: string
@@ -17,6 +17,7 @@
import { z } from '@kbn/zod';

import { Replacements } from '../conversations/common_attributes.gen';
import { ScreenContext } from '../common_attributes.gen';

export type PostEvaluateBody = z.infer<typeof PostEvaluateBody>;
export const PostEvaluateBody = z.object({
@@ -29,6 +30,7 @@ export const PostEvaluateBody = z.object({
langSmithApiKey: z.string().optional(),
langSmithProject: z.string().optional(),
replacements: Replacements.optional().default({}),
screenContext: ScreenContext.optional(),
size: z.number().optional().default(20),
});

@@ -79,6 +79,8 @@ components:
replacements:
$ref: "../conversations/common_attributes.schema.yaml#/components/schemas/Replacements"
default: {}
screenContext:
$ref: '../common_attributes.schema.yaml#/components/schemas/ScreenContext'
size:
type: number
default: 20
@@ -42,6 +42,9 @@ const fetchConnectorArgs: FetchConnectorExecuteAction = {
message: 'This is a test',
conversationId: 'test',
replacements: {},
screenContext: {
timeZone: 'America/New_York',
},
};
const streamingDefaults = {
method: 'POST',
@@ -73,7 +76,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...staticDefaults,
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{}}',
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
}
);
});
@@ -85,7 +88,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...streamingDefaults,
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{}}',
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gen-ai","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
}
);
});
@@ -102,7 +105,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...streamingDefaults,
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{}}',
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
}
);
});
@@ -119,7 +122,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...streamingDefaults,
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gemini","replacements":{}}',
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".gemini","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
}
);
});
@@ -136,7 +139,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...streamingDefaults,
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{}}',
body: '{"message":"This is a test","subAction":"invokeStream","conversationId":"test","actionTypeId":".bedrock","replacements":{},"screenContext":{"timeZone":"America/New_York"}}',
}
);
});
@@ -156,7 +159,7 @@ describe('API tests', () => {
'/internal/elastic_assistant/actions/connector/foo/_execute',
{
...staticDefaults,
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
body: '{"model":"gpt-4","message":"This is a test","subAction":"invokeAI","conversationId":"test","actionTypeId":".gen-ai","replacements":{"auuid":"real.hostname"},"screenContext":{"timeZone":"America/New_York"},"alertsIndexPattern":".alerts-security.alerts-default","size":30}',
}
);
});
@@ -9,8 +9,9 @@ import { HttpSetup } from '@kbn/core/public';
import {
API_VERSIONS,
ApiConfig,
MessageMetadata,
Replacements,
ScreenContext,
MessageMetadata,
} from '@kbn/elastic-assistant-common';
import { API_ERROR } from '../translations';
import { getOptionalRequestParams } from '../helpers';
@@ -29,6 +30,7 @@ export interface FetchConnectorExecuteAction {
signal?: AbortSignal | undefined;
size?: number;
traceOptions?: TraceOptions;
screenContext: ScreenContext;
}

export interface FetchConnectorExecuteResponse {
@@ -53,6 +55,7 @@ export const fetchConnectorExecuteAction = async ({
signal,
size,
traceOptions,
screenContext,
}: FetchConnectorExecuteAction): Promise<FetchConnectorExecuteResponse> => {
// TODO add streaming support for gemini with langchain on
const isStream = assistantStreamingEnabled;
@@ -73,6 +76,7 @@
traceOptions?.langSmithProject === '' ? undefined : traceOptions?.langSmithProject,
langSmithApiKey:
traceOptions?.langSmithApiKey === '' ? undefined : traceOptions?.langSmithApiKey,
screenContext,
...optionalRequestParams,
};

@@ -34,6 +34,7 @@ import type {
} from '@kbn/elastic-assistant-common';
import { isEmpty } from 'lodash/fp';

import moment from 'moment';
import * as i18n from './translations';
import { useAssistantContext } from '../../../assistant_context';
import { DEFAULT_ATTACK_DISCOVERY_MAX_ALERTS } from '../../../assistant_context/constants';
@@ -210,6 +211,9 @@ export const EvaluationSettings: React.FC = React.memo(() => {
langSmithProject,
runName,
size: Number(size),
screenContext: {
timeZone: moment.tz.guess(),
},
};
performEvaluation(evalParams);
}, [
@@ -8,6 +8,7 @@
import { HttpSetup } from '@kbn/core-http-browser';
import { useCallback, useRef, useState } from 'react';
import { ApiConfig, Replacements } from '@kbn/elastic-assistant-common';
import moment from 'moment';
import { useAssistantContext } from '../../assistant_context';
import { fetchConnectorExecuteAction, FetchConnectorExecuteResponse } from '../api';
import * as i18n from './translations';
@@ -65,6 +66,9 @@ export const useSendMessage = (): UseSendMessage => {
signal: abortController.current.signal,
size: knowledgeBase.latestAlerts,
traceOptions,
screenContext: {
timeZone: moment.tz.guess(),
},
});
} finally {
clearTimeout(timeoutId);
@@ -31,3 +31,5 @@ export const CAPABILITIES = `${BASE_PATH}/capabilities`;
Licensing requirements
*/
export const MINIMUM_AI_ASSISTANT_LICENSE = 'enterprise' as const;

export const DEFAULT_DATE_FORMAT_TZ = 'dateFormat:tz' as const;
@@ -16,6 +16,7 @@ import {
ExecuteConnectorRequestBody,
Message,
Replacements,
ScreenContext,
} from '@kbn/elastic-assistant-common';
import { StreamResponseWithHeaders } from '@kbn/ml-response-stream/server';
import { PublicMethodsOf } from '@kbn/utility-types';
@@ -24,6 +25,7 @@ import { AnalyticsServiceSetup } from '@kbn/core-analytics-server';
import { TelemetryParams } from '@kbn/langchain/server/tracers/telemetry/telemetry_tracer';
import type { LlmTasksPluginStart } from '@kbn/llm-tasks-plugin/server';
import { SavedObjectsClientContract } from '@kbn/core-saved-objects-api-server';
import { CoreRequestHandlerContext } from '@kbn/core/server';
import { ResponseBody } from '../types';
import type { AssistantTool } from '../../../types';
import { AIAssistantKnowledgeBaseDataClient } from '../../../ai_assistant_data_clients/knowledge_base';
@@ -50,6 +52,7 @@ export interface AgentExecutorParams<T extends boolean> {
connectorId: string;
conversationId?: string;
contentReferencesStore: ContentReferencesStore;
core: CoreRequestHandlerContext;
dataClients?: AssistantDataClients;
esClient: ElasticsearchClient;
langChainMessages: BaseMessage[];
@@ -65,6 +68,7 @@
request: KibanaRequest<unknown, unknown, ExecuteConnectorRequestBody>;
response?: KibanaResponseFactory;
savedObjectsClient: SavedObjectsClientContract;
screenContext?: ScreenContext;
size?: number;
systemPrompt?: string;
telemetry: AnalyticsServiceSetup;
@@ -42,6 +42,7 @@ export interface GetDefaultAssistantGraphParams {
signal?: AbortSignal;
tools: StructuredTool[];
replacements: Replacements;
getFormattedTime?: () => string;
}

export type DefaultAssistantGraph = ReturnType<typeof getDefaultAssistantGraph>;
@@ -57,6 +58,7 @@
signal,
tools,
replacements,
getFormattedTime,
}: GetDefaultAssistantGraphParams) => {
try {
// Default graph state
@@ -125,6 +127,10 @@
reducer: (x: string, y?: string) => y ?? x,
default: () => '',
}),
formattedTime: Annotation<string>({
reducer: (x: string, y?: string) => y ?? x,
default: getFormattedTime ?? (() => ''),
}),
});

// Default node parameters
@@ -25,6 +25,7 @@ import { AssistantTool, AssistantToolParams } from '../../../..';
import { promptGroupId as toolsGroupId } from '../../../prompt/tool_prompts';
import { promptDictionary } from '../../../prompt';
import { promptGroupId } from '../../../prompt/local_prompt_object';

jest.mock('./graph');
jest.mock('./helpers');
jest.mock('langchain/agents');
@@ -85,6 +86,13 @@ describe('callAssistantGraph', () => {
traceOptions: {},
responseLanguage: 'English',
contentReferencesStore: newContentReferencesStoreMock(),
core: {
uiSettings: {
client: {
get: jest.fn().mockResolvedValue('Browser'),
},
},
},
} as unknown as AgentExecutorParams<boolean>;

beforeEach(() => {
@@ -19,7 +19,7 @@ import { getPrompt, resolveProviderAndModel } from '@kbn/security-ai-prompts';
import { isEmpty } from 'lodash';
import { localToolPrompts, promptGroupId as toolsGroupId } from '../../../prompt/tool_prompts';
import { promptGroupId } from '../../../prompt/local_prompt_object';
import { getModelOrOss } from '../../../prompt/helpers';
import { getFormattedTime, getModelOrOss } from '../../../prompt/helpers';
import { getPrompt as localGetPrompt, promptDictionary } from '../../../prompt';
import { getLlmClass } from '../../../../routes/utils';
import { EsAnonymizationFieldsSchema } from '../../../../ai_assistant_data_clients/anonymization_fields/types';
@@ -30,6 +30,7 @@ import { GraphInputs } from './types';
import { getDefaultAssistantGraph } from './graph';
import { invokeGraph, streamGraph } from './helpers';
import { transformESSearchToAnonymizationFields } from '../../../../ai_assistant_data_clients/anonymization_fields/helpers';
import { DEFAULT_DATE_FORMAT_TZ } from '../../../../../common/constants';

export const callAssistantGraph: AgentExecutor<true | false> = async ({
abortSignal,
@@ -39,6 +40,7 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
connectorId,
contentReferencesStore,
conversationId,
core,
dataClients,
esClient,
inference,
@@ -53,6 +55,7 @@
replacements,
request,
savedObjectsClient,
screenContext,
size,
systemPrompt,
telemetry,
@@ -218,6 +221,11 @@ export const callAssistantGraph: AgentExecutor<true | false> = async ({
actionsClient,
})
: { provider: llmType };

const uiSettingsDateFormatTimezone = await core.uiSettings.client.get<string>(
DEFAULT_DATE_FORMAT_TZ
);

const assistantGraph = getDefaultAssistantGraph({
agentRunnable,
dataClients,
@@ -230,6 +238,11 @@
replacements,
// some chat models (bedrock) require a signal to be passed on agent invoke rather than the signal passed to the chat model
...(llmType === 'bedrock' ? { signal: abortSignal } : {}),
getFormattedTime: () =>
getFormattedTime({
screenContextTimezone: request.body.screenContext?.timeZone,
uiSettingsDateFormatTimezone,
}),
});
const inputs: GraphInputs = {
responseLanguage,