1. Install using the latest tar.gz or .spl file
2. Add your OpenAI Org & API Key on the setup page:
(ref: https://beta.openai.com/account/org-settings & https://beta.openai.com/account/api-keys)
3. Use the search command: | openai org="YOUR_ORG_ID" prompt="your prompt"
The command will send a "ChatCompletion", "Completion", "Edit", or "Moderation" request to the OpenAI API:
ref: https://beta.openai.com/docs/api-reference/
The following options are supported by the command:
org - Default: null - Explanation: Required, the organization ID you added on the setup page
prompt - Explanation: Optional, your prompt, question, or request to OpenAI
model - Default: gpt-3.5-turbo - Explanation: Optional, which OpenAI model to use (ref: https://beta.openai.com/docs/models/gpt-3)
task - Default: completion - Explanation: Optional, the task you wish to run, from this list: completion, edit, moderate
instruction - Default: null - Explanation: Optional, the instruction you want the Edit/Edits to follow. Note this is only valid when task=edit
max_tokens - Default: 1024 - Explanation: Optional, the maximum number of tokens to generate in the completion.
stop - Default: null - Explanation: Optional, up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
temperature - Default: 0.5 - Explanation: Optional, what sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.
top_p - Default: null - Explanation: Optional, an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
n - Default: 1 - Explanation: Optional, how many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
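A sketch combining several of these options in one search (the parameter values here are illustrative, not recommendations):
| openai org="YOUR_ORG_ID" task=completion model=text-davinci-003 prompt="List three Splunk search best practices" max_tokens=256 temperature=0.2 n=1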
A simple completion example:
| openai org="YOUR_ORG_ID" prompt="When was GA, USA founded" model=text-davinci-003 task=completion
A simple edit example:
| openai org="YOUR_ORG_ID" prompt="Orenge" model=text-davinci-edit-001 task=edit
A simple edit with instructions example:
| openai org="YOUR_ORG_ID" prompt="When was GA, USA founded" model=text-davinci-edit-001 task=edit instruction="expand the acronyms"
A simple moderation example:
| openai org="YOUR_ORG_ID" prompt="I want to kill" model=text-moderation-stable task=moderate
Data cleaning examples:
Getting 5 incorrect spellings of a US city and then using AI to correct the spelling:
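One way this could look, as a sketch: use makeresults and eval to fabricate the misspellings, then map to call the command once per row (the city values and the "city" field name are illustrative):
`comment("Fabricate 5 misspellings of a US city")`
| makeresults count=1
| eval city=split("Atlnta,Atalanta,Altanta,Atlantaa,Atlata", ",")
| mvexpand city
`comment("Ask the model to correct each spelling")`
| map [| openai org="YOUR_ORG_ID" model=text-davinci-003 task=completion prompt="Correct the spelling of this US city name: $city$"]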
Chat examples:
| openai org="YOUR_ORG_ID" prompt="write a hello world js please"
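The raw JSON reply can be reduced to just the generated text with spath, in the same way the mapping example below does (this assumes the command writes its reply to a field named openai_response):
| openai org="YOUR_ORG_ID" prompt="write a hello world js please"
| spath input=openai_response
| rename choices{}.message.content as response
| table response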
Mapping Example:
`comment("Grab some data from an internal index and combine it into one field called raw")`
index=_internal sourcetype=splunk_web_access
| head 10
| rename _raw as raw
| fields raw
| mvcombine raw
`comment("Ask ChatGPT what's the best sourcetype to use for the data")`
| map [| openai org="YOUR_ORG_ID" model=gpt-4 prompt="What is the best Splunk sourcetype for this data? \n$raw$"]
`comment("Parse the response, dropping all but the value of the content field from the response message")`
| spath input=openai_response
| rename choices{}.message.content as response
| table response