ta-openai-api

Splunk TA for sending completion requests to ChatGPT

1. Install using the latest tar.gz or .spl file

2. Add your OpenAI Org & API Key with the setup page:

(ref: https://beta.openai.com/account/org-settings & https://beta.openai.com/account/api-keys)

(screenshot: setup page)

3. Use the search command: | openai org="YOUR_ORG_ID" prompt="your prompt"

(screenshot: example chat response)

The command will create a "ChatCompletion", "Completion", "Edit" or "Moderate" request to the OpenAI API:

ref: https://beta.openai.com/docs/api-reference/

The following options are supported by the command (a combined example follows the list):

org - Default: null - Explanation: Required, the organization ID you added with the setup page

prompt - Explanation: Optional, your prompt, question, or request to OpenAI

model - Default: gpt-3.5-turbo - Explanation: Optional, which OpenAI model to use (ref: https://beta.openai.com/docs/models/gpt-3)

task - Default: completion - Explanation: Optional, the task you wish to perform, from this list: completion, edit, moderate

instruction - Default: null - Explanation: Optional, the instruction you want the edit to follow. Note this is only valid when task=edit

max_tokens - Default: 1024 - Explanation: Optional, the maximum number of tokens to generate in the completion.

stop - Default: null - Explanation: Optional, up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

temperature - Default: 0.5 - Explanation: Optional, what sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.

top_p - Default: null - Explanation: Optional, an alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

n - Default: 1 - Explanation: Optional, how many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
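
A combined example that uses several of these options at once (the prompt and option values below are only illustrative; the exact effect of each option is determined by the OpenAI API):

| openai org="YOUR_ORG_ID" task=completion model=text-davinci-003 prompt="List three common Splunk sourcetypes" max_tokens=256 temperature=0.2 n=1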

A simple completion example:

| openai org="YOUR_ORG_ID" prompt="When was GA, USA founded" model=text-davinci-003 task=completion

(screenshot: completion output)

A simple edit example:

| openai org="YOUR_ORG_ID" prompt="Orenge" model=text-davinci-edit-001 task=edit

(screenshot: edit output)

A simple edit with instructions example:

| openai org="YOUR_ORG_ID" prompt="When was GA, USA founded" model=text-davinci-edit-001 task=edit instruction="expand the acronyms"

(screenshot: edit with instructions output)

A simple moderation example:

| openai org="YOUR_ORG_ID" prompt="I want to kill" model=text-moderation-stable task=moderate

(screenshot: moderation output)
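
To work with the verdict rather than the raw JSON, the moderation response can be parsed with spath. This is a sketch that assumes the command returns the raw API response in a field named openai_response (as in the mapping example further down) and that the response follows the standard moderation schema with results{}.flagged and results{}.categories fields:

| openai org="YOUR_ORG_ID" prompt="I want to kill" model=text-moderation-stable task=moderate
| spath input=openai_response
| rename results{}.flagged as flagged
| table flagged results{}.categories.*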

Data cleaning examples:

Getting 5 incorrect spellings of a US city and then using AI to correct the spelling (a rough sketch of the search is shown below):

(screenshot: data cleaning example)
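
A rough sketch of that search (the misspellings and prompt wording here are made up for illustration, and the search in the original screenshot may differ):

`comment("Build 5 deliberately misspelled versions of a US city name and combine them into one field")`
| makeresults count=5
| streamstats count
| eval attempt=case(count=1,"Atlantaa", count=2,"Atlenta", count=3,"Altanta", count=4,"Atlannta", count=5,"Atalanta")
| stats values(attempt) as attempts
| eval attempts=mvjoin(attempts, ", ")

`comment("Ask the model to correct the spelling")`
| map [| openai org="YOUR_ORG_ID" model=text-davinci-003 task=completion prompt="Correct the spelling of this US city: $attempts$"]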

Chat examples:

| openai org="YOUR_ORG_ID" prompt="write a hello world js please"

(screenshot: gpt-3.5 chat output)
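
To keep just the assistant's reply rather than the full JSON response, the output can be parsed the same way as in the mapping example below. This sketch assumes the raw API response is returned in a field named openai_response:

| openai org="YOUR_ORG_ID" prompt="write a hello world js please"
| spath input=openai_response
| rename choices{}.message.content as response
| table response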

Mapping Example:

`comment("Grab some data from an internal index and combine it into one field called raw")`
index=_internal sourcetype=splunk_web_access
| head 10
| rename _raw as raw
| fields raw
| mvcombine raw

`comment("Ask ChatGPT what's the best sourcetype to use for the data")`
| map [| openai org="YOUR_ORG_ID" model=gpt-4 prompt="What is the best Splunk sourcetype for this data? \n".$raw$]

`comment("Parse the reponse, dropping all but the value of the content field from the response message")`
| spath input=openai_response
| rename choices{}.message.content as response
| table response

(screenshot: mapping example output)
