
Command Builder

Danil edited this page Jun 19, 2024 · 2 revisions

This page introduces the command builder, a tool to help you build commands for your bot.

Note

This is a fairly new feature, so it may contain some bugs. Please report any you find. Thanks!

Important

If you want to enable a command that is disabled by default, just remove the disabled item from that command's traits list.
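For example (an illustrative sketch, assuming a command that ships with the disabled trait), enabling it is a matter of deleting one line:

```yaml
# Before: the command is disabled by default
traits:
  - admin-only
  - disabled   # remove this line to enable the command

# After: the command is active
traits:
  - admin-only
```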

How and why to use it?

This is the core of the application that allows you to add new commands, change how they behave and what they do. This is better than writing a command from scratch as it does most of the work for you. Let's look at this sample:

commands:
  - name: solly
    prefix: '!'
    type: command-global
    provider: groq-cloud
    model: llama3-70b-8192
    model_settings:
      max_tokens: 60
      seed: 69
      temperature: 1.2
    settings:
      prompt-file: soldier
      message-suffix: 'End your response with "I love buckets btw."'
      allow-prompt-overwrite: false
      allow-long: false
      enable-soft-limit: true
      soft-limit-length: 128
      enable-hard-limit: true
      hard-limit-length: 300
    traits:
      - admin-only
      - openai-moderated
      - empty-prompt-message-response:
          msg: "Hello, MAGGOT!"

  ### MORE COMMANDS BELOW ###

Requirements

The first requirement is that the commands.yaml file must have a commands key. It holds the list of all the commands that you want to be available.

The second requirement is that every command must have the name, prefix and type keys.

Here's the breakdown of what each key does:

  • name: This is the name of your command. It should be unique and descriptive. It is used to refer to your command in various places, such as the !clear command.
  • prefix: This is the prefix for the command. You can, technically, leave it empty, but that's not recommended. In this example you can call the command in game using prefix + name: !solly.
  • type: This is the type of the command. It defines the command's behavior. The list of all available types is in the Command Types section of this document.

The rest of the requirements depend on the type of the command.

In this case we're using the command-global type, which means we're defining a command backed by an LLM provider. So we must include provider, and if that LLM provider requires a model, then model must be included as well.
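Sticking to just the required keys, a minimal command-global definition could look like this (an illustrative sketch; the provider and model values are taken from the sample above and may differ in your setup):

```yaml
commands:
  - name: solly
    prefix: '!'
    type: command-global
    provider: groq-cloud     # required for LLM-backed command types
    model: llama3-70b-8192   # required if the provider needs a model
```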

Options

Since we're using the command-global type, you are allowed to specify generation options which will be passed to the LLM provider. To do that, include the model_settings section. In that section you can specify any number of options, which will be passed to the provider. In this case we're using the seed, max_tokens and temperature options. The list of available settings is in the Model Settings section of this document.
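For instance, reusing the options from the sample above (a sketch; which options are actually honored depends on your provider):

```yaml
model_settings:
  max_tokens: 60    # cap the completion length
  seed: 69          # make generations reproducible, where supported
  temperature: 1.2  # higher values produce more varied output
```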

The settings key is used to tweak a command's behaviour without fundamentally changing what it does. The list of available settings is in the Command Settings section of this document.

Finally, the traits key defines the checks that the user, the prompt, or whatever else may be added in the future has to pass before the command executes. A trait can also significantly change the command's behaviour when a specified condition is met (empty-prompt-message-response in this case). In this example we're using the admin-only trait to make sure that only admins can execute this command, and the openai-moderated trait to make sure the command is only executed when the user prompt passes OpenAI moderation. Each command type has its own set of traits. The list of available traits is in the Command Traits section of this document.

Command Types

| Command type | Description |
| --- | --- |
| quick-query | Lets you fire off a single completion request to the LLM provider without saving a chat history. |
| command-global | Keeps track of user messages as well as AI responses. Every message from anyone is saved in one shared chat tied to that one command. |
| command-private | Similar to command-global: keeps track of user and AI messages, but each user has their own chat. No one can interrupt a user's private chat; it's tied to that user. |
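As a sketch, a per-user chat command only differs from the sample at the top of the page in its type (the name here is hypothetical; provider and model are reused from the sample):

```yaml
commands:
  - name: chat
    prefix: '!'
    type: command-private   # each user gets their own chat history
    provider: groq-cloud
    model: llama3-70b-8192
```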

Command Settings

Command settings change how a command behaves without fundamentally altering what it does.

Available settings per command type

| Command type | Available settings |
| --- | --- |
| quick-query, command-global, command-private | prompt-file, enable-soft-limit, soft-limit-length, message-suffix, greeting, allow-prompt-overwrite, allow-long, enable-hard-limit, hard-limit-length, allow-img, img-detail, img-screen-id |

Settings breakdown

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| prompt-file | string | N/A | Specifies the default prompt file for a chat instead of passing the \prompt argument in chat (see Prompts for the full list of available prompts). |
| enable-soft-limit | boolean | True | If set to True, asks the model to answer within a specified number of characters. The amount is specified by soft-limit-length. An alternative to the max_tokens generation option. |
| soft-limit-length | number | 128 | Sets the number of characters for enable-soft-limit. |
| message-suffix | string | N/A | Sets a message that will be appended to the end of the system message. |
| greeting | string | N/A | Sets a message that will be added as the first LLM response and WILL NOT be sent to the user. Generally used to steer the LLM in some direction by manually overwriting its first message. |
| allow-prompt-overwrite | boolean | True | Sets whether the user is allowed to overwrite prompt-file by passing the \prompt argument in chat. Useful if you want a command that always roleplays as a specific character. |
| allow-long | boolean | True | Sets whether the user is allowed to use the \l argument in chat. |
| enable-hard-limit | boolean | True | If set to True, the message will be stripped after the number of characters specified by hard-limit-length. Recommended to keep enabled to prevent the model from flooding chat with text and getting you muted in chat. More info |
| hard-limit-length | number | 300 | Sets the number of characters for enable-hard-limit. Approximately equal to 3 chat messages. |
| allow-img | boolean | False | Sets whether the user is allowed to use the \img argument in chat. |
| img-detail | string | low | Sets the image quality to either low or high. |
| img-screen-id | number | 1 | Sets the monitor id. |
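The image-related settings above can be combined like this (an illustrative fragment; whether your chosen provider actually accepts image input is an assumption):

```yaml
settings:
  allow-img: true    # let users pass the \img argument
  img-detail: low    # image quality: low or high
  img-screen-id: 1   # monitor id to use
```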

Command Traits

Traits define the checks that the user, the prompt, or whatever else may be added in the future has to pass before the command executes. A trait can also significantly change a command's behaviour when a specified condition is met (or unmet).

Available traits per command type

| Command type | Available traits |
| --- | --- |
| quick-query, command-global, command-private | openai-moderated, admin-only, disabled, deny-empty-prompt, empty-prompt-message-response |

Traits breakdown

| Trait | Arguments | Description |
| --- | --- | --- |
| openai-moderated | | Command will only execute if the user prompt was not flagged during the OpenAI moderation stage. |
| admin-only | | Only the user that launched this program has access to this command. |
| disabled | | Simply disables the command. |
| deny-empty-prompt | | Won't allow users to send empty prompts. |
| empty-prompt-message-response | msg: string, the message you want to be sent when the prompt is empty. | Sends the specified message (msg) if the user prompt is empty. |
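Putting it together, traits with and without arguments can be mixed in one list (a sketch based on the sample at the top of the page; note that deny-empty-prompt and empty-prompt-message-response handle the same situation in different ways, so the alternative is shown commented out):

```yaml
traits:
  - openai-moderated
  - deny-empty-prompt          # silently reject empty prompts
  # or, respond with a message instead of rejecting:
  # - empty-prompt-message-response:
  #     msg: "Hello, MAGGOT!"
```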

Model Settings

This section covers the settings for the LLM completion model.

They're not organized in one place yet, but you can check them by following those links.