Groq is great, system role next? #105

Closed
teufortressIndustries opened this issue May 1, 2024 · 6 comments · Fixed by #107
Labels
enhancement New feature or request

Comments

@teufortressIndustries
Contributor

Groq's speed is impressive; the responses are instant, as if they were all pre-made. But it lacks personality.
While CUSTOM_PROMPT addresses this issue, it would be nice to use the system role to determine the model's behavior.
I believe that's more efficient than appending
[user message] (act like Soldier from the 2007 hit game Team Fortress 2. Soldier talks like blah blah blah... He likes buckets and blah blah blah...)
every single time.
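
Roughly what I mean, as a sketch against Groq's OpenAI-compatible chat API (using the groq Python SDK; the persona text and user message are just placeholders):

from groq import Groq  # official groq SDK; reads GROQ_API_KEY from the environment

client = Groq()

# The persona lives in the system role, set once per conversation...
SOLDIER_PERSONA = (
    "Act like Soldier from the 2007 hit game Team Fortress 2. "
    "He is loud and patriotic, and he likes buckets."
)

response = client.chat.completions.create(
    model="llama3-70b-8192",
    max_tokens=60,
    messages=[
        {"role": "system", "content": SOLDIER_PERSONA},
        # ...so each user message stays clean, with no persona boilerplate.
        {"role": "user", "content": "What is your favorite weapon?"},
    ],
)
print(response.choices[0].message.content)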

@dborodin836
Owner

Hi, @teufortressIndustries.

So, I'm currently working on revamping how we handle command settings, because the current setup is a total mess. Everything's stored in this config.ini file, and it's getting out of control fast. I'm trying to come up with a better way to manage all this, making it super customizable for each command and allowing users to create as many commands as they want. I've got a YAML sample of what I'm thinking:

commands:
  - name: '!solly'
    type: quick-query
    provider: open-ai
    model: gpt-3.5-turbo-1106
    model_settings:
      max_tokens: 60
    chat_settings:
      # This should resolve the issue.
      # It loads the Soldier prompt (prompts/soldier.txt) into the system role message.
      prompt-file: soldier
      message-suffix: 'End your response with "I love buckets btw."'
      enable-soft-limit: false
      allow-prompt-overwrite: false
      allow-long: false
    traits:
      - admin-only
      - openai-moderated
      - empty-prompt-message-response:
          - "Hello there! I'm ChatGPT, integrated into Team Fortress 2. Ask me anything!"

There are still a few things to work out before this feature is complete. I haven't decided what to do with chats, because there is one global chat plus one private chat for each user.
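
To make the prompt-file part concrete, here's a rough sketch of how a command could resolve it into a system-role message (nothing here is final; the helper below is hypothetical, not the actual modules.builder code):

from pathlib import Path

PROMPTS_DIR = Path("prompts")  # assumed layout: prompt-file 'soldier' -> prompts/soldier.txt

def build_messages(prompt_file: str, user_message: str, message_suffix: str = "") -> list[dict]:
    """Hypothetical helper: load the prompt file into the system role."""
    system_prompt = (PROMPTS_DIR / f"{prompt_file}.txt").read_text(encoding="utf-8")
    user_content = f"{user_message} {message_suffix}".strip()
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_content},
    ]

# e.g. build_messages("soldier", "What is your favorite weapon?",
#                     'End your response with "I love buckets btw."')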

Anyway, if you want to experiment with it, the available traits and settings are in the modules.builder module.

@dborodin836 dborodin836 added the enhancement New feature or request label May 2, 2024
@teufortressIndustries
Contributor Author

Hello, @dborodin836.
This is nice. Giving people bricks instead of fully built houses makes things more flexible and requires fewer updates for each desired feature. I'll try the new branch.

@teufortressIndustries
Contributor Author

teufortressIndustries commented May 2, 2024

Hmm, I tried the built-in !solly command, but I changed it a bit: I used Groq instead of OpenAI, removed the admin-only trait, and switched the type to global-chat (which is probably why it doesn't work, since you said you haven't decided anything about chats yet). However, it sometimes responds like a generic AI instead of acting like Soldier, and its responses run well past the set max_tokens: 60.

@dborodin836
Owner

quick-query was the only type that was working. I've (hopefully) fixed global and private chats, as well as the !clear command, but I've barely done any testing yet. 🤞

@teufortressIndustries
Contributor Author

teufortressIndustries commented May 3, 2024

I haven't tested them closely, but command-global seems to work fine; the AI coherently responded like Soldier.
But when I changed the empty-prompt-message-response trait to something Soldier would say and tried the other !heavy command (which I left unedited), the AI yelled "MAGGOTS!" at the beginning of the response for some reason. Maybe it wasn't the empty-prompt-message-response at all, but a problem with the chat history or the model I was using (llama3-70b-8192).

@dborodin836
Owner

I didn't manage to find what caused that, but I did find a few other bugs along the way. It's still possible it was just a coincidence, but it's weird nonetheless. I'll look into it more deeply once the documentation is finished.

@dborodin836 dborodin836 linked a pull request Jun 18, 2024 that will close this issue