Getting started

Rickard Edén edited this page Nov 4, 2024 · 5 revisions

Chances are you already have a supported backend: either KoboldCpp or an OpenAI-API-compatible LLM server (which covers most of them) will* work. Select your preference in llm_config.yaml, then configure the backend in one of the backend_*.yaml files. Among the OpenAI-compatible options, backend_llama_cpp.yaml is probably the best fit if you are running locally, while backend_openai.yaml is for the actual OpenAI API.

*No guarantees.
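To make the selection concrete, the relevant part of llm_config.yaml might look roughly like this. This is a hedged sketch: the key names below are illustrative placeholders, so check the shipped file for the actual ones.

```yaml
# Illustrative sketch of llm_config.yaml backend selection.
# Key names are assumptions; consult the file shipped with the repo.
BACKEND: kobold_cpp   # or llama_cpp / openai, matching a backend_*.yaml file
```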

  • Python required (at least 3.8; 3.10 is recommended).
  • Download the repo, either with `git clone git@github.com:neph1/LlamaTale.git` or as a zip. The master branch should be stable.
  • Run `pip install -r requirements.txt`
  • Start your backend (KoboldCpp or an OpenAI-compatible server). It is expected on port 5001 by default; change this in llm_config.yaml if needed.
  • Start the demo with `python -m stories.prancingllama.story`
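Before launching the demo, it can help to confirm the backend is actually listening on the expected port. The helper below is a hypothetical sketch (not part of LlamaTale) that probes the default KoboldCpp port using only the standard library:

```python
# Hypothetical pre-flight check: verify the LLM backend responds before
# starting the story. Not part of LlamaTale itself.
import urllib.request
import urllib.error


def backend_url(host="localhost", port=5001):
    # 5001 is the default port mentioned in llm_config.yaml.
    return f"http://{host}:{port}"


def backend_is_up(url, timeout=2.0):
    # Returns True if anything answers at the URL, False on
    # connection errors or timeouts.
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False
```

If `backend_is_up(backend_url())` returns False, start (or reconfigure) the backend before running the story.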

Optional:

  • (Recommended) If you'd rather play in a browser, add the `--web` flag and connect to http://localhost:8180/tale/story
  • If you have a v2 character card and want to skip character creation, add `--character path_to_character`
  • If you want to load a v2 character as a follower, type `load_character path_to_character_relative_to_story_folder` at the game prompt
  • See https://github.com/neph1/LlamaTale/wiki/Creating-a-character if you want to make a 'full-featured' LlamaTale character.
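For reference, a v2 character card is a JSON file roughly in this shape. This is a minimal sketch based on the chara_card_v2 spec; the name and field values are made up for illustration, and LlamaTale may read additional fields.

```json
{
  "spec": "chara_card_v2",
  "spec_version": "2.0",
  "data": {
    "name": "Example Guide",
    "description": "A grizzled mountain guide.",
    "personality": "Gruff but loyal.",
    "scenario": "Snowed in at a remote inn.",
    "first_mes": "What brings you up here in this weather?",
    "mes_example": ""
  }
}
```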