KeyError: 'llama-3' in /fastchat/conversation.py when running ToolGen #4
Hi, the released version of fastchat does not contain the template for Llama-3. You may need to install it from the source code. The following snippet shows how to do it.
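A minimal sketch of installing FastChat from its GitHub source, assuming the llama-3 template is present on the main branch of lm-sys/FastChat; the exact install command the maintainer intended may differ:

```bash
# Replace the PyPI release of fschat (0.2.36) with the current source,
# which includes the llama-3 conversation template.
pip uninstall -y fschat
pip install git+https://github.com/lm-sys/FastChat.git
```

After reinstalling, `get_conv_template("llama-3")` should resolve instead of raising the KeyError.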
Thanks for your answer! Then everything should be fine.
Here is another question:
1. First, run inference on the queries to generate trajectories. For ToolGen, this is `inference_toolgen_pipeline_virtual.sh`.
2. Then convert the trajectory format: `scripts/convert_answer/run_convert_answer.sh`.
3. Run `scripts/pass_rate/run_pass_rate.sh` for the pass-rate evaluation.
4. Run `scripts/preference/run_preference.sh` for the win-rate evaluation.

Note that running the evaluation will cost GPT-4 credits.
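A sketch of that command sequence, assuming the scripts are run from the repository root and that each script's internal settings (model paths, output directories, API keys) are already configured; the script names come from the steps above:

```bash
# 1) Generate trajectories with the ToolGen inference pipeline
bash inference_toolgen_pipeline_virtual.sh

# 2) Convert the trajectory format
bash scripts/convert_answer/run_convert_answer.sh

# 3) Pass-rate evaluation (consumes GPT-4 credits)
bash scripts/pass_rate/run_pass_rate.sh

# 4) Win-rate (preference) evaluation (consumes GPT-4 credits)
bash scripts/preference/run_preference.sh
```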
You can use other scripts in
Hi! Really wonderful work.
I set up the environment following your requirements.txt and tried to run the code below:
```python
import json

from OpenAgent.agents.toolgen.toolgen import ToolGen
from OpenAgent.tools.src.rapidapi.rapidapi import RapidAPIWrapper

# Load the ToolBench key
with open("keys.json", 'r') as f:
    keys = json.load(f)
toolbench_key = keys['TOOLBENCH_KEY']

# Wrapper around the RapidAPI / ToolBench tool server
rapidapi_wrapper = RapidAPIWrapper(
    toolbench_key=toolbench_key,
    rapidapi_key="",
)

# ToolGen agent with atomic tool indexing, using the released checkpoint
toolgen = ToolGen(
    "reasonwang/ToolGen-Llama-3-8B",
    indexing="Atomic",
    tools=rapidapi_wrapper,
)

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "I'm a football fan and I'm curious about the different team names used in different leagues and countries. Can you provide me with an extensive list of football team names and their short names? It would be great if I could access more than 7000 team names. Additionally, I would like to see the first 25 team names and their short names using the basic plan."}
]

toolgen.restart()
toolgen.start(
    single_chain_max_step=16,
    start_messages=messages
)
```
I got the error:
```
Traceback (most recent call last):
  File "/data2/ToolGen/run_toolgen_local.py", line 24, in <module>
    toolgen.start(
  File "/data2/ToolGen/OpenAgent/agents/base.py", line 54, in start
    out_node = self.do_chain(self.tree.root, single_chain_max_step)
  File "/data2/ToolGen/OpenAgent/agents/base.py", line 148, in do_chain
    new_message, error_code, total_tokens = self.get_agent_response(now_node)
  File "/data2/ToolGen/OpenAgent/agents/base.py", line 73, in get_agent_response
    new_message, error_code, total_tokens = self.parse(tools=self.io_func.tools,
  File "/data2/ToolGen/OpenAgent/agents/toolgen/toolgen.py", line 467, in parse
    conv, roles = self.convert_to_fastchat_format(
  File "/data2/ToolGen/OpenAgent/agents/toolgen/toolgen.py", line 357, in convert_to_fastchat_format
    conv = get_conv_template(self.template)
  File "/home/anaconda3/envs/toolgen/lib/python3.10/site-packages/fastchat/conversation.py", line 415, in get_conv_template
    return conv_templates[name].copy()
KeyError: 'llama-3'
```
I am using your Hugging Face model.
I am confused. My environment was set up with requirements.txt, where fschat==0.2.36.
Any help?