How do I specify the chat format for the phi3.5 model? It seems like no one knows #1729
Unanswered
suhyun01150 asked this question in Q&A
Replies: 1 comment
-
Hello! I have been using it without specifying the prompt format and have obtained interesting results. https://github.com/controlecidadao/samantha_ia The video there shows Phi 3.5 vs Gemma 2 in an intelligence challenge about human nature, run through the interface, with Llama 3.1 playing the role of judge.
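For context, a minimal sketch of what "using it without specifying the prompt format" can look like in llama-cpp-python: the model is called as a raw text completion, so no chat template is applied at all. The model path below is a placeholder.

```python
from llama_cpp import Llama

# Placeholder path; point this at your own GGUF file.
llm = Llama(model_path="./Phi-3.5-mini-instruct-Q4_K_M.gguf", n_ctx=4096)

# Raw completion: the prompt text is sent to the model as-is,
# with no chat template applied.
output = llm("Q: What is human nature? A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```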
-
After setting `chat_format` to `None`, can I run phi3, gemma, etc. exactly as described in the README?
```python
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant who perfectly describes images."},
        {"role": "user", "content": "Describe this image in detail please."},
    ]
)
```
If it works, how? When I load the model, I see output like this. Does it automatically recognize the format?

```
tokenizer.chat_template str = {% for message in messages %}{% if me...
```
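For reference: when `chat_format` is left as `None`, llama-cpp-python appears to fall back to the Jinja template stored under `tokenizer.chat_template` in the GGUF metadata (the line shown above) and builds a chat formatter from it, which is why `create_chat_completion` can work without an explicit format. A hedged sketch for inspecting that embedded template (the model path is a placeholder):

```python
from llama_cpp import Llama

# Placeholder path; use any GGUF whose conversion preserved the chat template.
llm = Llama(model_path="./Phi-3.5-mini-instruct-Q4_K_M.gguf")

# GGUF metadata is exposed as a string-to-string dict; the Jinja template
# under "tokenizer.chat_template" is what create_chat_completion falls
# back to when chat_format is None.
template = llm.metadata.get("tokenizer.chat_template")
print(template)
```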
somebody help me~~
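And if the auto-detected template is wrong or missing, one of the format names registered in `llama_cpp.llama_chat_format` (e.g. `"chatml"`) can be forced explicitly, again with a placeholder path:

```python
from llama_cpp import Llama

# Forcing a specific built-in chat format overrides the embedded
# tokenizer.chat_template; "chatml" is one of the registered names.
llm = Llama(
    model_path="./Phi-3.5-mini-instruct-Q4_K_M.gguf",  # placeholder path
    chat_format="chatml",
)
```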