Loading of models with context window greater than 4096 #1584
Unanswered · Darrshan-Sankar asked this question in Q&A · Replies: 0 comments
Referenced fix: ggml-org/llama.cpp#2402 (comment)

When I use LangChain, it exposes a `rope_freq_scale` parameter. Based on the fix in the linked comment, could anyone verify whether the `rope_freq_scale` in LangChain (which runs on llama-cpp-python) is the same as the `rope_scale` discussed there?
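For context, a minimal sketch of how the two names are commonly related, assuming the convention from the linked llama.cpp discussion: llama.cpp's `--rope-scale N` (linear context extension factor) corresponds to a frequency scale of `1/N`, which is what `rope_freq_scale` expresses in llama-cpp-python and LangChain's `LlamaCpp` wrapper. The commented-out constructor call is illustrative only, not verified against a specific LangChain version:

```python
def rope_freq_scale_from_rope_scale(rope_scale: float) -> float:
    """Convert a linear RoPE context-extension factor (llama.cpp's
    --rope-scale) into the frequency scale expected by rope_freq_scale.

    Assumption: rope_freq_scale = 1 / rope_scale, per the convention
    discussed in ggml-org/llama.cpp#2402."""
    return 1.0 / rope_scale


# Extending a 4096-token model to 8192 tokens is a 2x linear scale,
# so rope_freq_scale should be 0.5 under this convention.
print(rope_freq_scale_from_rope_scale(2.0))  # 0.5

# Hypothetical LangChain usage (parameter names assumed, not verified):
# from langchain_community.llms import LlamaCpp
# llm = LlamaCpp(
#     model_path="model.gguf",   # placeholder path
#     n_ctx=8192,                # extended context window
#     rope_freq_scale=0.5,       # = 1 / rope_scale
# )
```

If this convention holds, passing `rope_freq_scale=0.5` through LangChain should have the same effect as `--rope-scale 2` on the llama.cpp CLI, but it is worth confirming against the llama-cpp-python changelog for your installed version.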