library doesn't print verbose information #364
dhirajsuvarna started this conversation in General
Replies: 1 comment · 1 reply
-
Hi,
I am using llama-cpp-python to load an LLM model and want to see its running times during prediction.
For this, I am instantiating the model in the following manner -

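Something like the following minimal sketch, assuming the standard `llama_cpp.Llama` constructor (the model path here is a placeholder, and `verbose=True` is the constructor flag that controls llama.cpp's timing output):

```python
from llama_cpp import Llama

# verbose=True asks llama.cpp to print its load/sample/eval timing
# summary; note that it is written to stderr, not stdout.
llm = Llama(
    model_path="./models/ggml-model.bin",  # placeholder path
    verbose=True,
)

# Run a prediction; the timing summary should appear right after
# the call completes.
output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])
```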
but I am not able to see any output regarding the timings.
I am doing this on Google Colab.
Am I missing something?
-
@dhirajsuvarna Hi, what model are you using in `model_path`?