- See e.g. how it is done here: https://github.com/woheller69/LLAMA_TK_CHAT/blob/main/LLAMA_TK_GUI.py
- Inference runs in a separate thread, so it does not block the UI. When I want to stop inference, I raise an exception in that thread. Not ideal, but there is obviously no other way at the moment.
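The "raise an exception in the thread" trick mentioned above can be done with the CPython C-API call `PyThreadState_SetAsyncExc`. A minimal sketch (the `fake_inference` loop is a hypothetical stand-in for a token-generation loop; this is not the linked project's actual code). Note the important caveat: the exception only lands when the thread next executes Python bytecode, so it cannot interrupt a single long-running native call.

```python
import ctypes
import threading
import time

class StopInference(Exception):
    """Raised inside the worker thread to abort generation."""

def async_raise(thread: threading.Thread, exc_type) -> None:
    """Schedule exc_type to be raised inside `thread`.

    Takes effect only when the thread returns to the Python
    interpreter loop, i.e. between tokens, not mid-C-call."""
    tid = ctypes.c_ulong(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        tid, ctypes.py_object(exc_type))
    if res > 1:
        # More than one thread state affected: undo to be safe.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)

tokens = []

def fake_inference():
    # Stand-in for a generation loop; each iteration returns to
    # Python bytecode, which is where the async exception lands.
    try:
        while True:
            tokens.append("tok")
            time.sleep(0.01)
    except StopInference:
        pass  # clean abort, thread exits normally

worker = threading.Thread(target=fake_inference)
worker.start()
time.sleep(0.1)            # let it "generate" a few tokens
async_raise(worker, StopInference)
worker.join(timeout=2)
print(worker.is_alive())   # False once the exception has landed
```

Because the exception is only delivered between bytecode instructions, this works well for per-token loops but cannot cancel a single blocking call into native code.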
- When loading a model, the UI freezes until the process is done. I tried using a thread, but I don't know how to interrupt/cancel the load. How do you do this?
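One common pattern for this (a sketch, not anyone's confirmed solution: the `load_model` function below is a hypothetical stand-in for an uninterruptible native loader such as a llama model constructor): run the load in a daemon thread and hand the result back through a queue. The native call itself cannot be cancelled mid-way from Python, so "cancel" means setting a flag and abandoning the result while the UI stays responsive.

```python
import queue
import threading
import time

# Hypothetical stand-in for a slow, uninterruptible native model
# load (e.g. mmap + weight loading). We can only let it run to
# completion; we cannot abort it part-way from Python.
def load_model(path):
    time.sleep(0.2)                   # simulated disk work
    return f"model<{path}>"

result_q = queue.Queue(maxsize=1)
cancelled = threading.Event()         # set this to "cancel"

def loader(path):
    model = load_model(path)          # blocks, but off the UI thread
    if cancelled.is_set():
        return                        # abandon the result instead
    result_q.put(model)

t = threading.Thread(target=loader, args=("ggml-model.gguf",), daemon=True)
t.start()

# In a Tkinter app this polling would live in a root.after(100, poll)
# callback so the event loop keeps running; here we poll inline.
model = None
while model is None and not cancelled.is_set():
    try:
        model = result_q.get(timeout=0.05)
    except queue.Empty:
        pass  # the UI would stay responsive during these gaps

print(model)  # model<ggml-model.gguf>
```

The daemon flag means an abandoned load no longer blocks interpreter shutdown; the thread just dies with the process. That is the practical answer to "how do I cancel": you don't interrupt the native call, you stop waiting for it.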