Reasoning Model Better <think> Integration #1250
RockasMockas started this conversation in Ideas
-
I'm not sure why you'd want to do this. Most LLMs are stateless by design, so you need to pass in the entire conversation history with every request.
-
Something that I think could be a general benefit for anyone using reasoning models (local, or via API) would be a config option that automatically omits all content in `<think>` tags from subsequent calls to the LLM in the same chat. This cuts down how much context is used and makes the more verbose models a bit better to work with. Would you be interested in having this integrated into the plugin if I submitted a PR?
(Having a way to customize the … text visualization (i.e. grey coloring) could also be nice, though based on the comments on the fidget integration I presume you're trying to be pretty style-agnostic in general.)
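For illustration, here's a minimal sketch of the filtering I have in mind, assuming an OpenAI-style message list (the function name `strip_reasoning` is hypothetical, not part of the plugin):

```python
import re

# Matches a complete <think>...</think> block, including newlines inside it.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_reasoning(messages):
    """Return a copy of the chat history with <think>...</think> blocks
    removed from assistant messages before resending to the LLM."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            content = THINK_RE.sub("", msg["content"]).strip()
            msg = {**msg, "content": content}
        cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "<think>Simple arithmetic.</think>4"},
]
print(strip_reasoning(history)[1]["content"])  # -> 4
```

The key point is that the filtering only applies to history being sent back to the model; the full text (including the reasoning) would still be shown in the chat buffer.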