Replies: 2 comments 2 replies
-
This won't be something I'll be adding anytime soon, for the reason that I use GitHub Copilot and this feature would be complex and time-consuming. If you wanted to discuss making this a PR, I would be more than happy to accept it. My thinking is that it would likely require a daemon which sends the context to the LLM, but how this would be tied into a Chat Buffer I'm undecided on.
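To make the daemon idea concrete, here is a minimal sketch (in Python, with the actual LLM call stubbed out and all names hypothetical): a small local server that accepts editor context as one JSON line per request and acknowledges it, which is roughly the shape a context-forwarding daemon could take.

```python
import json
import socketserver
import threading

class ContextHandler(socketserver.StreamRequestHandler):
    """Handle one request: a single JSON line carrying editor context."""

    def handle(self):
        payload = json.loads(self.rfile.readline())
        # A real daemon would attach payload["context"] to the LLM request
        # here; this sketch just acknowledges what was received.
        reply = {"ok": True, "chars": len(payload.get("context", ""))}
        self.wfile.write((json.dumps(reply) + "\n").encode())

def start_daemon(port: int = 0) -> socketserver.TCPServer:
    """Start the daemon on localhost; port 0 picks a free port."""
    server = socketserver.TCPServer(("127.0.0.1", port), ContextHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

How the editor (or a Chat Buffer) would talk to this daemon, and what the reply should actually contain, is exactly the undecided part.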
-
I think that in early versions, a special "@" operator could be introduced in the chat or inline strategies. It would use the capabilities of LSP or Tree-sitter to let users manually add the necessary information to the context given to the model, in the form of @symbol or @function. I think this would be very helpful for most tasks related to writing tests or understanding code. At the same time, this capability could serve as a basis for automatically obtaining the necessary context in the future.
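As a rough illustration of the "@" operator, here is a sketch in Python where a hypothetical `resolve` callback stands in for the real LSP/Tree-sitter lookup: mentions are pulled out of the prompt and their definitions appended to the context.

```python
import re

# Matches "@symbol"-style mentions, e.g. "@parse_config" or "@Config.load".
MENTION = re.compile(r"@([A-Za-z_][A-Za-z0-9_.]*)")

def extract_mentions(prompt: str) -> list[str]:
    """Return the symbol names mentioned with the "@" operator."""
    return MENTION.findall(prompt)

def expand_prompt(prompt: str, resolve) -> str:
    """Append resolved definitions to the prompt.

    `resolve` is a caller-supplied function mapping a symbol name to its
    source text (in a real plugin, an LSP "textDocument/definition" lookup
    or a Tree-sitter query); names that cannot be resolved are skipped.
    """
    blocks = []
    for name in extract_mentions(prompt):
        source = resolve(name)
        if source:
            blocks.append(f"--- definition of {name} ---\n{source}")
    return prompt + ("\n\n" + "\n\n".join(blocks) if blocks else "")
```

The split between extraction and resolution is deliberate: the same mention syntax could later be fed by an automatic context gatherer instead of manual typing.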
-
Currently, manually selecting context for the LLM is cumbersome and inefficient, and in some scenarios it simply cannot produce good results. For example, when writing test code for a given piece of code, the result often depends on the current function's context and the various function or struct definitions it relies on. Since those parts are not provided as context, the LLM can only guess at them. Perhaps LSP and Tree-sitter could be used to supply more comprehensive context in these scenarios.
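To sketch what "gathering the definitions a function relies on" could look like, here is a small example using Python's own `ast` module as a stand-in for Tree-sitter/LSP (the function name and approach are illustrative, not anything the plugin provides): given a module's source and a target function, it collects the top-level definitions that the target references, which is exactly the context you would want to send along when asking for tests.

```python
import ast

def dependency_context(source: str, target: str) -> list[str]:
    """Return source of top-level defs/classes referenced by `target`."""
    tree = ast.parse(source)
    # Map every top-level function/class name to its source text.
    top_level = {
        node.name: ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.ClassDef))
    }
    target_node = next(n for n in tree.body
                       if isinstance(n, ast.FunctionDef) and n.name == target)
    # Every bare name the target function mentions anywhere in its body.
    referenced = {n.id for n in ast.walk(target_node) if isinstance(n, ast.Name)}
    return [src for name, src in top_level.items()
            if name != target and name in referenced]
```

A real implementation would also need to follow imports and transitive dependencies, which is where LSP's project-wide view would beat a single-file parse.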