This started happening quite recently: the output from the LLM no longer streams into the chat buffer, and I have to wait for it all to show up at once. I'm using the copilot adapter with the `gpt-4o` model and have no complicated config. I've tried reverting to older versions of codecompanion but no luck. Any help in debugging this issue is appreciated. I'm on Neovim 0.11.
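For reference, a minimal sketch of roughly the setup being described (an assumption, not the poster's actual config): the copilot adapter selected for the chat and inline strategies, with otherwise default options.

```lua
-- A minimal sketch of the kind of setup described above (assumed,
-- not the poster's actual config): copilot adapter, default options.
require("codecompanion").setup({
  strategies = {
    chat = { adapter = "copilot" },
    inline = { adapter = "copilot" },
  },
})
```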
I'm not experiencing this (and I assume many others aren't either), so I'd advise paring back with the minimal.lua file in the repository. Are you changing the …
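The idea behind paring back is to launch Neovim with nothing but the plugin and its dependencies on the runtime path. A rough sketch of such a repro config (hypothetical paths; prefer the actual minimal.lua shipped in the codecompanion.nvim repository):

```lua
-- Rough sketch of a pared-back repro config (paths are assumptions).
-- Save as repro.lua and launch with: nvim --clean -u repro.lua
vim.opt.runtimepath:append(vim.fn.expand("~/repro/codecompanion.nvim")) -- assumed clone location
vim.opt.runtimepath:append(vim.fn.expand("~/repro/plenary.nvim"))       -- required dependency
require("codecompanion").setup() -- defaults only, no user config
```

If streaming works under a setup like this, the problem is in the user config rather than the plugin.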
So interestingly... that seems to be a `gpt-4o` issue - it doesn't support streaming. Try the `gpt-4o-2024-11-20` model.

Edit: Seems an odd decision on GitHub's part.
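A sketch of how one might pin the suggested model, based on codecompanion.nvim's documented adapter-extension pattern (the exact schema may differ between plugin versions):

```lua
-- Sketch: override the copilot adapter's default model so requests
-- use a model that supports streaming (per the reply above).
require("codecompanion").setup({
  adapters = {
    copilot = function()
      return require("codecompanion.adapters").extend("copilot", {
        schema = {
          model = {
            default = "gpt-4o-2024-11-20", -- model suggested above
          },
        },
      })
    end,
  },
})
```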