O1 parsing code replace fix #686
base: main
Conversation
Hello, avante.nvim has refactored the prompt and is now using aider's prompt. It no longer relies on line numbers returned by the LLM, so it seems this issue won't occur anymore. You can try updating and testing it.
Awesome, I saw the new change and it's great architecture-wise :) To fix this, I think the best option is either to change the `on_chunk` function to handle non-streaming requests (the function fails to work in the OpenAI `parse_response_without_stream` function), or to separate the searching/replacing from `on_chunk` itself, allowing it to work both during streaming and non-streaming in the same place in the code (roughly as sketched below). I will try to work on one of the options now to let the O1 models I use work fluently with it too, and will update on progress :)
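A rough sketch of that second option, under hypothetical names: `transform_response`, `transform_and_render`, and `render` are illustrative stand-ins, not avante.nvim's actual API. The idea is that all search/replace parsing lives in one helper shared by both paths:

```lua
-- Hypothetical stand-in for the real search/replace parser.
local function transform_response(full_response)
  -- The real code would resolve search/replace blocks here.
  return { content = full_response }
end

-- All parsing lives in one helper, so streaming and non-streaming
-- requests share the exact same code path.
local function transform_and_render(self, full_response, opts)
  local transformed = transform_response(full_response)
  self:render(transformed.content, { stream = opts.stream, scroll = true })
end

-- Streaming providers (Claude etc.): called per chunk with the
-- accumulated response so far.
local function on_chunk(self, accumulated)
  transform_and_render(self, accumulated, { stream = true })
end

-- Non-streaming providers (O1): called once with the full response.
local function on_complete(self, response)
  transform_and_render(self, response, { stream = false })
end
```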
@yetone

```lua
if is_first_chunk then
  is_first_chunk = false
  self:update_content(content_prefix .. chunk, { stream = false, scroll = true })
  return
end
```

Since O1 models give the result in one chunk, this code was just returning the raw content without replacing the relevant lines for parsing. I've basically removed it, as this doesn't affect Claude and the other streaming models, and it lets the O1 models work as they are meant to.
```diff
@@ -1214,11 +1218,7 @@ function Sidebar:create_input(opts)
       transformed_response = transformed.content
       prev_is_searching = transformed.is_searching
       local cur_displayed_response = generate_display_content(transformed)
-      if is_first_chunk then
-        is_first_chunk = false
-        self:update_content(content_prefix .. chunk, { stream = false, scroll = true })
```
If this logic is removed, `**Generating response ...**\n` will always exist in the response buffer.
From all my tests it did remove the `**Generating response ...**\n` line. Did I miss something?
This is a bug, thank you for helping to find it. I will fix it next.
The problem is that if this code is kept in one form or another, then `transformed_response` is not printed out, which means the result of O1 and other non-stream models is never parsed. Now that I've removed this line it does work, so there is no bug, is there? 😂
There is currently no issue, mainly because I did not update the response buffer using the stream method. This was my oversight. I will use the stream method to update the response buffer next, so the `is_first_chunk` check should be retained.
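For illustration, one way those two requirements could coexist (a minimal sketch only, reusing `cur_displayed_response` and `content_prefix` from the diff above; not necessarily how the actual fix will look):

```lua
-- Sketch: keep the first-chunk guard, but write the parsed display
-- content instead of the raw chunk, so O1-style single-chunk
-- responses get transformed too.
if is_first_chunk then
  is_first_chunk = false
  -- Replace the "**Generating response ...**" placeholder with the
  -- already-transformed content rather than the unprocessed chunk.
  self:update_content(content_prefix .. cur_displayed_response, { stream = false, scroll = true })
end
```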
```diff
@@ -282,18 +282,22 @@ local function extract_code_snippets(response_content)
   local explanation = ""

   for idx, line in ipairs(vim.split(response_content, "\n")) do
+    -- Remove bold formatting markers for consistent matching
+    local clean_line = line:gsub("%*%*", "")
```
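As a standalone illustration of what that cleaning step does (only the `gsub` mirrors the diff; the `Replace Lines` pattern below is an assumed example of the downstream match):

```lua
-- Bold-wrapped marker as O1 sometimes emits it:
local line = "**Replace Lines: 2-23**"

-- The cleaning step added in the diff above:
local clean_line = line:gsub("%*%*", "")
print(clean_line) --> Replace Lines: 2-23

-- A downstream match (pattern assumed for illustration):
local first, last = clean_line:match("Replace Lines: (%d+)%-(%d+)")
print(first, last) --> 2    23
```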
The current `Replace Lines: 2-23` is generated by code, not returned by the LLM, so no special handling is needed, because it will definitely be `Replace Lines: 2-23` and there won't be any other situation.
O1 models sometimes return the following format when marking lines to replace (e.g. `**Replace Lines: 2-23**`). This format doesn't allow completion to work correctly, so I've added a fix that cleans the current line so that `Replace Lines` is parsed correctly on O1 models too. Since the new code only cleans the `**` and `` ` `` characters, it still works as expected with Claude and all other models.
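For instance (a quick check of my own, not code from the PR), a line without any markers passes through that cleanup untouched:

```lua
-- Claude and the other streaming models emit no bold markers, so the
-- cleanup is a no-op for their lines:
local claude_line = "Replace Lines: 2-23"
local cleaned = claude_line:gsub("%*%*", ""):gsub("`", "")
print(cleaned == claude_line) --> true
```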