
[Bugfix][Frontend] Fixed issue where requests with duplicate request IDs might be sent to EngineCore simultaneously #15326


Open

hidva wants to merge 4 commits into base: main
Conversation

@hidva (Contributor) commented Mar 22, 2025

Currently, vLLM allows users to submit requests with duplicate request IDs, while many modules in EngineCore key their state by request ID (for example, KVCacheManager.req_to_blocks). This relies on the assumption that the Frontend always aborts an existing request before adding a new one with the same request ID, i.e. EngineCore expects to see:

# req1, req2 have the same request_id.
(EngineCoreRequestType.ADD, req1(request_id=RequestId))
(EngineCoreRequestType.ABORT, req1)
(EngineCoreRequestType.ADD, req2(request_id=RequestId))

Currently, AsyncLLM enforces this (a duplicate request ID must be aborted before it can be added again) through AsyncLLM._add_request -> OutputProcessor.add_request:

# OutputProcessor.add_request
request_id = request.request_id
if request_id in self.request_states:
    raise ValueError(f"Request id {request_id} already running.")

# AsyncLLM.abort
async def abort(self, request_id: str) -> None:
    """Abort RequestId in OutputProcessor and EngineCore."""

    request_ids = self.output_processor.abort_requests((request_id,))
    # BUG!
    # This operation is not atomic: there is a window during which the request
    # has already been removed from OutputProcessor.request_states, but the
    # corresponding ABORT has not yet been sent to EngineCore.
    await self.engine_core.abort_requests_async(request_ids)

    if self.log_requests:
        logger.info("Aborted request %s.", request_id)

We can easily reproduce the potential bug by widening this window, e.g. with an await asyncio.sleep(13) inserted at the BUG point:

[screenshot of the resulting duplicate-request failure omitted]
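For completeness, here is a reproduction sketch at the API level (the model name, prompt, timings, and AsyncLLM construction below are illustrative assumptions, and it presumes the sleep(13) patch above is applied): abort a running request and resubmit the same request_id while the ABORT to EngineCore is still pending.

import asyncio

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.v1.engine.async_llm import AsyncLLM

async def main():
    engine = AsyncLLM.from_engine_args(AsyncEngineArgs(model="facebook/opt-125m"))
    request_id = "dup-req"  # deliberately reused below

    async def run_once():
        # Drain one generation for the (re)used request_id.
        async for _ in engine.generate("Hello", SamplingParams(max_tokens=64),
                                       request_id):
            pass

    first = asyncio.create_task(run_once())
    await asyncio.sleep(0.5)            # let the first request reach EngineCore
    aborter = asyncio.create_task(engine.abort(request_id))
    await asyncio.sleep(0.5)            # abort() is now parked in the widened window:
                                        # OutputProcessor.request_states has dropped the
                                        # ID, but EngineCore has not seen the ABORT yet
    second = asyncio.create_task(run_once())  # the duplicate ADD is accepted and can
                                              # reach EngineCore before the ABORT
    await asyncio.gather(aborter, second, return_exceptions=True)
    first.cancel()

asyncio.run(main())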

To fix this issue, we categorize completed requests into two types, each with its own cleanup path:

  • aborted requests, handled by handle_abort_reqs
  • finished requests, handled by _handle_finished_reqs

We also ensure that the set of requests visible to the Frontend always includes the set of requests visible to EngineCore (see the sketch below).
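At a high level, one way to realize this invariant is to queue the ABORT to EngineCore before the frontend forgets the request, so that a later ADD reusing the same ID is always ordered behind the ABORT. The following is a minimal sketch of that ordering only, assuming in-order delivery of frontend-to-EngineCore messages and using the helper names from this PR (flatten_req_to_abort, free_aborted_reqs) with assumed signatures; it is not the literal diff.

async def abort(self, request_id: str) -> None:
    """Abort request_id in EngineCore and then release frontend state."""
    # Resolve parent/child request IDs without dropping frontend state yet,
    # so the request stays visible to the Frontend while the ABORT is queued.
    request_ids = self.output_processor.flatten_req_to_abort((request_id,))

    # Queue the ABORT to EngineCore first; any subsequent ADD that reuses
    # this request_id is therefore processed after the ABORT.
    await self.engine_core.abort_requests_async(request_ids)

    # Only now free the frontend-side state; the cleanup is idempotent, so
    # racing with the finished-request path is harmless.
    self.output_processor.free_aborted_reqs(request_ids)

    if self.log_requests:
        logger.info("Aborted request %s.", request_id)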


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Mar 22, 2025
@robertgshaw2-redhat (Collaborator) commented
Thanks for your contribution! I agree that this is a race condition. Appreciate you digging in

self.handle_abort_reqs(request_ids_to_abort)
return request_ids_to_abort

def flatten_req_to_abort(self, req_ids: Iterable[str]) -> list[str]:
Collaborator
Can we call this something more descriptive? get_parent_and_children_reqs?

Member
It should probably also reflect the fact that the parent request is being removed.

Contributor Author

the fact that the parent request is being removed

Yes. Do you have any good suggestions? How about try_pop_parent?
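For context, a rough sketch of what this helper does (the parent_requests and child_requests names are illustrative assumptions; with parallel sampling, one user request fans out into child requests, and this call pops the parent entry while collecting every engine-side ID to abort):

def flatten_req_to_abort(self, req_ids: Iterable[str]) -> list[str]:
    request_ids_to_abort: list[str] = []
    for req_id in req_ids:
        request_ids_to_abort.append(req_id)
        # Pop the parent bookkeeping entry, if any, and include its children,
        # since those are the IDs EngineCore actually tracks.
        parent = self.parent_requests.pop(req_id, None)
        if parent is not None:
            request_ids_to_abort.extend(parent.child_requests)
    return request_ids_to_abort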

@robertgshaw2-redhat (Collaborator) commented
Thanks a ton! I reviewed the implementation in detail and you have fixed the problem! Just left some minor comments about naming the functions and comments. Ping me on slack when this is ready!

@njhill (Member) commented Mar 26, 2025

Thanks for this @hidva; I agree with @robertgshaw2-redhat's comments.

However, I was already thinking it might be more robust to have the engine return finished notifications for all requests, including those whose abort is initiated from the front-end process. Currently it just stops sending any outputs for these, but we could change it so that a terminating RequestOutput with an "aborted" finish_reason is returned in these cases.

Then we could clean up the output-processor request states based on these responses rather than with the current logic, which is a bit disjoint.

Another reason to do this is that in addition to the leak that you pointed out, there may still be a bug where such aborted requests aren't captured properly in the metrics, because _update_stats_from_finished never gets called for them.
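As a rough illustration of that direction (a sketch only; the real process_outputs signature, state fields, and _update_stats_from_finished arguments differ), the output processor would then free state whenever it sees a final output, whatever the finish reason:

def process_outputs(self, engine_core_outputs) -> None:
    # Every request now terminates with exactly one final output, including
    # requests aborted from the front-end, so cleanup and stats accounting
    # can live in this single place.
    for out in engine_core_outputs:
        req_state = self.request_states.get(out.request_id)
        if req_state is None:
            continue  # already cleaned up; duplicates stay harmless
        if out.finish_reason is not None:  # e.g. "stop", "length", "aborted"
            self._update_stats_from_finished(req_state, out.finish_reason)
            del self.request_states[out.request_id]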

@mergify mergify bot added the tpu Related to Google TPUs label Mar 27, 2025
@hidva (Contributor Author) commented Mar 27, 2025

Apologies for the delay; I was on vacation until now. I will continue to follow up on this PR.

@hidva (Contributor Author) commented Mar 27, 2025

the engine return finished notifications for all requests,

However, there are indeed some scenarios where only the frontend can notify the engine to stop outputting, such as the presence of a stop string or a client disconnect. If we let the engine return finished notifications for all requests, how would the engine become aware of external conditions such as client disconnection?

_update_stats_from_finished never gets called for them.

Yes, we should add a call to _update_stats_from_finished within handle_abort_reqs, and at the same time, ensure that _update_stats_from_finished is idempotent. This way, requests that are aborted due to client disconnection can also be captured properly in the metrics.

In other words, after introducing the concepts of aborted requests and finished requests, we also introduced two interfaces: finish_request() (renamed to free_finished_reqs) and handle_abort_reqs() (renamed to free_aborted_reqs). All finished requests must ultimately call free_finished_reqs() to complete resource cleanup, and likewise all aborted requests must call free_aborted_reqs(). All resource cleanup should be idempotent. See the commit "Unified the resource cleanup for aborted and finished requests".
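Condensed, the idempotent-cleanup pattern described above looks roughly like the sketch below (free_finished_reqs and free_aborted_reqs are the names from this PR; the shared helper, its body, and the simplified _update_stats_from_finished call are illustrative):

def _free_request(self, request_id: str) -> bool:
    # Idempotent: pop() means only the first caller actually frees state;
    # later calls for the same request_id become no-ops.
    req_state = self.request_states.pop(request_id, None)
    if req_state is None:
        return False
    # Stats are recorded exactly once per request, whichever path got here
    # first (client disconnect, stop string, or normal completion).
    self._update_stats_from_finished(req_state)
    return True

def free_finished_reqs(self, request_ids: Iterable[str]) -> None:
    for request_id in request_ids:
        self._free_request(request_id)

def free_aborted_reqs(self, request_ids: Iterable[str]) -> None:
    for request_id in request_ids:
        self._free_request(request_id)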

@njhill (Member) commented Mar 27, 2025

Thanks @hidva. Just to be clear, I think this PR would be good to merge in its current form, but we should consider a follow-on to address the other things I mentioned.

the engine return finished notifications for all requests,

However, there are indeed some scenarios where only the frontend can notify the engine to stop outputting, such as the presence of a stop string or a client disconnect. If we let the engine return finished notifications for all requests, how would the engine become aware of external conditions such as client disconnection?

The front-end would still initiate the aborts in the same way, i.e. for client disconnection and stop strings. It's just that the engine would now be guaranteed to subsequently return a final RequestOutput for these with aborted finish reason (this will require a change in the engine of course).

_update_stats_from_finished never gets called for them.

Yes, we should add a call to _update_stats_from_finished within handle_abort_reqs, and at the same time, ensure that _update_stats_from_finished is idempotent. This way, requests that are aborted due to client disconnection can also be captured properly in the metrics.

In other words, after introducing the concepts of aborted requests and finished requests, we also introduced two interfaces: finish_request() (renamed to free_finished_reqs) and handle_abort_reqs() (renamed to free_aborted_reqs). All finished requests must ultimately call free_finished_reqs() to complete resource cleanup, and likewise all aborted requests must call free_aborted_reqs(). All resource cleanup should be idempotent. See the commit "Unified the resource cleanup for aborted and finished requests".

Regardless of the idempotence, I think it would be nice if we always did the cleanup when receiving the final response for a given request, irrespective of how it was terminated.
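On the engine side, the proposed follow-on would mean that an ABORT coming from the frontend is acknowledged with a terminating output instead of going silent. A simplified sketch (the EngineCoreOutput/FinishReason usage and the outputs_to_send queue here are assumptions, not the current EngineCore code):

def abort_requests(self, request_ids: list[str]) -> None:
    for request_id in request_ids:
        request = self.requests.pop(request_id, None)
        if request is None:
            continue  # unknown or already finished; nothing to do
        self.kv_cache_manager.free(request)  # release KV-cache blocks as today
        # New behavior: emit a final, empty output with an "aborted" finish
        # reason so the frontend's output processor sees every request finish.
        self.outputs_to_send.append(
            EngineCoreOutput(request_id=request_id,
                             new_token_ids=[],
                             finish_reason=FinishReason.ABORT))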

@mergify mergify bot removed the tpu Related to Google TPUs label Mar 28, 2025
@hidva (Contributor Author) commented Apr 1, 2025

@njhill Is there anything else that needs to be done for this PR? Also, I'm not sure why the two tests are failing.

@njhill (Member) commented Apr 2, 2025

@hidva it seems that the test is hanging. Could you try merging in the latest main again? It's possible that it's a side-effect of the changes.

@mergify mergify bot added tpu Related to Google TPUs and removed tpu Related to Google TPUs labels Apr 9, 2025
hidva added 4 commits April 15, 2025 15:54
…IDs might be sent to EngineCore simultaneously

Signed-off-by: 盏一 <zhanyi.ww@alibaba-inc.com>
Signed-off-by: 盏一 <zhanyi.ww@alibaba-inc.com>
Signed-off-by: 盏一 <zhanyi.ww@alibaba-inc.com>
Signed-off-by: 盏一 <zhanyi.ww@alibaba-inc.com>
@hidva (Contributor Author) commented Apr 15, 2025

@njhill Could you help me rerun the Entrypoints test? It seems like a fluke, and I don't have the necessary permissions.
