Fix Whisper crash caused by invalid max_num_batched_tokens config #17853

Merged
merged 4 commits on May 9, 2025
14 changes: 14 additions & 0 deletions vllm/config.py
@@ -2050,6 +2050,13 @@ def __post_init__(self) -> None:
_MULTIMODAL_MODEL_MAX_NUM_BATCHED_TOKENS,
)

# When using default settings,
# ensure max_num_batched_tokens does not exceed the model limit.
# Some models (e.g., Whisper) have embeddings tied to the max length.
self.max_num_batched_tokens = min(
    self.max_num_seqs * self.max_model_len,
    self.max_num_batched_tokens)

Comment on lines +2054 to +2059
Contributor:

I feel like we should only warn the user rather than silently set max_num_batched_tokens.
Also, checking the limit below right after it was upper-bounded here seems wasteful, doesn't it?

@inkcherry (Contributor Author), May 9, 2025:

Thanks for the review!
The crash occurs during the memory profiling stage: the model performs a profiling run with max_num_batched_tokens / max_num_seqs tokens per sequence, but that length may exceed the embedding position limit, see https://github.com/vllm-project/vllm/blob/376786fac1fc50e8d788a39a91fa28d1709ad48b/vllm/model_executor/models/whisper.py#L416C7-L416C59. Therefore, we should ensure that max_num_batched_tokens <= max_num_seqs * max_model_len.

  • For default settings, we take the minimum value to ensure safety. (Note: triggering this clipping typically requires both max_num_seqs and max_model_len to be small, so it does not affect the vast majority of use cases.)

  • For user-defined settings, I’ve replaced the error check with a warning instead (see the sketch after this list).
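
For illustration, here is a minimal standalone sketch of the two behaviors described above. The helper function, its name, and the example numbers are assumptions for demonstration only; the actual change lives in SchedulerConfig.__post_init__ and _verify_args in vllm/config.py.

import logging

logger = logging.getLogger(__name__)

def clamp_max_num_batched_tokens(max_num_batched_tokens: int,
                                 max_num_seqs: int,
                                 max_model_len: int,
                                 user_defined: bool) -> int:
    """Keep max_num_batched_tokens <= max_num_seqs * max_model_len.

    Models such as Whisper tie their position embeddings to max_model_len,
    so profiling with a longer per-sequence length crashes.
    """
    limit = max_num_seqs * max_model_len
    if max_num_batched_tokens <= limit:
        return max_num_batched_tokens
    if user_defined:
        # Respect the explicit user setting but warn, mirroring the
        # check added to _verify_args() below.
        logger.warning(
            "max_num_batched_tokens (%d) exceeds max_num_seqs "
            "* max_model_len (%d). This may lead to unexpected behavior.",
            max_num_batched_tokens, limit)
        return max_num_batched_tokens
    # Default settings: silently clip to the safe upper bound.
    return limit

# Example with Whisper-like limits (numbers are illustrative only):
# max_model_len=448, max_num_seqs=2 -> cap of 896 tokens per batch.
print(clamp_max_num_batched_tokens(8192, 2, 448, user_defined=False))  # 896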

self.max_num_encoder_input_tokens = self.max_num_batched_tokens
self.encoder_cache_size = self.max_num_batched_tokens

@@ -2090,6 +2097,13 @@ def _verify_args(self) -> None:
"be greater than or equal to max_num_seqs "
f"({self.max_num_seqs}).")

if self.max_num_batched_tokens > self.max_num_seqs * self.max_model_len:
    logger.warning(
        "max_num_batched_tokens (%d) exceeds max_num_seqs "
        "* max_model_len (%d). This may lead to unexpected behavior.",
        self.max_num_batched_tokens,
        self.max_num_seqs * self.max_model_len)

if self.num_lookahead_slots < 0:
    raise ValueError(
        "num_lookahead_slots "