Commit 3b5ccf8

fix lint

Signed-off-by: Lu Fang <fanglu@fb.com>
Parent: b31355d

File tree

1 file changed: +2 -2 lines changed

vllm/attention/backends/flash_attn.py

Lines changed: 2 additions & 2 deletions

@@ -624,8 +624,8 @@ def __init__(
                 "FlashAttention does not support block-sparse attention.")
         if use_irope:
             logger.warning(
-                "Using irope in V0 is not supported yet, it will fall back to global attention for long context, which could impact accuracy"
-            )
+                "Using irope in V0 is not supported yet, it will fall back "
+                "to global attention for long context.")
         self.num_heads = num_heads
         self.head_size = head_size
         self.scale = float(scale)
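
The lint fix relies on Python's implicit concatenation of adjacent string literals, which happens at compile time, so the long flagged message can be split across two source lines without changing the logged output. A minimal sketch of the pattern (the logger name here is illustrative, not taken from the diff):

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("flash_attn_example")  # hypothetical logger name

# Adjacent string literals are joined into a single string at compile
# time, so this emits one warning message while keeping each source
# line short enough to satisfy the line-length linter.
logger.warning(
    "Using irope in V0 is not supported yet, it will fall back "
    "to global attention for long context.")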
