Commit 18ea6b0 (1 parent: 451da4b)

remove wrong assertion logic

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

File tree: 1 file changed (+0, −1 lines)

  • vllm/model_executor/layers/fused_moe/layer.py

vllm/model_executor/layers/fused_moe/layer.py (0 additions, 1 deletion)

```diff
@@ -503,7 +503,6 @@ def forward_cuda(
             indices_type=torch.uint32 if self.moe.use_pplx_kernels else None)

         if self.rocm_aiter_moe_enabled:
-            assert not apply_router_weight_on_input
             assert expert_map is None
             return self.rocm_aiter_fused_experts(
                 hidden_states=x,
```

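To illustrate the effect of the change, here is a minimal, self-contained sketch of the dispatch pattern the diff touches. `FusedMoESketch` is a hypothetical stand-in for vLLM's real `FusedMoE` layer (which lives in `vllm/model_executor/layers/fused_moe/layer.py`); the stub kernel and the assumption that the AITER path receives `apply_router_weight_on_input` are illustrative, not taken from the actual codebase.

```python
# Hypothetical sketch of the forward_cuda dispatch after commit 18ea6b0.
# Names mirror the diff; the kernel body is a stub, not vLLM's real code.

class FusedMoESketch:
    def __init__(self, rocm_aiter_moe_enabled: bool):
        self.rocm_aiter_moe_enabled = rocm_aiter_moe_enabled

    def rocm_aiter_fused_experts(self, hidden_states,
                                 apply_router_weight_on_input):
        # Stand-in for the fused ROCm AITER expert kernel.
        return ("aiter", hidden_states, apply_router_weight_on_input)

    def default_fused_experts(self, hidden_states,
                              apply_router_weight_on_input):
        # Stand-in for the non-AITER fallback path.
        return ("default", hidden_states, apply_router_weight_on_input)

    def forward_cuda(self, x, expert_map=None,
                     apply_router_weight_on_input=False):
        if self.rocm_aiter_moe_enabled:
            # After the commit, `assert not apply_router_weight_on_input`
            # is gone: the AITER branch no longer rejects this flag.
            # The remaining guard from the diff is kept:
            assert expert_map is None
            return self.rocm_aiter_fused_experts(
                hidden_states=x,
                # Assumption: the flag is forwarded to the kernel.
                apply_router_weight_on_input=apply_router_weight_on_input)
        return self.default_fused_experts(
            hidden_states=x,
            apply_router_weight_on_input=apply_router_weight_on_input)


# Usage: with the assertion removed, this call no longer raises.
layer = FusedMoESketch(rocm_aiter_moe_enabled=True)
out = layer.forward_cuda([1.0, 2.0], apply_router_weight_on_input=True)
```

Before this commit, the same call would have failed the removed assertion whenever `apply_router_weight_on_input` was true on the AITER path.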