
Conversation

Contributor

@ayushsatyam146 ayushsatyam146 commented Aug 19, 2025

Purpose

Fixes #23073. This PR unifies mamba and attention backend selection logic by removing the separate get_mamba_attn_backend() function and implementing the standard Layer.get_attn_backend() interface for all layer types.

Fixes the issue where mamba models used duplicate backend selection logic instead of the unified approach.

Changes Made:

  • Deleted vllm/v1/attention/backends/mamba_selectors.py to eliminate duplicate code
  • Updated MambaMixer and MambaMixer2 classes to implement get_attn_backend() method
  • Modified gpu_model_runner.py to use the unified get_attn_backends_for_layers() function for all layer types
  • Removed mamba-specific backend selection logic from model runner
  • Updated affected model files to use the new unified interface

This makes mamba models more pluggable and follows the same pattern as regular attention layers:

  • MambaMixer.get_attn_backend() → Mamba1AttentionBackend
  • MambaMixer2.get_attn_backend() → Mamba2AttentionBackend
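
To make the pattern concrete, here is a minimal sketch of the layer-side interface (the placeholder classes below stand in for vLLM's real attention backend hierarchy and mixer layers, which take many more arguments; only the shape of the method is taken from the PR):

```python
# Minimal sketch of the unified interface; these placeholder classes
# stand in for vLLM's real attention backends and mixer layers.
from typing import Type


class AttentionBackend:
    """Stand-in for vLLM's attention backend base class."""


class Mamba2AttentionBackend(AttentionBackend):
    """Stand-in for the Mamba2 backend that MambaMixer2 reports."""


class MambaMixer2:
    def get_attn_backend(self) -> Type[AttentionBackend]:
        # The layer itself reports its backend, so the model runner no
        # longer needs the removed get_mamba_attn_backend() helper.
        return Mamba2AttentionBackend
```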


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI will run, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Aug 19, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively unifies the attention backend selection logic for Mamba and standard attention layers, which is a great improvement for code maintainability and pluggability. The changes in the model runner and layer classes are well-implemented. My main feedback is on the testing side, where I've identified an opportunity to consolidate and improve the new tests for better clarity and correctness.

@ayushsatyam146 ayushsatyam146 force-pushed the mamaba-attantion-backend-fix branch 3 times, most recently from a9d32c9 to f648aab on August 19, 2025 10:28

mergify bot commented Aug 21, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @ayushsatyam146.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Aug 21, 2025
@Josephasafg
Contributor

I think this is a good, cleaner approach overall. Let's let @tdoublep / @heheda12345 give their opinion as well.

@ayushsatyam146 ayushsatyam146 force-pushed the mamaba-attantion-backend-fix branch from 13ce598 to 6c756ad on August 21, 2025 16:15
@mergify mergify bot removed the needs-rebase label Aug 21, 2025
@ayushsatyam146 ayushsatyam146 force-pushed the mamaba-attantion-backend-fix branch from 6c756ad to 956451c on August 21, 2025 16:25
@ayushsatyam146 ayushsatyam146 force-pushed the mamaba-attantion-backend-fix branch from 956451c to e0b056e on August 21, 2025 16:30
@ayushsatyam146
Contributor Author

Hi @heheda12345 @LucasWilkinson @Josephasafg, this PR needed a rebase because ShortConvAttentionBackend was added and my PR needed to accommodate its changes as well. I have rebased it and addressed all concerns, including some new suggestions by @Josephasafg. Please review. Thanks!

@ayushsatyam146
Contributor Author

Hi @LucasWilkinson @heheda12345 @Josephasafg, this is a gentle reminder to take a look at this PR. I have applied all the suggested changes and addressed all the concerns. Please let me know if anything else needs to change.

Collaborator

@heheda12345 heheda12345 left a comment


Thanks! Only a few small comments.

    Use Layer.get_attn_backend() interface for all layer types instead of
    separate mamba-specific backend selection logic.

Signed-off-by: Ayush Satyam <[email protected]>
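
For context, the runner-side half of this change can be pictured as a single grouping pass (a simplified sketch: the real get_attn_backends_for_layers() in gpu_model_runner.py operates on vLLM's layer objects and configuration, which this toy helper only mirrors):

```python
from collections import defaultdict


def get_attn_backends_for_layers(layers: dict) -> dict:
    """Group layer names by the backend class each layer reports.

    Because every layer type, attention and mamba alike, implements
    get_attn_backend(), one code path covers them all.
    """
    backend_to_layers = defaultdict(list)
    for name, layer in layers.items():
        backend_to_layers[layer.get_attn_backend()].append(name)
    return dict(backend_to_layers)
```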
@ayushsatyam146 ayushsatyam146 force-pushed the mamaba-attantion-backend-fix branch from cb65706 to 9901487 on August 25, 2025 04:00
@ayushsatyam146
Contributor Author

Thanks @heheda12345 for reviewing again. I addressed your comments, and I also fixed another merge conflict that had come up. PTAL when you get time. Thanks!
cc: @LucasWilkinson @Josephasafg @killershrimp

Collaborator

@heheda12345 heheda12345 left a comment


LGTM! Thanks for the clean-up.

@heheda12345 heheda12345 changed the title refactor: Unify mamba and attention backend selection [Attention] Unify mamba and attention backend selection Aug 25, 2025
@heheda12345 heheda12345 enabled auto-merge (squash) August 25, 2025 04:43
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 25, 2025
Collaborator

@LucasWilkinson LucasWilkinson left a comment


LGTM; thanks for doing this!

@ayushsatyam146
Contributor Author

The CI checks have failed, @heheda12345 @LucasWilkinson, and I am not sure whether it is because of my changes. What should I do in this case? Am I supposed to patch a fix, or is this a flaky CI check?

@heheda12345 heheda12345 merged commit 5c4b6e6 into vllm-project:main Aug 25, 2025
53 checks passed
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
ekagra-ranjan pushed a commit to ekagra-ranjan/vllm that referenced this pull request Sep 4, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025

Labels

ready ONLY add when PR is ready to merge/full CI is needed v1


Development

Successfully merging this pull request may close these issues.

[Refactor]: Get rid of get_mamba_attn_backend

5 participants