
Conversation

DarkLight1337 (Member) commented Apr 7, 2025

This PR reverts the default max_num_seqs from 1024 to 256 for hardware other than H100/H200, since many people have run into OOM when using V1.

Note: This does not solve #15664, which is about OOM occurring even when using the same max_num_seqs as V0.
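For anyone who still hits OOM, the value can also be pinned explicitly instead of relying on the hardware-based default. A minimal sketch, assuming the standard offline `LLM` entrypoint (the model name and the value 256 are placeholders):

```python
from vllm import LLM

# Cap the number of concurrently scheduled sequences explicitly instead of
# relying on the default picked for your hardware.
llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    max_num_seqs=256,
)
```

The equivalent server-side flag is `--max-num-seqs`.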

DarkLight1337 added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Apr 7, 2025
github-actions bot commented Apr 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

mergify bot added the documentation label (Improvements or additions to documentation) on Apr 7, 2025
DarkLight1337 changed the title from "[V1] Revert the default max_num_seqs to 256 for most hardware" to "[V1] Revert the default max_num_seqs to V0 values for most hardware" on Apr 7, 2025
Signed-off-by: DarkLight1337 <[email protected]>
mergify bot added the v1 label on Apr 7, 2025
mgoin (Member) left a comment

This is good for now. I would prefer that we build heuristics on actual hardware specs, like the amount of memory available on each GPU, the number of SMs, or even the ratio of model memory to KV cache memory; we can follow up on this.
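As a rough illustration of the kind of heuristic being suggested here, the sketch below scales the default with total GPU memory. The function, tiers, and thresholds are hypothetical and are not part of vLLM:

```python
import torch

def suggest_max_num_seqs(default: int = 256) -> int:
    """Hypothetical heuristic: scale the default batch size with GPU memory.

    The tiers and thresholds below are illustrative only, not tuned vLLM values.
    """
    if not torch.cuda.is_available():
        return default
    props = torch.cuda.get_device_properties(0)
    total_gib = props.total_memory / 1024**3
    if total_gib >= 70:   # H100/H200-class cards
        return 1024
    if total_gib >= 40:   # A100 40GB / L40S-class cards
        return 512
    return default        # smaller cards keep the V0 default
```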

mgoin merged commit 66d433b into vllm-project:main on Apr 7, 2025
43 checks passed
DarkLight1337 deleted the v1-default-max-num-seqs branch on April 8, 2025 at 02:00
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
WoosukKwon (Collaborator) commented

@DarkLight1337 @mgoin Did we compare performance before and after this PR? I think we should be more careful when changing performance-related configs like this.

DarkLight1337 (Member, Author) commented

No. But many people reported OOM issues because of the new defaults. We should keep V1 compatible with V0 with minimal changes required, as we originally promised.

lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Wesley-Jzy commented
I also see this problem even on H100 and have to use a lower gpu_memory_utilization to make V1 on 0.8.5 workable. Do you have any ideas?
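A sketch of the workaround being described, assuming the offline `LLM` entrypoint (the model name and the 0.8 value are placeholders; vLLM's default utilization is 0.9):

```python
from vllm import LLM

# Reserve a smaller fraction of GPU memory for weights + KV cache so that
# memory used outside vLLM's budget (e.g. CUDA graph capture) has more headroom.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    gpu_memory_utilization=0.8,
)
```

The equivalent server-side flag is `--gpu-memory-utilization`.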

DarkLight1337 (Member, Author) commented

The CUDA graph in V1 takes up more memory than in V0 because its capture is more comprehensive.
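Since the extra memory comes from CUDA graph capture, one knob worth knowing about (not raised in this thread, so treat it as a hedged suggestion) is disabling capture entirely at the cost of some decode-time performance:

```python
from vllm import LLM

# Skip CUDA graph capture and run in eager mode; trades latency for lower memory use.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    enforce_eager=True,
)
```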
