@heheda12345 (Collaborator) commented May 12, 2025

Should be merged after the worker-side change #17945 and the kv cache manager changes #17999, #18001, #18003.

This PR adds the kv cache manager part of the hybrid memory allocator. Most worker-side changes in this PR are already included in #17945, so please only review the code inside `vllm/v1/core` at this moment. See #16101 and #13296 for the design.

Correctness

model: google/gemma-3-12b-it

**This PR, gsm8k**

```bash
lm_eval --model vllm --tasks gsm8k --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.8840 | ± 0.0088 |
|       |         | strict-match     | 5      | exact_match | 0.8772 | ± 0.0090 |

**main, gsm8k**

```bash
uvx --with vllm --extra-index-url https://wheels.vllm.ai/e60f550b3825cbce2d3c7e882b029e2c1d914d8d lm_eval --model vllm --tasks gsm8k --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.8795 | ± 0.0090 |
|       |         | strict-match     | 5      | exact_match | 0.8726 | ± 0.0092 |

**This PR, mmlu**

```bash
lm_eval --model vllm --tasks mmlu --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Groups            | Version | Filter | n-shot | Metric | Value  | Stderr   |
|-------------------|---------|--------|--------|--------|--------|----------|
| mmlu              | 2       | none   |        | acc    | 0.7147 | ± 0.0036 |
| - humanities      | 2       | none   |        | acc    | 0.6389 | ± 0.0065 |
| - other           | 2       | none   |        | acc    | 0.7673 | ± 0.0073 |
| - social sciences | 2       | none   |        | acc    | 0.8190 | ± 0.0068 |
| - stem            | 2       | none   |        | acc    | 0.6743 | ± 0.0080 |

**main, mmlu**

```bash
uvx --with vllm --extra-index-url https://wheels.vllm.ai/e60f550b3825cbce2d3c7e882b029e2c1d914d8d lm_eval --model vllm --tasks mmlu --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Groups            | Version | Filter | n-shot | Metric | Value  | Stderr   |
|-------------------|---------|--------|--------|--------|--------|----------|
| mmlu              | 2       | none   |        | acc    | 0.7147 | ± 0.0036 |
| - humanities      | 2       | none   |        | acc    | 0.6389 | ± 0.0065 |
| - other           | 2       | none   |        | acc    | 0.7673 | ± 0.0073 |
| - social sciences | 2       | none   |        | acc    | 0.8190 | ± 0.0068 |
| - stem            | 2       | none   |        | acc    | 0.6743 | ± 0.0080 |

**This PR, mmlu_pro**

```bash
lm_eval --model vllm --tasks mmlu_pro --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Groups   | Version | Filter         | n-shot | Metric      | Value  | Stderr   |
|----------|---------|----------------|--------|-------------|--------|----------|
| mmlu_pro | 2       | custom-extract |        | exact_match | 0.4879 | ± 0.0044 |

**main, mmlu_pro**

```bash
uvx --with vllm --extra-index-url https://wheels.vllm.ai/e60f550b3825cbce2d3c7e882b029e2c1d914d8d lm_eval --model vllm --tasks mmlu_pro --model_args pretrained=google/gemma-3-12b-it --batch_size auto
```

| Groups   | Version | Filter         | n-shot | Metric      | Value  | Stderr   |
|----------|---------|----------------|--------|-------------|--------|----------|
| mmlu_pro | 2       | custom-extract |        | exact_match | 0.4875 | ± 0.0044 |

Will add performance benchmark results later.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label May 12, 2025

mergify bot commented May 12, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @heheda12345.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added tpu Related to Google TPUs needs-rebase labels May 12, 2025
@heheda12345 heheda12345 marked this pull request as draft May 12, 2025 14:58
@WoosukKwon (Collaborator)

@heheda12345 Please rebase. It will help me review this PR.

@heheda12345 (Collaborator, Author)

Sure. Rebasing right now.

Signed-off-by: Chen Zhang <[email protected]>
@heheda12345 heheda12345 force-pushed the hybrid_allocator_a branch from 91941eb to ec55021 Compare May 15, 2025 03:05
@mergify mergify bot removed the needs-rebase label May 15, 2025
Signed-off-by: Chen Zhang <[email protected]>
@heheda12345 heheda12345 marked this pull request as ready for review May 15, 2025 06:41
@heheda12345 (Collaborator, Author)

@WoosukKwon I've rebased this PR. It is ready for an initial code review. I'm working on unit tests and benchmarks now.

Signed-off-by: Chen Zhang <[email protected]>
@WoosukKwon (Collaborator) left a comment

@heheda12345 I need some help understanding this PR. I've spent several hours reading it, but didn't get a clear picture. Let's chat offline.

@mergify mergify bot removed the needs-rebase label Jun 4, 2025
@WoosukKwon WoosukKwon added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 4, 2025
Signed-off-by: Chen Zhang <[email protected]>
@WoosukKwon (Collaborator) left a comment

@heheda12345 LGTM! Thanks so much for the tremendous effort on this PR. It must’ve been really tough. Really appreciate your hard work and patience!

@WoosukKwon WoosukKwon merged commit f8a1a2d into vllm-project:main Jun 6, 2025
68 checks passed

Comment on lines -31 to -32:

```python
@classmethod
def create_empty(cls) -> "KVCacheBlocks":
```
@tlrmchlsmth (Member) commented Jun 6, 2025

Is there a reason why this method was removed?

It's used here, so this change broke main:

```python
c.update_state_after_alloc(request,
                           KVCacheBlocks.create_empty(), 0)
```

Collaborator

Yeah, I am also looking into the same thing.

Collaborator

It's replaced by `kv_cache_manager.create_empty_block_list()`, because the `KVCacheBlocks` class does not know the number of kv cache groups.
Not sure why this was not detected in CI 🤔

Member

Looks like CI runs only on the branch, and the branch was rebased on main before the multi-connector change that causes the problem was merged, even though there were no "conflicts".

I can fix this in multi-connector.

Collaborator

@njhill Thank you!

@njhill (Member) commented Jun 6, 2025

Fixed by #19291

Comment on lines +274 to +276:

```python
assert self.other_block_size % self.full_attention_block_size == 0, (
    "KVCacheCoordinator assumes the block_size of full attention "
    "layers is divisible by other layers now.")
```
@ekagra-ranjan (Contributor) commented Jun 10, 2025

@heheda12345 According to the assert message, shouldn't it be `self.full_attention_block_size % self.other_block_size == 0`? Or should the message be updated?

heheda12345 (Collaborator, Author)

Thanks for catching this problem. The message should be updated.

joerunde pushed a commit to vllm-project/vllm-spyre that referenced this pull request Jun 13, 2025
vLLM v0.9.1 contains a bug that causes vllm-spyre to hang on boot-up.

The bug is that `num_gpu_blocks_overrides` is not respected. It was introduced
in vllm-project/vllm#17996 and fixed in
vllm-project/vllm#19503.

---------

Signed-off-by: Travis Johnson <[email protected]>
leoli1208 pushed a commit to leoli1208/vllm that referenced this pull request Jul 22, 2025

Labels: documentation, ready, tpu, v1

9 participants