[Feature] Support batch prefill for POD Attention #1231
Conversation
Summary of Changes
Hello @Edenzzzz, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces significant architectural changes to FlashInfer's POD Attention, moving towards a more unified and efficient handling of prefill and decode operations. By integrating the planning and execution of these phases, it aims to streamline the attention mechanism, particularly for scenarios involving both prefill and subsequent decoding steps. The changes involve substantial refactoring of C++ kernels and Python wrappers to support this combined approach.
Highlights
- Unified Prefill and Decode Planning for POD Attention: I've introduced a new `PODWithKVCachePlan` function and `PODPlanInfo` struct in the C++ backend (`csrc/pod.cu`, `include/flashinfer/attention/scheduler.cuh`) to enable unified planning for both prefill and decode operations within POD Attention. This allows a single planning step that considers both phases, using separate prefill (`_p`) and decode (`_d`) parameters for KV cache indptrs and lengths.
- Refactored POD Attention Kernel Dispatch: The main POD Attention kernel (`PODWithKVCacheTensorKernel` in `include/flashinfer/attention/pod.cuh`) has been updated to handle both prefill and decode operations within a single kernel. This includes remapping `blockIdx.x` using `linear_bid` for dynamic scheduling and removing the separate prefill-specific post-kernel merge/sum operations, indicating a more integrated approach.
- API and Parameter Updates: The Python `PODWithPagedKVCacheWrapper.plan` method (`flashinfer/pod.py`) now accepts distinct prefill and decode KV cache parameters (`qo_indptr_p`, `kv_indptr_p`, `kv_indices_p`, `last_page_len_p` for prefill, and `kv_indptr_d`, `kv_indices_d`, `last_page_len_d` for decode); a hedged usage sketch appears after this list. Several internal parameter names, such as `num_packed_qo_len` to `num_to_merge_qo_len` and `indptr` to `merge_indptr`, have been updated for clarity across C++ files.
- Simplified Causal Parameter Handling: The `causal` parameter has been removed from several `plan` function signatures (`csrc/batch_prefill.cu`, `flashinfer/decode.py`, `flashinfer/prefill.py`), suggesting that causality might now be implicitly handled or is no longer a configurable parameter at this level for these specific operations.
- Benchmarking for Persistent Attention: I've updated `benchmarks/bench_mixed_attention.py` to include benchmarking for the new 'Persistent Attention' (likely referring to the unified POD Attention), allowing for performance comparison against existing batched prefill and POD Attention implementations.
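To make the new split parameters concrete, here is a minimal usage sketch of the updated `plan` call. The `_p`/`_d` parameter names come from this PR's summary; the constructor arguments, workspace sizing, head/page dimensions, and the exact argument order are assumptions based on existing FlashInfer wrapper conventions, not a confirmed signature.

```python
import torch
import flashinfer

# Hypothetical sizes for illustration only.
num_qo_heads, num_kv_heads, head_dim, page_size = 32, 8, 128, 16

# 128 MB workspace, mirroring other FlashInfer wrappers (assumed).
workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
wrapper = flashinfer.PODWithPagedKVCacheWrapper(workspace, kv_layout="NHD")

# Prefill-side (_p) indices: 2 requests of 128 query tokens and 8 KV pages each.
qo_indptr_p = torch.tensor([0, 128, 256], dtype=torch.int32, device="cuda")
kv_indptr_p = torch.tensor([0, 8, 16], dtype=torch.int32, device="cuda")
kv_indices_p = torch.arange(16, dtype=torch.int32, device="cuda")
last_page_len_p = torch.tensor([16, 16], dtype=torch.int32, device="cuda")

# Decode-side (_d) indices: 4 single-token requests with 4 KV pages each.
kv_indptr_d = torch.tensor([0, 4, 8, 12, 16], dtype=torch.int32, device="cuda")
kv_indices_d = torch.arange(16, 32, dtype=torch.int32, device="cuda")
last_page_len_d = torch.tensor([1, 5, 9, 13], dtype=torch.int32, device="cuda")

# One plan call now carries both index sets, so the scheduler can lay out
# prefill and decode blocks together (argument order is an assumption).
wrapper.plan(
    qo_indptr_p, kv_indptr_p, kv_indices_p, last_page_len_p,
    kv_indptr_d, kv_indices_d, last_page_len_d,
    num_qo_heads, num_kv_heads, head_dim, page_size,
)
```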
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request introduces support for batch prefill in POD Attention, which is a significant feature. The changes are extensive, touching Python wrappers, CUDA kernels, and configuration files.
My review has identified several critical issues, including a potential memory leak in a CUDA kernel, data corruption bugs in the Python wrapper due to incorrect slicing and copying, and C++ code that is unlikely to compile due to undefined variables and incorrect logic. Given that this is a work in progress, these issues are understandable, but they will need to be addressed for the feature to work correctly. I've provided specific suggestions and detailed explanations for each point.
I mistouched the "ready for review" button; feel free to move it back to draft.
📌 Description
Fixes #1022, with unified indices for prefill and decode and `blockIdx.x` remapping using `linear_bid`. The decode blocks access all indices starting from the middle, e.g. `[num_prefill_blocks + decode_block_idx]`; a host-side sketch of this indexing follows below. The main reason for not splitting the request, q, kv, merge, and output indices between decode and prefill is that doing so would require launching two reduction kernels or concatenating the merge indices.
I still need to upstream some changes gluing the kernel to the unified indices.
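To illustrate the remapping described above, here is a small host-side Python sketch (not the actual CUDA kernel) of how a remapped linear block id could pick prefill versus decode work out of one unified per-block index array; all names and sizes here are hypothetical.

```python
# Host-side illustration of the unified-index scheme: prefill blocks occupy
# [0, num_prefill_blocks), and decode blocks read the same arrays at
# num_prefill_blocks + decode_block_idx.
num_prefill_blocks = 6
num_decode_blocks = 4

# Unified per-block request indices: prefill entries first, then decode.
# The values are made up for illustration.
request_indices = [0, 0, 1, 1, 2, 2,   # prefill blocks
                   0, 1, 2, 3]         # decode blocks

def dispatch(linear_bid: int) -> tuple[str, int]:
    """Mimic the kernel's linear_bid remapping: pick the phase, then look up
    per-block metadata from a single unified array."""
    if linear_bid < num_prefill_blocks:
        return "prefill", request_indices[linear_bid]
    decode_block_idx = linear_bid - num_prefill_blocks
    return "decode", request_indices[num_prefill_blocks + decode_block_idx]

for bid in range(num_prefill_blocks + num_decode_blocks):
    phase, req = dispatch(bid)
    print(f"block {bid}: {phase} work for request {req}")
```

Because both phases walk one array, a single post-kernel reduction can cover all blocks, which matches the motivation stated above for not splitting the indices.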
TODOs
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- Tests have been added or updated as needed.
- All tests are passing (`unittest`, etc.).

Reviewer Notes