feat: Restore convenience FLASHINFER_ENABLE_AOT option #1235

Status: Open — wants to merge 1 commit into main
Conversation

@mgorny mgorny commented Jul 8, 2025

📌 Description

Restore the minimal support for the FLASHINFER_ENABLE_AOT environment variable that was removed as part of the refactoring in #1075. This option was quite useful for downstreams like Red Hat, since it allowed convenient integration of AOT mode into regular PEP 517 workflows.

This change aims to be absolutely minimal -- it does not change anything for the default workflow; it merely re-adds FLASHINFER_ENABLE_AOT, which automatically invokes flashinfer.aot.main(). This makes it possible to perform an AOT build in a single step, such as:

```
TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a" FLASHINFER_ENABLE_AOT=1 python -m build -w
```

or:

```
TORCH_CUDA_ARCH_LIST="7.5 8.0 8.9 9.0a" FLASHINFER_ENABLE_AOT=1 pip install .
```
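The behavior described above boils down to a single environment check in the build backend. A minimal sketch of that logic (the helper name and structure here are assumptions for illustration, not the actual custom_backend.py code):

```python
import os


def aot_enabled(environ=None):
    # Hypothetical helper mirroring the PR's check: AOT mode is enabled
    # only when FLASHINFER_ENABLE_AOT is set to exactly "1".
    env = os.environ if environ is None else environ
    return env.get("FLASHINFER_ENABLE_AOT", "0") == "1"


def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    # Sketch of the wheel hook: compile AOT kernels first when enabled,
    # then fall through to the normal build. The flashinfer.aot import is
    # only meaningful inside the FlashInfer source tree.
    if aot_enabled():
        import flashinfer.aot
        flashinfer.aot.main([])
    ...  # delegate to the regular build backend
```

Because the check defaults to "0", unset or unrelated values leave the default workflow untouched.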

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

@gemini-code-assist (bot) left a comment:
Summary of Changes

Hello @mgorny, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request reintroduces the FLASHINFER_ENABLE_AOT environment variable, providing a convenient mechanism to trigger Ahead-of-Time (AOT) compilation for flashinfer within standard Python packaging workflows (PEP 517). This restores functionality previously removed, making AOT builds more accessible for downstream users without altering the default build process.

Highlights

  • AOT Build Convenience: Reinstates the FLASHINFER_ENABLE_AOT environment variable, allowing users to easily enable Ahead-of-Time (AOT) compilation during standard Python package builds (e.g., pip install or python -m build). This restores a feature previously removed in #1075, making AOT integration more convenient for downstream users.
  • Integration with Build Process: Modifies custom_backend.py to detect the FLASHINFER_ENABLE_AOT variable within the build_wheel function. If the variable is set to '1', it automatically invokes the flashinfer.aot.main() function, triggering the AOT compilation process during wheel creation.
  • AOT Main Function Adaptability: Updates the flashinfer.aot.main function to accept command-line arguments programmatically. This change enables its direct invocation from the build backend (custom_backend.py) without relying on sys.argv directly, making the AOT entry point more flexible.
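The third highlight — making main() callable programmatically — follows a common argparse pattern: an optional argument list that falls back to sys.argv when omitted. A generic sketch (the --out-dir option is invented for illustration and is not FlashInfer's actual flag):

```python
import argparse


def main(args=None):
    # Accept an optional argument list so a build backend can call this
    # entry point programmatically, e.g. main(["--out-dir", "dist"]),
    # while plain CLI invocation still reads sys.argv.
    parser = argparse.ArgumentParser()
    parser.add_argument("--out-dir", default="build")
    # parse_args(None) falls back to sys.argv[1:] automatically.
    parsed = parser.parse_args(args)
    return parsed.out_dir
```

This keeps the command-line behavior unchanged while giving in-process callers full control over the arguments.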

@gemini-code-assist (bot) left a comment:
Code Review

This pull request restores the FLASHINFER_ENABLE_AOT environment variable, providing a convenient way to trigger Ahead-of-Time (AOT) compilation. I suggested accepting more truthy values for the environment variable and ensuring the main function in flashinfer/aot.py maintains its original CLI behavior.

flashinfer/aot.py:

```
@@ -426,7 +427,7 @@ def main():
        type=parse_bool,
        help="Add kernels for Gemma Model (head_dim=256, use_sliding_window, use_logits_soft_cap)",
    )
```

Review comment (severity: high): Pass args to parse_args only when args is not None, to maintain the original CLI behavior when the function is called directly from the command line:

```
    args = parser.parse_args() if args is None else parser.parse_args(args)
```

custom_backend.py:

```
@@ -109,5 +110,10 @@ def build_sdist(sdist_directory, config_settings=None):


def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    if os.environ.get("FLASHINFER_ENABLE_AOT", "0") == "1":
```

Review comment (severity: medium): Consider accepting more truthy values for the FLASHINFER_ENABLE_AOT environment variable to improve the user experience. The current implementation only accepts "1", which might be less intuitive for users accustomed to boolean flags with values like true or yes.

Suggested change:

```
- if os.environ.get("FLASHINFER_ENABLE_AOT", "0") == "1":
+ if os.environ.get("FLASHINFER_ENABLE_AOT", "0").lower() in ("1", "true", "yes", "on", "t", "y"):
```
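The reviewer's suggestion amounts to a small truthiness check over the raw string value. A hedged sketch (the helper name env_truthy is an assumption, not code from the PR):

```python
def env_truthy(value):
    # Hypothetical helper implementing the reviewer's suggestion: accept a
    # broader set of truthy spellings, case-insensitively, and treat an
    # unset variable (None) as false.
    return (value or "").strip().lower() in ("1", "true", "yes", "on", "t", "y")
```

The trade-off versus the PR's exact-"1" check is looser matching for users against a slightly fuzzier contract for the build backend.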

@yzh119 (Collaborator) commented on Jul 8, 2025

Hi @mgorny, sure, this is something we can do.
Our current goal is to drop the separate AOT wheel entirely and have a unified package for JIT/AOT mode:

```
pip install flashinfer-python[all]
```

or:

```
pip install flashinfer-python
flashinfer --download all  # or flashinfer --compile all
```

to get all modules. WDYT?

@mgorny (Contributor, Author) commented on Jul 8, 2025

Well, we're using source distributions to build our own AOT wheel, and that's basically what we need to work. Our goal is specifically not to require our customers to wait for kernels to compile and to avoid persistent cache in Kubernetes / OpenShift. So anything that lets us get AOT mode from sdist readily works.

> pip install flashinfer-python[all]

I presume this means having a second package that installs the precompiled kernels then? I think that's going to work for us, provided that package also has a sdist. I'll ask to make sure.
