Conversation


@zhewenl zhewenl commented Jul 19, 2025

Summary:
Add bc-linter to vLLM CI. In this PR we are piloting it only on vllm/v1/core/sched/output.py.
The config is defined in .bc-linter.yml; https://github.com/pytorch/test-infra/blob/main/tools/stronghold/docs/bc_linter_config.md has more examples.

Class support and config support were added in pytorch/test-infra#6953 and pytorch/test-infra#7016.

Purpose

bc-linter (https://github.com/pytorch/test-infra/wiki/BC-Linter) is a tool in PyTorch test infra that checks backward compatibility of public PyTorch functions used externally. This PR enables the check in vLLM, where .bc-linter.yml together with @bc_linter_include / @bc_linter_exclude annotations can be used to protect BC on functions.
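For context, a minimal `.bc-linter.yml` for this pilot might look like the sketch below. The field names are assumptions drawn from the bc_linter_config.md examples linked above, not the exact config landed in this PR:

```yaml
# Hypothetical sketch; field names follow the bc_linter_config.md examples.
version: 1
paths:
  include:
    - "vllm/v1/core/sched/output.py"   # the file piloted in this PR
annotations:
  include: "@bc_linter_include"        # opt a function/class in to BC checking
  exclude: "@bc_linter_exclude"        # opt it out
```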

Test Plan

Test Plan:
CI

Test Result

https://github.com/vllm-project/vllm/pull/23855/files#diff-cafd89ce8a698a56acb24ada62831cbc7a980782f78a52d1742ba238031f296c
[Screenshot: bc-linter CI result, 2025-08-28]

Contributor

Note

Gemini is unable to generate a review for this pull request due to the file types involved not being currently supported.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the ci/build label Jul 19, 2025
@zhewenl zhewenl self-assigned this Jul 19, 2025
@@ -0,0 +1,23 @@
name: BC Lint

on:
Collaborator

Is it possible to turn this on only for specific folders?

I think reminding folks of BC-breaking changes in core/ and attention/ is generally helpful, while it might feel a bit spammy in other folders.

Comment on lines 12 to 13
- vllm/attetion/**
- vllm/core/**
Collaborator

@yeqcharlotte yeqcharlotte Jul 21, 2025

v0 is not needed

Collaborator

@yeqcharlotte yeqcharlotte left a comment

LGTM! Can we also update the lint message so folks know how to suppress the check?

cc: @huydhn @seemethere @simon-mo @WoosukKwon to also take a look!

# branches-ignore:
# - <branches to ignore bc-linter>
paths:
- vllm/attetion/**
Collaborator

Does this mean we have all the files covered here to ensure the BC guard?

Collaborator Author

Yep, it will check all the files under this path.

Collaborator

@houseroad houseroad left a comment

It's not clear to me how the BC checking works. Do we need to tag some functions as BC-guaranteed?

@zhewenl
Collaborator Author

zhewenl commented Jul 21, 2025

Discussed offline, will check if we can use some annotations to restrict linter coverage for a selection of tests

It's not clear to me, how the BC checking works? Do we need to tag some function to be BC guaranteed?

@houseroad here is documentation of bc-linter: https://github.com/pytorch/test-infra/wiki/BC-Linter

The linter is designed to identify and report issues such as function deletion, parameter removal, renaming, or reordering, changes in parameter requiredness, and removal of support for variadic parameters.

like some warnings in https://github.com/vllm-project/vllm/pull/21235/files#diff-bafd6bd048c5ca7bad726e6f509fe4409f4033342e2b5b0c3016156e2f708b12
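To make the violation categories above concrete, here is a small illustration (not the linter's actual code) of what "parameter removal" means in practice, using `inspect.signature` to diff two hypothetical versions of a public function:

```python
import inspect

# Two hypothetical versions of a public API function: the "after" version
# renames `max_tokens` and drops `stream` -- both BC breaks of the kind
# bc-linter reports.
def generate_before(prompt, max_tokens=16, stream=False):
    ...

def generate_after(prompt, limit=16):
    ...

def removed_params(old, new):
    """Return parameter names present in `old` but missing from `new`."""
    old_names = set(inspect.signature(old).parameters)
    new_names = set(inspect.signature(new).parameters)
    return sorted(old_names - new_names)

print(removed_params(generate_before, generate_after))  # → ['max_tokens', 'stream']
```

The real linter does this comparison across the PR's base and head commits rather than on live objects.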

repo: ${{ github.event.pull_request.head.repo.full_name }}
base_sha: ${{ github.event.pull_request.base.sha }}
head_sha: ${{ github.event.pull_request.head.sha }}
suppression: ${{ contains(github.event.pull_request.labels.*.name, 'suppress-bc-linter') }}
Contributor

Just a note that using labels like suppress-bc-linter is a common self-serve mechanism in PyTorch PRs, but we would need a maintainer to apply it here, like adding the ready label, which is ok.

- vllm/v1/core/**

jobs:
bc_lint:
Contributor

@huydhn huydhn Jul 23, 2025

You want this line https://github.com/pytorch/pytorch/blob/main/.github/workflows/lint-bc.yml#L19C5-L19C45 to run this job only on PRs submitted to vLLM.

Suggested change
```diff
 bc_lint:
+    if: github.repository_owner == 'vllm-project'
```

Collaborator

in fact we need this in all of our actions 😆

base_sha: ${{ github.event.pull_request.base.sha }}
head_sha: ${{ github.event.pull_request.head.sha }}
suppression: ${{ contains(github.event.pull_request.labels.*.name, 'suppress-bc-linter') }}
docs_link: 'https://github.com/pytorch/test-infra/wiki/BC-Linter'
Contributor

Also the concurrency rule is important https://github.com/pytorch/pytorch/blob/main/.github/workflows/lint-bc.yml#L31-L33 to avoid multiple bc linter jobs running at the same time on the same PR

Suggested change
```diff
 docs_link: 'https://github.com/pytorch/test-infra/wiki/BC-Linter'
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
+  cancel-in-progress: true
```

@huydhn
Contributor

huydhn commented Jul 23, 2025

Have you been able to run this https://github.com/pytorch/test-infra/blob/main/.github/actions/bc-lint/action.yml#L62-L68 locally?

At a high level, the BC linter uses https://docs.python.org/3/library/ast.html to parse the Python changes in the PR and checks for these violations: https://github.com/pytorch/test-infra/blob/main/tools/stronghold/src/api/violations.py. It has some PyTorch-specific logic that might not be applicable to vLLM (https://github.com/pytorch/test-infra/blob/main/tools/stronghold/src/api/compatibility.py#L20-L43), but we could easily update that if needed. For example, it ignores anything whose name starts with an underscore, because it's a PyTorch convention for non-public APIs to start with an underscore (cc @houseroad fyi)
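A minimal sketch of that `ast`-based approach: parse a source snapshot and collect each public function's parameter names, which is the raw material for comparing base vs. head signatures. This is an illustration only, not the stronghold implementation; the source string and function names are made up:

```python
import ast

# Hypothetical source snapshot of one file at a given commit.
SRC = """
def _private_helper(x): ...
def schedule(self, budget, *, timeout=None): ...
"""

def public_signatures(source: str) -> dict:
    """Map each public function name to its ordered parameter names."""
    sigs = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if node.name.startswith("_"):
                continue  # mirrors the underscore-prefix convention mentioned above
            a = node.args
            sigs[node.name] = [p.arg for p in a.posonlyargs + a.args + a.kwonlyargs]
    return sigs

print(public_signatures(SRC))  # → {'schedule': ['self', 'budget', 'timeout']}
```

Running this on the base and head versions of a file and diffing the two dicts gives deletions, renames, and reorderings of the kind the linter warns about.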

@mergify mergify bot removed the needs-rebase label Sep 3, 2025
paths:
# We temporarily disable globally, and will only enable with `annotations.include`
# include:
# - "vllm/v1/attetion/*.py"
Collaborator

Typo here?

Collaborator Author

This is intentional: we want to disable it globally and pilot on a few functions/dataclasses to bootstrap.

Collaborator

Can we move this out of the root dir and into, say, .github?

Signed-off-by: zhewenli <[email protected]>
@ProExpertProg
Collaborator

I think config & engine args seem like good candidates for this

@@ -0,0 +1,24 @@
# doc: https://github.com/pytorch/test-infra/blob/main/tools/stronghold/docs/bc_linter_config.md


Flagging that bc-linter currently looks for its config only in the root directory; we might need to add support for alternative directories before merging this PR.

base_sha: ${{ github.event.pull_request.base.sha }}
head_sha: ${{ github.event.pull_request.head.sha }}
suppression: ${{ contains(github.event.pull_request.labels.*.name, 'suppress-bc-linter') }}
docs_link: 'https://github.com/pytorch/test-infra/wiki/BC-Linter'


Maybe create a vllm-specific doc and point this to it?

@DarkLight1337
Member

After the initial PR we can extend this to LLM class and also the online serving endpoints

izaitsevfb pushed a commit to pytorch/test-infra that referenced this pull request Sep 11, 2025
### Summary
This PR adds an option to specify BC linter config path, which will be
used in vllm-project/vllm#21234

### Test plan
Locally test against PR: vllm-project/vllm#24614

Before:
```
../test-infra/tools/stronghold/bin/check-api-compatibility --base-commit=b5e383cd8b62975dec605bed05e22d273c296c7a --head-commit=38ce6fa81660f0be6e0e61672fdc499bcd7fabc4

::group::fetch github.event.pull_request.base.sha
From ssh://github.com/zhewenl/vllm
 * branch                b5e383cd8b62975dec605bed05e22d273c296c7a -> FETCH_HEAD
::endgroup::
fatal: path 'vllm/_bc_linter.py' exists on disk, but not in 'b5e383cd8b62975dec605bed05e22d273c296c7a'
::warning file=vllm/v1/core/sched/output.py,line=99::Function CachedRequestData: num_computed_tokens changed from list[int] to list[float]
::warning file=vllm/v1/core/sched/scheduler.py,line=1::Function Scheduler.get_grammar_bitmask: function deleted
::warning file=vllm/v1/core/sched/scheduler.py,line=1::Function Scheduler.update_draft_token_ids: function deleted
```

After:
```
../test-infra/tools/stronghold/bin/check-api-compatibility --base-commit=b5e383cd8b62975dec605bed05e22d273c296c7a --head-commit=38ce6fa81660f0be6e0e61672fdc499bcd7fabc4 --config-dir=.github

::group::fetch github.event.pull_request.base.sha
From ssh://github.com/zhewenl/vllm
 * branch                b5e383cd8b62975dec605bed05e22d273c296c7a -> FETCH_HEAD
::endgroup::
BC-linter: Using .bc-linter.yml (parsed successfully)
fatal: path 'vllm/_bc_linter.py' exists on disk, but not in 'b5e383cd8b62975dec605bed05e22d273c296c7a'
::warning file=vllm/v1/core/sched/output.py,line=99::Function CachedRequestData: num_computed_tokens changed from list[int] to list[float]
> ../test-infra/tools/stronghold/bin/check-api-compatibility --base-commit=b5e383cd8b62975dec605bed05e22d273c296c7a --head-commit=38ce6fa81660f0be6e0e61672fdc499bcd7fabc4
```

Note the difference in `scheduler.py`: before specifying the local config, the bc-linter YAML was not respected, so the default rules applied and ALL Python files were checked; after specifying the YAML, the warnings are restricted to ONLY the functions with annotations.
@simon-mo simon-mo merged commit 7d46519 into vllm-project:main Sep 11, 2025
10 of 13 checks passed
@njhill
Member

njhill commented Sep 12, 2025

@zhewenl this is failing with a strange error on my PR. I'm not sure but it appears to be an error in the tool rather than a failed code check: https://github.com/vllm-project/vllm/actions/runs/17661069007/job/50194417224?pr=24219

```
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/__main__.py", line 3, in <module>
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/checker.py", line 73, in run
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/compatibility.py", line 49, in check_range
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/compatibility.py", line 67, in check
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 19, in extract
    the trees simpler.  The main intention of the helper functions and this
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 90, in _function_def_to_parameters
    return tuple(map(_convert, node.elts))
     ^^^^^^^^^^^
AttributeError: 'FunctionDef' object has no attribute 'lineno'
```

Just when we had got the rest of the CI green 😅

@zhewenl
Collaborator Author

zhewenl commented Sep 12, 2025

@zhewenl this is failing with a strange error on my PR. I'm not sure but it appears to be an error in the tool rather than a failed code check: https://github.com/vllm-project/vllm/actions/runs/17661069007/job/50194417224?pr=24219

```
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/__main__.py", line 3, in <module>
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/checker.py", line 73, in run
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/compatibility.py", line 49, in check_range
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/compatibility.py", line 67, in check
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 19, in extract
    the trees simpler.  The main intention of the helper functions and this
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 90, in _function_def_to_parameters
    return tuple(map(_convert, node.elts))
     ^^^^^^^^^^^
AttributeError: 'FunctionDef' object has no attribute 'lineno'
```

Just when we had got the rest of the CI green 😅

@njhill taking a look; meanwhile, you can add #suppress-bc-linter to the commit message to suppress it: https://github.com/pytorch/test-infra/wiki/BC-Linter#suppression
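As a rough illustration of that commit-message suppression path (the `#suppress-bc-linter` token comes from the linked wiki; the check below is a sketch, not the action's actual logic):

```shell
# Sketch: how a CI step might detect the suppression token in the head
# commit message. The message text here is made up for illustration.
commit_msg="fix scheduler signature #suppress-bc-linter"
if printf '%s' "$commit_msg" | grep -q '#suppress-bc-linter'; then
  echo "bc-linter suppressed"
else
  echo "bc-linter active"
fi
```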

@chaunceyjiang
Collaborator

https://github.com/vllm-project/vllm/actions/runs/17667773605/job/50212589393?pr=24317

```
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 19, in extract
    the trees simpler.  The main intention of the helper functions and this
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/vllm/vllm/_repo/../_test-infra/tools/stronghold/bin/check-api-compatibility/api/ast.py", line 90, in _function_def_to_parameters
    return tuple(map(_convert, node.elts))
     ^^^^^^^^^^^
AttributeError: 'FunctionDef' object has no attribute 'lineno'
fetch github.event.pull_request.base.sha
```

@zhewenl I also got the same error. Could this be a widespread issue?

skyloevil pushed a commit to skyloevil/vllm that referenced this pull request Sep 13, 2025
dsxsteven pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Sep 15, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025

Labels

ci/build

Projects

Status: Done