Merge changes #213

Merged 108 commits on Jul 4, 2025

Commits
8183d0f
Fix typos in strings and comments (#11476)
co63oc May 30, 2025
b975bce
[docs] update torchao doc link (#11634)
sayakpaul May 30, 2025
3a31b29
Use float32 RoPE freqs in Wan with MPS backends (#11643)
hvaara Jun 2, 2025
d4dc4d7
[chore] misc changes in the bnb tests for consistency. (#11355)
sayakpaul Jun 2, 2025
20273e5
[tests] chore: rename lora model-level tests. (#11481)
sayakpaul Jun 2, 2025
9f48394
[docs] Caching methods (#11625)
stevhliu Jun 2, 2025
c934720
[docs] Model cards (#11112)
stevhliu Jun 2, 2025
d04cd95
[CI] Some improvements to Nightly reports summaries (#11166)
DN6 Jun 5, 2025
0142f6f
[chore] bring PipelineQuantizationConfig at the top of the import cha…
sayakpaul Jun 5, 2025
745199a
[examples] flux-control: use num_training_steps_for_scheduler (#11662)
Markus-Pobitzer Jun 5, 2025
0f91f2f
use deterministic to get stable result (#11663)
jiqing-feng Jun 6, 2025
16c955c
[tests] add test for torch.compile + group offloading (#11670)
sayakpaul Jun 6, 2025
73a9d58
Wan VACE (#11582)
a-r-r-o-w Jun 6, 2025
f46abfe
fixed axes_dims_rope init (huggingface#11641) (#11678)
sofinvalery Jun 8, 2025
7c6e9ef
[tests] Fix how compiler mixin classes are used (#11680)
sayakpaul Jun 9, 2025
5b0dab1
Introduce DeprecatedPipelineMixin to simplify pipeline deprecation pr…
DN6 Jun 9, 2025
6c7fad7
Add community class StableDiffusionXL_T5Pipeline (#11626)
ppbrown Jun 9, 2025
b0f7036
Update pipeline_flux_inpaint.py to fix padding_mask_crop returning on…
Meatfucker Jun 10, 2025
b79803f
Allow remote code repo names to contain "." (#11652)
akasharidas Jun 10, 2025
8e88495
[LoRA] support Flux Control LoRA with bnb 8bit. (#11655)
sayakpaul Jun 11, 2025
e27142a
[`Wan`] Fix VAE sampling mode in `WanVideoToVideoPipeline` (#11639)
tolgacangoz Jun 11, 2025
33e636c
enable torchao test cases on XPU and switch to device agnostic APIs f…
yao-matrix Jun 11, 2025
b6f7933
[tests] tests for compilation + quantization (bnb) (#11672)
sayakpaul Jun 11, 2025
9154566
[tests] model-level `device_map` clarifications (#11681)
sayakpaul Jun 11, 2025
f3e0911
Improve Wan docstrings (#11689)
a-r-r-o-w Jun 11, 2025
447ccd0
Set _torch_version to N/A if torch is disabled. (#11645)
rasmi Jun 11, 2025
b272807
Avoid DtoH sync from access of nonzero() item in scheduler (#11696)
jbschlosser Jun 11, 2025
47ef794
Apply Occam's Razor in position embedding calculation (#11562)
tolgacangoz Jun 11, 2025
00b179f
[docs] add compilation bits to the bitsandbytes docs. (#11693)
sayakpaul Jun 12, 2025
648e895
swap out token for style bot. (#11701)
sayakpaul Jun 13, 2025
62cbde8
[docs] mention fp8 benefits on supported hardware. (#11699)
sayakpaul Jun 13, 2025
e52ceae
Support Wan AccVideo lora (#11704)
a-r-r-o-w Jun 13, 2025
368958d
[LoRA] parse metadata from LoRA and save metadata (#11324)
sayakpaul Jun 13, 2025
9f91305
Cosmos Predict2 (#11695)
a-r-r-o-w Jun 13, 2025
8adc600
Chroma Pipeline (#11698)
Ednaordinary Jun 14, 2025
d1db4f8
[LoRA ]fix flux lora loader when return_metadata is true for non-diff…
sayakpaul Jun 16, 2025
f0dba33
[training] show how metadata stuff should be incorporated in training…
sayakpaul Jun 16, 2025
81426b0
Fix misleading comment (#11722)
carlthome Jun 16, 2025
9b834f8
Add Pruna optimization framework documentation (#11688)
davidberenstein1957 Jun 16, 2025
79bd7ec
Support more Wan loras (VACE) (#11726)
a-r-r-o-w Jun 17, 2025
1bc6f3d
[LoRA training] update metadata use for lora alpha + README (#11723)
linoytsaban Jun 17, 2025
5ce4814
⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886% (#11665)
misrasaurabh1 Jun 18, 2025
d72184e
[training] add ds support to lora hidream (#11737)
leisuzz Jun 18, 2025
05e8677
[tests] device_map tests for all models. (#11708)
sayakpaul Jun 18, 2025
62cce30
[chore] change to 2025 licensing for remaining (#11741)
sayakpaul Jun 18, 2025
66394bf
Chroma Follow Up (#11725)
DN6 Jun 18, 2025
48eae6f
[Quantizers] add `is_compileable` property to quantizers. (#11736)
sayakpaul Jun 19, 2025
a4df8db
Update more licenses to 2025 (#11746)
a-r-r-o-w Jun 19, 2025
3fba74e
Add missing HiDream license (#11747)
a-r-r-o-w Jun 19, 2025
7251bb4
Bump urllib3 from 2.2.3 to 2.5.0 in /examples/server (#11748)
dependabot[bot] Jun 19, 2025
fb57c76
[LoRA] refactor lora loading at the model-level (#11719)
sayakpaul Jun 19, 2025
fc51583
[CI] Fix WAN VACE tests (#11757)
DN6 Jun 19, 2025
0c11c8c
[CI] Fix SANA tests (#11756)
DN6 Jun 19, 2025
3287ce2
Fix HiDream pipeline test module (#11754)
DN6 Jun 19, 2025
85a916b
make group offloading work with disk/nvme transfers (#11682)
sayakpaul Jun 19, 2025
195926b
Update Chroma Docs (#11753)
DN6 Jun 19, 2025
3d8d848
fix invalid component handling behaviour in `PipelineQuantizationConf…
sayakpaul Jun 20, 2025
42077e6
Fix failing cpu offload test for LTX Latent Upscale (#11755)
DN6 Jun 20, 2025
5a6e386
[docs] Quantization + torch.compile + offloading (#11703)
stevhliu Jun 20, 2025
6184d8a
[docs] device_map (#11711)
stevhliu Jun 20, 2025
0874dd0
[docs] LoRA scale scheduling (#11727)
stevhliu Jun 20, 2025
7fc53b5
Fix dimensionalities in `apply_rotary_emb` functions' comments (#11717)
tolgacangoz Jun 21, 2025
ee40088
enable deterministic in bnb 4 bit tests (#11738)
jiqing-feng Jun 23, 2025
f20b83a
enable cpu offloading of new pipelines on XPU & use device agnostic e…
yao-matrix Jun 23, 2025
fbddf02
[tests] properly skip tests instead of `return` (#11771)
sayakpaul Jun 23, 2025
cd81349
[CI] Skip ONNX Upscale tests (#11774)
DN6 Jun 23, 2025
798265f
[Wan] Fix mask padding in Wan VACE pipeline. (#11778)
bennyguo Jun 23, 2025
6760300
Add --lora_alpha and metadata handling to train_dreambooth_lora_sana.…
imbr92 Jun 23, 2025
9254271
[docs] minor cleanups in the lora docs. (#11770)
sayakpaul Jun 24, 2025
7bc0a07
[lora] only remove hooks that we add back (#11768)
yiyixuxu Jun 24, 2025
474a248
[tests] Fix HunyuanVideo Framepack device tests (#11789)
a-r-r-o-w Jun 24, 2025
7392c8f
[chore] raise as early as possible in group offloading (#11792)
sayakpaul Jun 24, 2025
5df02fc
[tests] Fix group offloading and layerwise casting test interaction (…
a-r-r-o-w Jun 24, 2025
d3e27e0
guard omnigen processor. (#11799)
sayakpaul Jun 24, 2025
80f27d7
[tests] skip instead of returning. (#11793)
sayakpaul Jun 25, 2025
dd28509
adjust to get CI test cases passed on XPU (#11759)
kaixuanliu Jun 25, 2025
8846635
fix deprecation in lora after 0.34.0 release (#11802)
sayakpaul Jun 25, 2025
10c36e0
[chore] post release v0.34.0 (#11800)
sayakpaul Jun 26, 2025
3649d7b
Follow up for Group Offload to Disk (#11760)
DN6 Jun 26, 2025
d93381c
[rfc][compile] compile method for DiffusionPipeline (#11705)
anijain2305 Jun 26, 2025
a185e1a
[tests] add a test on torch compile for varied resolutions (#11776)
sayakpaul Jun 26, 2025
27bf7fc
adjust tolerance criteria for `test_float16_inference` in unit test (…
kaixuanliu Jun 26, 2025
eea7689
Flux Kontext (#11812)
a-r-r-o-w Jun 26, 2025
00f95b9
Kontext training (#11813)
sayakpaul Jun 26, 2025
d7dd924
Kontext fixes (#11815)
a-r-r-o-w Jun 26, 2025
21543de
remove syncs before denoising in Kontext (#11818)
sayakpaul Jun 27, 2025
e8e44a5
[CI] disable onnx, mps, flax from the CI (#11803)
sayakpaul Jun 27, 2025
cdaf84a
TorchAO compile + offloading tests (#11697)
a-r-r-o-w Jun 27, 2025
76ec3d1
Support dynamically loading/unloading loras with group offloading (#1…
a-r-r-o-w Jun 27, 2025
05e7a85
[lora] fix: lora unloading behvaiour (#11822)
sayakpaul Jun 28, 2025
bc34fa8
[lora]feat: use exclude modules to loraconfig. (#11806)
sayakpaul Jun 30, 2025
3b079ec
ENH: Improve speed of function expanding LoRA scales (#11834)
BenjaminBossan Jun 30, 2025
f064b3b
Remove print statement in SCM Scheduler (#11836)
a-r-r-o-w Jun 30, 2025
87f83d3
[tests] add test for hotswapping + compilation on resolution changes …
sayakpaul Jul 1, 2025
f3e1310
reset deterministic in tearDownClass (#11785)
jiqing-feng Jul 1, 2025
3f3f0c1
[tests] Fix failing float16 cuda tests (#11835)
a-r-r-o-w Jul 1, 2025
a79c3af
[single file] Cosmos (#11801)
a-r-r-o-w Jul 1, 2025
4704586
[docs] fix single_file example. (#11847)
sayakpaul Jul 1, 2025
62e847d
Use real-valued instead of complex tensors in Wan2.1 RoPE (#11649)
mjkvaak-amd Jul 1, 2025
d31b8ce
[docs] Batch generation (#11841)
stevhliu Jul 2, 2025
64a9210
[docs] Deprecated pipelines (#11838)
stevhliu Jul 2, 2025
5ef74fd
fix norm not training in train_control_lora_flux.py (#11832)
Luo-Yihang Jul 2, 2025
0e95aa8
[From Single File] support `from_single_file` method for `WanVACE3DTr…
J4BEZ Jul 2, 2025
6f1d669
[lora] tests for `exclude_modules` with Wan VACE (#11843)
sayakpaul Jul 2, 2025
d6fa329
update: FluxKontextInpaintPipeline support (#11820)
vuongminh1907 Jul 2, 2025
f864a9a
[Flux Kontext] Support Fal Kontext LoRA (#11823)
linoytsaban Jul 2, 2025
8c938fb
[docs] Add a note of `_keep_in_fp32_modules` (#11851)
a-r-r-o-w Jul 2, 2025
e6639fe
[benchmarks] overhaul benchmarks (#11565)
sayakpaul Jul 4, 2025
Files changed
41 changes: 31 additions & 10 deletions .github/workflows/benchmark.yml
@@ -11,17 +11,18 @@ env:
   HF_HOME: /mnt/cache
   OMP_NUM_THREADS: 8
   MKL_NUM_THREADS: 8
+  BASE_PATH: benchmark_outputs

 jobs:
-  torch_pipelines_cuda_benchmark_tests:
+  torch_models_cuda_benchmark_tests:
     env:
       SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_BENCHMARK }}
-    name: Torch Core Pipelines CUDA Benchmarking Tests
+    name: Torch Core Models CUDA Benchmarking Tests
     strategy:
       fail-fast: false
       max-parallel: 1
     runs-on:
-      group: aws-g6-4xlarge-plus
+      group: aws-g6e-4xlarge
     container:
       image: diffusers/diffusers-pytorch-cuda
       options: --shm-size "16gb" --ipc host --gpus 0
@@ -35,27 +36,47 @@ jobs:
           nvidia-smi
       - name: Install dependencies
         run: |
+          apt update
+          apt install -y libpq-dev postgresql-client
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
-          python -m uv pip install pandas peft
-          python -m uv pip uninstall transformers && python -m uv pip install transformers==4.48.0
+          python -m uv pip install -r benchmarks/requirements.txt
       - name: Environment
         run: |
           python utils/print_env.py
       - name: Diffusers Benchmarking
         env:
-          HF_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }}
-          BASE_PATH: benchmark_outputs
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
-          cd benchmarks && mkdir ${BASE_PATH} && python run_all.py && python push_results.py
+          export TOTAL_GPU_MEMORY=$(python -c "import torch; print(torch.cuda.get_device_properties(0).total_memory / (1024**3))")
+          cd benchmarks && python run_all.py
+
+      - name: Push results to the Hub
+        env:
+          HF_TOKEN: ${{ secrets.DIFFUSERS_BOT_TOKEN }}
+        run: |
+          cd benchmarks && python push_results.py
+          mkdir $BASE_PATH && cp *.csv $BASE_PATH

       - name: Test suite reports artifacts
         if: ${{ always() }}
         uses: actions/upload-artifact@v4
         with:
           name: benchmark_test_reports
-          path: benchmarks/benchmark_outputs
+          path: benchmarks/${{ env.BASE_PATH }}

+      # TODO: enable this once the connection problem has been resolved.
+      - name: Update benchmarking results to DB
+        env:
+          PGDATABASE: metrics
+          PGHOST: ${{ secrets.DIFFUSERS_BENCHMARKS_PGHOST }}
+          PGUSER: transformers_benchmarks
+          PGPASSWORD: ${{ secrets.DIFFUSERS_BENCHMARKS_PGPASSWORD }}
+          BRANCH_NAME: ${{ github.head_ref || github.ref_name }}
+        run: |
+          git config --global --add safe.directory /__w/diffusers/diffusers
+          commit_id=$GITHUB_SHA
+          commit_msg=$(git show -s --format=%s "$commit_id" | cut -c1-70)
+          cd benchmarks && python populate_into_db.py "$BRANCH_NAME" "$commit_id" "$commit_msg"

       - name: Report success status
         if: ${{ success() }}
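The new "Update benchmarking results to DB" step passes credentials through libpq's standard environment variables (PGDATABASE, PGHOST, PGUSER, PGPASSWORD), which PostgreSQL clients such as psql and psycopg2 read automatically when no explicit connection string is given. Below is a minimal sketch of what a script invoked like populate_into_db.py above might do; the "benchmarks" table and its columns are assumptions for illustration, not the repository's actual schema.

    import csv
    import glob
    import sys

    import psycopg2  # connection parameters come from the PG* env vars set by the workflow

    # Mirrors the workflow invocation: populate_into_db.py BRANCH_NAME COMMIT_ID COMMIT_MSG
    branch, commit_id, commit_msg = sys.argv[1:4]

    # An empty conninfo string makes libpq fall back to PGDATABASE/PGHOST/PGUSER/PGPASSWORD.
    with psycopg2.connect("") as conn, conn.cursor() as cur:
        # The workflow copies *.csv outputs from run_all.py, so CSV files are assumed here.
        for path in glob.glob("*.csv"):
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # Hypothetical table and columns; the real schema may differ.
                    cur.execute(
                        "INSERT INTO benchmarks (branch, commit_id, commit_msg, scenario, metric, value) "
                        "VALUES (%s, %s, %s, %s, %s, %s)",
                        (branch, commit_id, commit_msg,
                         row.get("scenario"), row.get("metric"), row.get("value")),
                    )

Since the workflow marks this step with a TODO about connection problems, the sketch only illustrates how the environment-variable wiring fits together.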
4 changes: 0 additions & 4 deletions .github/workflows/build_docker_images.yml
@@ -75,10 +75,6 @@ jobs:
           - diffusers-pytorch-cuda
           - diffusers-pytorch-xformers-cuda
           - diffusers-pytorch-minimum-cuda
-          - diffusers-flax-cpu
-          - diffusers-flax-tpu
-          - diffusers-onnxruntime-cpu
-          - diffusers-onnxruntime-cuda
           - diffusers-doc-builder

     steps:
211 changes: 63 additions & 148 deletions .github/workflows/nightly_tests.yml
@@ -13,8 +13,9 @@ env:
   PYTEST_TIMEOUT: 600
   RUN_SLOW: yes
   RUN_NIGHTLY: yes
-  PIPELINE_USAGE_CUTOFF: 5000
+  PIPELINE_USAGE_CUTOFF: 0
   SLACK_API_TOKEN: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
+  CONSOLIDATED_REPORT_PATH: consolidated_test_report.md

 jobs:
   setup_torch_cuda_pipeline_matrix:
@@ -99,11 +100,6 @@ jobs:
         with:
           name: pipeline_${{ matrix.module }}_test_reports
           path: reports
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_nightly_tests_for_other_torch_modules:
     name: Nightly Torch CUDA Tests
@@ -142,7 +138,6 @@
           HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
-          RUN_COMPILE: yes
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
@@ -175,12 +170,6 @@ jobs:
           name: torch_${{ matrix.module }}_cuda_test_reports
           path: reports

-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_torch_compile_tests:
     name: PyTorch Compile CUDA tests
@@ -224,12 +213,6 @@ jobs:
           name: torch_compile_test_reports
           path: reports

-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_big_gpu_torch_tests:
     name: Torch tests on big GPU
     strategy:
@@ -280,12 +263,7 @@ jobs:
         with:
           name: torch_cuda_big_gpu_test_reports
           path: reports
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   torch_minimum_version_cuda_tests:
     name: Torch Minimum Version CUDA Tests
     runs-on:
@@ -342,125 +320,13 @@ jobs:
         with:
           name: torch_minimum_version_cuda_test_reports
           path: reports
-
-  run_flax_tpu_tests:
-    name: Nightly Flax TPU Tests
-    runs-on:
-      group: gcp-ct5lp-hightpu-8t
-    if: github.event_name == 'schedule'
-
-    container:
-      image: diffusers/diffusers-flax-tpu
-      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
-    defaults:
-      run:
-        shell: bash
-    steps:
-      - name: Checkout diffusers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Install dependencies
-        run: |
-          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install -e [quality,test]
-          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          python -m uv pip install pytest-reportlog
-
-      - name: Environment
-        run: python utils/print_env.py
-
-      - name: Run nightly Flax TPU tests
-        env:
-          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
-        run: |
-          python -m pytest -n 0 \
-            -s -v -k "Flax" \
-            --make-reports=tests_flax_tpu \
-            --report-log=tests_flax_tpu.log \
-            tests/
-
-      - name: Failure short reports
-        if: ${{ failure() }}
-        run: |
-          cat reports/tests_flax_tpu_stats.txt
-          cat reports/tests_flax_tpu_failures_short.txt
-
-      - name: Test suite reports artifacts
-        if: ${{ always() }}
-        uses: actions/upload-artifact@v4
-        with:
-          name: flax_tpu_test_reports
-          path: reports
-
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
-
-  run_nightly_onnx_tests:
-    name: Nightly ONNXRuntime CUDA tests on Ubuntu
-    runs-on:
-      group: aws-g4dn-2xlarge
-    container:
-      image: diffusers/diffusers-onnxruntime-cuda
-      options: --gpus 0 --shm-size "16gb" --ipc host
-
-    steps:
-      - name: Checkout diffusers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: NVIDIA-SMI
-        run: nvidia-smi
-
-      - name: Install dependencies
-        run: |
-          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install -e [quality,test]
-          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          python -m uv pip install pytest-reportlog
-      - name: Environment
-        run: python utils/print_env.py
-
-      - name: Run Nightly ONNXRuntime CUDA tests
-        env:
-          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
-        run: |
-          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-            -s -v -k "Onnx" \
-            --make-reports=tests_onnx_cuda \
-            --report-log=tests_onnx_cuda.log \
-            tests/
-
-      - name: Failure short reports
-        if: ${{ failure() }}
-        run: |
-          cat reports/tests_onnx_cuda_stats.txt
-          cat reports/tests_onnx_cuda_failures_short.txt
-
-      - name: Test suite reports artifacts
-        if: ${{ always() }}
-        uses: actions/upload-artifact@v4
-        with:
-          name: tests_onnx_cuda_reports
-          path: reports
-
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
-
   run_nightly_quantization_tests:
     name: Torch quantization nightly tests
     strategy:
       fail-fast: false
       max-parallel: 2
-      matrix:
+      matrix:
         config:
           - backend: "bitsandbytes"
             test_location: "bnb"
@@ -520,12 +386,7 @@ jobs:
         with:
           name: torch_cuda_${{ matrix.config.backend }}_reports
           path: reports
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

   run_nightly_pipeline_level_quantization_tests:
     name: Torch quantization nightly tests
     strategy:
@@ -574,12 +435,66 @@ jobs:
         with:
           name: torch_cuda_pipeline_level_quant_reports
           path: reports
-      - name: Generate Report and Notify Channel
-        if: always()
-        run: |
-          pip install slack_sdk tabulate
-          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
+
+  generate_consolidated_report:
+    name: Generate Consolidated Test Report
+    needs: [
+      run_nightly_tests_for_torch_pipelines,
+      run_nightly_tests_for_other_torch_modules,
+      run_torch_compile_tests,
+      run_big_gpu_torch_tests,
+      run_nightly_quantization_tests,
+      run_nightly_pipeline_level_quantization_tests,
+      # run_nightly_onnx_tests,
+      torch_minimum_version_cuda_tests,
+      # run_flax_tpu_tests
+    ]
+    if: always()
+    runs-on:
+      group: aws-general-8-plus
+    container:
+      image: diffusers/diffusers-pytorch-cpu
+    steps:
+      - name: Checkout diffusers
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 2
+
+      - name: Create reports directory
+        run: mkdir -p combined_reports
+
+      - name: Download all test reports
+        uses: actions/download-artifact@v4
+        with:
+          path: artifacts
+
+      - name: Prepare reports
+        run: |
+          # Move all report files to a single directory for processing
+          find artifacts -name "*.txt" -exec cp {} combined_reports/ \;
+
+      - name: Install dependencies
+        run: |
+          pip install -e .[test]
+          pip install slack_sdk tabulate
+
+      - name: Generate consolidated report
+        run: |
+          python utils/consolidated_test_report.py \
+            --reports_dir combined_reports \
+            --output_file $CONSOLIDATED_REPORT_PATH \
+            --slack_channel_name diffusers-ci-nightly
+
+      - name: Show consolidated report
+        run: |
+          cat $CONSOLIDATED_REPORT_PATH >> $GITHUB_STEP_SUMMARY
+
+      - name: Upload consolidated report
+        uses: actions/upload-artifact@v4
+        with:
+          name: consolidated_test_report
+          path: ${{ env.CONSOLIDATED_REPORT_PATH }}

 # M1 runner currently not well supported
 # TODO: (Dhruv) add these back when we setup better testing for Apple Silicon
 # run_nightly_tests_apple_m1:
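The new consolidation job funnels every suite's report artifacts into combined_reports/ and renders a single markdown summary. Below is a rough sketch of the kind of aggregation a script such as utils/consolidated_test_report.py performs, matching the CLI flags used in the workflow; the parsing logic is illustrative rather than the repository's actual implementation, and Slack posting is omitted.

    import argparse
    from pathlib import Path

    parser = argparse.ArgumentParser()
    parser.add_argument("--reports_dir", required=True)
    parser.add_argument("--output_file", required=True)
    parser.add_argument("--slack_channel_name", default=None)  # Slack posting omitted in this sketch
    args = parser.parse_args()

    sections = ["# Nightly test report", ""]
    # The workflow's failure summaries are named like <suite>_failures_short.txt,
    # as seen in the "Failure short reports" steps above; one section per suite.
    for report in sorted(Path(args.reports_dir).glob("*_failures_short.txt")):
        suite = report.name[: -len("_failures_short.txt")]
        failures = report.read_text().strip()
        sections.append(f"## {suite}: {'failures' if failures else 'all passed'}")
        if failures:
            # Indent raw pytest output so it renders verbatim in the markdown summary.
            sections.extend("    " + line for line in failures.splitlines())
        sections.append("")

    Path(args.output_file).write_text("\n".join(sections))

The workflow then appends the generated markdown to $GITHUB_STEP_SUMMARY and uploads it as an artifact, so one report serves both the Actions UI and, presumably via --slack_channel_name, the nightly Slack channel.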
2 changes: 1 addition & 1 deletion .github/workflows/pr_style_bot.yml
@@ -14,4 +14,4 @@ jobs:
     with:
       python_quality_dependencies: "[quality]"
     secrets:
-      bot_token: ${{ secrets.GITHUB_TOKEN }}
+      bot_token: ${{ secrets.HF_STYLE_BOT_ACTION }}