8357530: C2 SuperWord: Diagnostic flag AutoVectorizationOverrideProfitability #25387
Conversation
👋 Welcome back epeter! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request.

@eme64 This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 48 new commits pushed to the master branch. As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.
@eme64 This pull request can not be integrated into master due to one or more merge conflicts. To resolve these merge conflicts and update this pull request, run the following commands in your personal fork:

git checkout JDK-8357530-SuperWordOverrideProfitability
git fetch https://git.openjdk.org/jdk.git master
git merge FETCH_HEAD
# resolve conflicts and follow the instructions given by git merge
git commit -m "Merge master"
git push
Webrevs
@galderz You may be interested in these results ;)
This looks fine. One suggestion I have for a separate RFE is to use UL (Unified Logging) for such outputs.
Looks good to me otherwise.
Co-authored-by: Tobias Hartmann <[email protected]>
@vnkozlov Thanks for the approval! About your suggestion:

@TobiHartmann I applied all your suggestions :)
@TobiHartmann @vnkozlov Thanks for the reviews!

/integrate
Going to push as commit e8eff4d.
Your commit was automatically rebased without conflicts.
I'm adding a diagnostic flag `AutoVectorizationOverrideProfitability`. The goal is that with it, we can systematically benchmark our Auto Vectorization profitability heuristics. In all cases, we run Auto Vectorization, including packing. The flag then takes effect at the profitability decision:

- `0`: abort vectorization, as if it were not profitable.
- `1`: default, use the profitability heuristics to determine if we should vectorize.
- `2`: always vectorize when possible, even if the profitability heuristics would say that it is not profitable.

In the future, we may change our heuristics. We may for example introduce a cost model (JDK-8340093). But at any rate, we need this flag so that we can override these profitability heuristics, even if just for benchmarking.
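Since it is a diagnostic flag, it has to be unlocked first. A minimal sketch of comparing the three settings on some test program (the class name `Test` is just a placeholder):

$ java -XX:+UnlockDiagnosticVMOptions -XX:AutoVectorizationOverrideProfitability=0 Test
$ java -XX:+UnlockDiagnosticVMOptions -XX:AutoVectorizationOverrideProfitability=1 Test
$ java -XX:+UnlockDiagnosticVMOptions -XX:AutoVectorizationOverrideProfitability=2 Test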
I did not yet go through all of `SuperWord` to check if there may be other decisions that could go under this flag. If we find any later, we can still add them.

Below, I'm showing how the flag helps to benchmark some of the reduction cases we have been working on. And if you want a small test to experiment with, I have one at the end for you.
Note to reviewer: This patch should not make any behavioral difference, i.e. with the default `AutoVectorizationOverrideProfitability=1` the behavior should be the same as before this patch.

Use-Case: investigate Reduction Heuristics
A while back, I wrote a comprehensive benchmark for reductions (#21032). I saw that some cases might be profitable, but we have disabled vectorization because of a heuristic.
This heuristic was added a long time ago. The observation at the time was that simple add and mul reductions were not profitable.

From the comments, it becomes clear that "simple reductions" are not profitable; that is why we check if there are more work vectors than reduction vectors (illustrated in the sketch below). But I'm not sure why 2-element reductions are deemed always not profitable. Maybe it fit the benchmarks at the time, but now that we move reductions out of the loop, this probably does not make sense any more, at least for int/long.
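For illustration (this example is mine, not from the patch), compare a reduction where essentially all vector work is the reduction itself against one with extra per-iteration work:

```java
class ReductionShapes {
    // "Simple" reduction: the only vector work per iteration is the
    // reduction itself (folding a[i] into sum).
    static int addSimple(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    // Dot product: the multiply produces an extra "work vector" per
    // iteration, so there are more work vectors than reduction vectors,
    // which the heuristic counts in favor of vectorizing.
    static int mulDotProduct(int[] a, int[] b) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }
}
```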
But in the meantime, I have added an improvement where we move int/long reductions out of the loop. We can do that because int/long reductions can be reordered, see #13056. We cannot do that with float/double reductions, because there we must keep the strict order of the reductions; otherwise we risk wrong rounding results (see the small demonstration below).
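As a quick, self-contained demonstration (mine, not from the patch) of why reordering a float reduction is unsafe: floating-point addition is not associative, so a different reduction order can produce a differently rounded result.

```java
public class FloatOrder {
    public static void main(String[] args) {
        float a = 1e8f, b = -1e8f, c = 1f;
        // Same three summands, different association:
        System.out.println((a + b) + c); // 1.0
        System.out.println(a + (b + c)); // 0.0: (b + c) rounds back to -1e8f
    }
}
```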
Since then, we have had multiple reports that simple reductions are not vectorized, and I am working on it:
https://bugs.openjdk.org/browse/JDK-8307516
I ran the reduction benchmarks from #21032 (please have a look at it now; the results below are only going to be more complicated!) like this:
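Assuming the reduction micro from #21032 is named `VectorReduction2` (an assumption on my part; check that PR for the actual name), such a run could look like:

$ make test TEST="micro:VectorReduction2" \
      MICRO="VM_OPTIONS=-XX:+UnlockDiagnosticVMOptions -XX:AutoVectorizationOverrideProfitability=2"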
I ran the experiments on my x64 / AVX512 machine, and on an aarch64 / neon machine. For each, I ran with SuperWord disabled (no vectorization), and with SuperWord enabled and `AutoVectorizationOverrideProfitability` set to 1 (default), 0 (abort vectorization), and 2 (force vectorization).

The orange `heuristic` tags show where the heuristic makes a difference - in this case we prevent vectorization even though it would be faster. This is evidence that we need to update the heuristic.

Interestingly, forcing vectorization in the `strict` cases did not lead to any performance drop.

It seems that forced vectorization is only problematic in one case: `longMulSimple` on aarch64. I need to investigate. Generally, we do vectorize (if forced - they are 2-element vectors after all) at least some of the `long` cases (hand-checked `longAddSimple`), but it seems it is just not very fast, no idea why. The problematic `longMulSimple` does also vectorize (if forced only), but it is consistently slow. The confusing part: `longMulDotProduct` should be even slower. But a quick investigation showed that we actually do not vectorize it; the packing algorithm gets confused about which multiplications to pack. I suspect that generally, 2-element multiplication reduction is very slow on neon / aarch64. We will have to be careful about that when we change the heuristic. It is edge cases like these that make me nervous, and they are the reason why I have not changed these heuristics sooner.

I would also have to investigate the impact on a few more platforms, especially on AVX and AVX2.

With x64 and byte/char/short, we never vectorize. Still, enabling SuperWord changes the level of unrolling, and it seems that in some cases SuperWord leads to over-unrolling, hence you see slowdowns in some cases. We should investigate that as well.

For now it is clear: this flag would be helpful for improving performance heuristics.
Example for the Flag
I played around with an example like this:
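A minimal sketch of such a test, assuming a simple int add-reduction (the class and method names are placeholders, not the original example):

```java
public class TestReduction {
    static int intAddSimple(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i]; // simple add-reduction, a SuperWord candidate
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[10_000];
        for (int i = 0; i < a.length; i++) { a[i] = i; }
        long total = 0;
        // Repeat often enough for C2 to compile and auto-vectorize the loop.
        for (int iter = 0; iter < 10_000; iter++) {
            total += intAddSimple(a);
        }
        System.out.println(total);
    }
}
```

Running it with the three flag settings shown earlier makes it easy to see whether it is the profitability heuristic, rather than the packing itself, that blocks vectorization.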
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/25387/head:pull/25387
$ git checkout pull/25387
Update a local copy of the PR:
$ git checkout pull/25387
$ git pull https://git.openjdk.org/jdk.git pull/25387/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 25387
View PR using the GUI difftool:
$ git pr show -t 25387
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/25387.diff
Using Webrev
Link to Webrev Comment