Conversation

Collaborator

@firewave commented Jan 7, 2025

No description provided.

@firewave
Collaborator Author

firewave commented Jan 7, 2025

This essentially eliminates any meaningful performance impact of followAllReferences().

-D__GNUC__ --check-level=exhaustive ../lib/utils.cpp

Clang 19 652,987,030 -> 624,476,510 618,089,977

followAllReferences() calls from isAliasOf() - 350,100 -> 1,581

The example from https://trac.cppcheck.net/ticket/10765#comment:4:

Clang 19 3,056,382,003 -> 2,838,708,731 2,815,165,117

followAllReferences() calls from isAliasOf() - 2,592,565 -> 641

 [&] {
 // Follow references
-auto refs = followAllReferences(tok);
+auto refs = tok->refs();
Collaborator Author

needs to be const auto&.
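
For illustration, the difference being pointed out is roughly this (a sketch; refs() is assumed to return a const reference to the cached container):

const SmallVector<ReferenceToken>& refs = tok->refs(); // binds to the cache, no copy
// auto refs = tok->refs();                            // deduces a value type and copies the vector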

Collaborator Author

Done.

I filed https://trac.cppcheck.net/ticket/13533 about detecting this.

 return Action::Invalid;
 // Follow references
-auto refs = followAllReferences(tok);
+auto refs = tok->refs();
Collaborator Author

This copy is necessary since an additional entry is being appended afterwards. But I think that can be avoided, and I will try to refactor the code to do so.
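
Roughly what happens at this call site (illustrative only; the appended entry and its fields are placeholders):

SmallVector<ReferenceToken> refs = tok->refs();             // deliberate copy of the cached references
refs.push_back(ReferenceToken{tok2, std::move(errorPath)}); // hypothetical extra entry appended locally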

Collaborator Author

I adjusted this, but unfortunately it introduces some redundant code.

    if (!mImpl->mRefs)
        mImpl->mRefs = new SmallVector<ReferenceToken>(followAllReferences(this));
    return *mImpl->mRefs;
}
Contributor

This is not the right way to do this. This is a const method that is modifying the token. Instead followAllReferences should be moved to the SymbolDatabase and there should be a pass that fills this in for all of the tokens.
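
A rough sketch of the kind of pass being suggested (the pass and setRefs() are illustrative, not the actual SymbolDatabase interface):

void SymbolDatabase::setReferenceTokens(TokenList& tokenlist) // hypothetical pass
{
    for (Token* tok = tokenlist.front(); tok; tok = tok->next()) {
        if (!tok->variable()) // only tokens that can refer to something
            continue;
        tok->setRefs(followAllReferences(tok)); // setRefs() is assumed to store the result on the token
    }
}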

Collaborator Author

Yes. I was about to add a comment about this. This violates technical constness, and if we did not allow that (I hope some day I will finish up that change) this would require mutable (which, from my experience, is acceptable for caches inside objects).

I am not sure how easy it would be to implement an earlier pass, since this is not done for all tokens: there are lots of checks performed before we actually end up following references. Those would need to be replicated, I reckon - and that also has a certain visible overhead, and we would then need to run through it twice.

Actually I would also have the ValueFlow behave this way so we might avoid running it for code which is not relevant.
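
For reference, the mutable variant mentioned above would look roughly like this (a sketch, not the code in this PR; the cache would still need to be freed in the destructor):

class Token {
public:
    const SmallVector<ReferenceToken>& refs() const
    {
        if (!mRefs) // lazily filled on first use, even from a const accessor
            mRefs = new SmallVector<ReferenceToken>(followAllReferences(this));
        return *mRefs;
    }
private:
    mutable SmallVector<ReferenceToken>* mRefs{}; // mutable: a cache, not observable state
};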

Collaborator Author

This is not the right way to do this. This is a const method that is modifying the token.

That should be totally fine (by precedent). We modify const Token objects all over the place in the ValueFlow and the symbol database via const_cast. Obviously it would be better if we didn't, but here it is much cleaner and in a single place, and as stated before I think this is acceptable practice.

Actually I would also have the ValueFlow behave this way so we might avoid running it for code which is not relevant.

Please disregard this. It is wishful thinking, as it would not be possible with the way the ValueFlow works. I totally forgot that I already looked into this.

Contributor

The const_cast should be fixed, but we shouldn't add more code that needs to be fixed.

Also, this is called in ValueFlowForward and ValueFlowReverse, so it is already called on almost every token in functionScopes - it really won't help performance for it to be a cache.

Furthermore, in cppcheck we update the tokens through passes rather than using a cache; this makes it easier to debug, and we can provide this information to addons later on. So doing a pass in SymbolDatabase would be consistent with the rest of the code.

Collaborator Author

Furthermore, in cppcheck we update the tokens through passes rather than using a cache; this makes it easier to debug, and we can provide this information to addons later on. So doing a pass in SymbolDatabase would be consistent with the rest of the code.

Will give it a try and check how it impacts performance.

Contributor

Do we know if it causes a perf impact, or how much? It seems we are making the code worse for the sake of premature optimization.

There are other advantages to doing it the correct way too, such as better debugging, and addons can take advantage of this information (this seems like a useful analysis for addons). So if we enable it for addons then we will need to run a pass regardless.

Also you could consider skipping this for functions we are skipping analysis for, if the performance is too bad, but it would be good to see some actual numbers to make this decision.

This comment was marked as duplicate.

Collaborator Author

So the current approach seems like the best approach.

I meant to say "It seems like the currently best approach".

Do we know if it causes a perf impact, or how much? It seems we are making the code worse for the sake of premature optimization.

Various performance numbers are in the PR. It is a massive improvement. It would also help with the runtime of the CI.

Also you could consider skipping this for functions we are skipping analysis for, if the performance is too bad, but it would be good to see some actual numbers to make this decision.

That was an idea regarding the ValueFlow (see https://trac.cppcheck.net/ticket/12528) but that won't work since not all passes are based on function scopes. But that is currently out-of-scope and is something I am looking at within another context hopefully soon.

It might actually not be an issue after all, because with the duplicated calls eliminated it basically no longer has any footprint. The only issue could be that we perform it for more tokens than we actually need, which would introduce new overhead - but it might not be much. Will test that. Although I would prefer not to have that at all, since all the overhead adds up - a lot.

Contributor

I just realized this is called when setting exprids, so it is always called on every token regardless of ValueFlow analysis.

Collaborator Author

I just gave it a small spin and it does not increase the Ir, so that looks feasible.

void setCppcheckAttribute(CppcheckAttributes::Type type, MathLib::bigint value);
bool getCppcheckAttribute(CppcheckAttributes::Type type, MathLib::bigint &value) const;

SmallVector<ReferenceToken>* mRefs{};
Contributor

If this is going to be a pointer, you should use std::unique_ptr or std::shared_ptr.

Collaborator Author

I modeled it after mValues which is also just a raw pointer.

Contributor

You should model it after mAttributeAlignas; that's a much safer choice. I think the raw pointers were used before managed pointers were available.

Collaborator Author

That requires ReferenceToken to be a complete type and astutils.h to be included.
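
For context, the completeness trade-off looks roughly like this (a sketch of the headers, not the actual code):

struct ReferenceToken; // a forward declaration is enough for a raw pointer member

struct TokenImpl {
    SmallVector<ReferenceToken>* mRefs{}; // OK: nothing is instantiated here

    // std::unique_ptr<SmallVector<ReferenceToken>> mRefs;
    // would need SmallVector<ReferenceToken> - and thus ReferenceToken from
    // astutils.h - to be complete wherever ~TokenImpl() is instantiated,
    // unless the destructor is defined out of line in token.cpp.
};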

Contributor

You can just move the ReferenceToken to the token.h header.

Collaborator Author

True, but followAllReferences() is the code which produces the ReferenceToken objects so I would prefer to keep those together.

It is unfortunate enough that smallvector.h spilled into more code.

Collaborator Author

The spillage could be addressed by moving TokenImpl into the implementation (hence the name). I tried that in the past, but unfortunately the code relies on the implementation being exposed - I will give it another try.
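
The idea mentioned here is essentially the pimpl pattern (a sketch, not the current headers):

// token.h
struct TokenImpl; // definition would move into token.cpp
class Token {
    // ...
    TokenImpl* mImpl; // callers only see the pointer, so headers such as
                      // smallvector.h no longer leak out of token.h
};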

Collaborator Author

Onward into the next rabbit hole: #7831.

@firewave
Collaborator Author

firewave commented Jan 7, 2025

Something really weird is going on here in the UBSAN job:

Check time: cli/threadexecutor.cpp: 0.53017s
Check time: cli/processexecutor.cpp: 1.41327s
Check time: lib/addoninfo.cpp: 0.172107s
Check time: lib/analyzerinfo.cpp: 0.636273s

The timing information for cli/cmdlineparser.cpp is missing ...

@firewave
Collaborator Author

firewave commented Jan 7, 2025

Before

Check time: cli/cmdlineparser.cpp: 1091.73s
[...]
Check time: lib/checkio.cpp: 219.069s
[...]
Check time: lib/symboldatabase.cpp: 191.785s
[...]
Check time: lib/tokenize.cpp: 290.026s

After

Check time: cli/cmdlineparser.cpp: 760.299s
[...]
Check time: lib/checkio.cpp: 168.103s
[...]
Check time: lib/symboldatabase.cpp: 145.913s
[...]
Check time: lib/tokenize.cpp: 236.561s

@firewave marked this pull request as ready for review on January 8, 2025 13:38
@firewave marked this pull request as draft on January 19, 2025 16:48
@firewave
Collaborator Author

Could we merge this without the debug integration for now (I will file a ticket about doing that)? I would like to clean up and test the existing debug output first (see #7258 - as usual, stalled) before I add new data. And it would greatly speed up the CI, as well as give a baseline to compare against in case running it as a pass has a significant performance impact.

@firewave marked this pull request as ready for review on April 5, 2025 08:08
@pfultz2
Contributor

pfultz2 commented Apr 5, 2025

This should be done in the symboldatabase either before or during exprids.

return a;
}
if (analyzeTok) {
auto a = analyze_f(tok);
Contributor

I don't really like this at all; it is harder to follow and harder to extend in the future.

Collaborator Author

I dropped it for now.

-    bool inconclusive = true,
-    ErrorPath errors = ErrorPath{},
-    int depth = 20);
+SmallVector<ReferenceToken> followAllReferences(const Token* tok, bool temporary = true);
Contributor

Since we are removing the inconclusive flag, we should add that to the ReferenceToken so we can know whether it was inconclusive or not.

Collaborator Author

The removed flags were not used outside of astutils.cpp itself. If that information is needed in the future we can make the suggested change.
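
If that information is needed later, the suggested change could look roughly like this (assuming ReferenceToken currently carries the referenced token and its error path):

struct ReferenceToken {
    const Token* token;
    ErrorPath errors;
    bool inconclusive{}; // proposed addition: record whether following was inconclusive
};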

lib/token.cpp Outdated
return mTokensFrontBack.list.getFiles()[mImpl->mFileIndex];
}

const SmallVector<ReferenceToken>& Token::refs() const
Contributor

This is missing the bool temporary parameter.

Collaborator Author

That was intentional - added.
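
One way the parameter could be reflected in the cache (a sketch only; the second member is hypothetical and this is not necessarily what the PR ended up doing):

const SmallVector<ReferenceToken>& Token::refs(bool temporary) const
{
    // cache the two parameterizations separately
    auto*& cache = temporary ? mImpl->mRefsTemporary : mImpl->mRefs; // mRefsTemporary is hypothetical
    if (!cache)
        cache = new SmallVector<ReferenceToken>(followAllReferences(this, temporary));
    return *cache;
}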

@firewave marked this pull request as draft on April 7, 2025 07:33
@firewave
Collaborator Author

firewave commented Apr 7, 2025

Thanks for the feedback. Some of the shortcomings were intentional, since they seemed to make things simpler and I wanted to get the performance improvement in. But I remembered the changes as being simpler than they actually are, so I need a bit of time to get into it again.

@firewave
Collaborator Author

firewave commented May 7, 2025

@pfultz2 How should the information be presented in the debug output? I also assume it should be a separate section.

@firewave force-pushed the followref-cache branch 2 times, most recently from fb74dde to 6699af7 on June 3, 2025 12:18
@firewave
Collaborator Author

firewave commented Jun 3, 2025

This should be done in the symboldatabase either before or during exprids.

In the limited cases I tested, that produced considerably more calls to followAllReferences() but did not affect the overall performance much. This still needs to be tested with the selfcheck.

@firewave changed the title from "introduced a cache for followAllReferences() calls with default parameters" to "introduced a cache for followAllReferences() calls" on Jun 4, 2025
@firewave
Collaborator Author

Given the improvement this change provides and how it affects the CI (we are actually getting even slower, and not having this applied makes it harder to pinpoint the other hot spots), I would really prefer if we could just merge the very first version and do the adjustments as follow-ups.

@firewave
Collaborator Author

Having this done in incremental commits would also make it possible to bisect differences in behavior/performance based on the stage at which this cache is generated. Especially after I tried to add a different cache in #7573, which relies on the results not being cached beyond a certain point (which might also cause issues here if this is stored too early - but that might not be reflected in our tests).

@firewave force-pushed the followref-cache branch 2 times, most recently from 1031202 to 92ad6f3 on September 15, 2025 18:39
@firewave
Collaborator Author

I dropped the changes to generate these beforehand as it was too unspecific. The performance gains are just too big to delay this any longer. I will file a ticket and publish a PR with the WIP code later.

And I would really prefer to land this in separate commits anyway, as that will make it easier to distinguish possible issues caused by adding the cache from those caused by precomputing the cache.

I am also not confident that the data would actually be correct when precomputed, after working on #7573 (I am trying to improve the guardrails so we could detect that).

I also confirmed that adding the cache shows no differences with #7800.

@firewave marked this pull request as ready for review on September 15, 2025 18:49