
Increase assertAllCancellableTasksAreCancelled timeout #89744


Conversation

@pxsalehi (Member)

The following two failures happen rarely, but both fail in the same `assertBusy` block. I don't have a clue why and couldn't reproduce them. Considering the number of checks in that block, a larger timeout is probably more suitable. (The test history also shows it is not uncommon for those tests to take 2-3s, so hitting the 10s timeout every few thousand runs seems likely, IMO!)
Relates #88884, #88201
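For context, a rough sketch of what bumping that timeout looks like, assuming the change simply passes an explicit limit to `assertBusy` instead of relying on its 10-second default. The wrapper name and the 30-second value below are illustrative, not the actual diff:

```java
// Rough sketch only, not the actual diff: the wrapper name and the 30-second
// value are illustrative. ESTestCase.assertBusy(block) retries the block for at
// most 10 seconds by default; the three-argument overload takes an explicit limit.
import java.util.concurrent.TimeUnit;

import org.elasticsearch.test.ESTestCase;

final class TaskCancellationChecks {

    // Hypothetical wrapper around the per-node checks that were hitting the timeout.
    static void assertAllCancellableTasksAreCancelled(Runnable perNodeChecks) throws Exception {
        // Before: ESTestCase.assertBusy(perNodeChecks::run);  // implicit 10s limit
        // After: same checks, more headroom before the test is failed.
        ESTestCase.assertBusy(perNodeChecks::run, 30, TimeUnit.SECONDS);
    }
}
```

A bigger ceiling only changes how long a slow-but-healthy run may retry before the assertion gives up; genuinely stuck tasks still fail the test, just a bit later.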

@pxsalehi added labels on Aug 31, 2022: >test (Issues or PRs that are addressing/adding tests), :Distributed Coordination/Task Management (Issues for anything around the Tasks API, both persistent and node level), auto-merge-without-approval (Automatically merge pull request when CI checks pass; NB doesn't wait for reviews!), auto-backport-and-merge, v8.3.4, v7.17.7, v8.4.2
@elasticsearchmachine added labels on Aug 31, 2022: Team:Distributed (Obsolete; meta label for the distributed team, replaced by Distributed Indexing/Coordination), v8.5.0
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-distributed (Team:Distributed)

@kingherc (Contributor) left a comment


LGTM, assuming CI passes. Hope the timeout really was the reason for the failures! 🤞

@pxsalehi (Member, Author)

Thanks @kingherc. Do you know if backporting to 8.3 is actually necessary here? My impression is that since main is now 8.5, only the CI for 8.4 will keep running periodically, and there are no more 8.3 changes.

@kingherc (Contributor)

@pxsalehi looking at the release schedule, I believe 8.3 may not be necessary. But feel free to check with the team as well, to be certain.

@pxsalehi (Member, Author)

Yes, I thought the same. I think I'll take out the 8.3. Thanks.

@pxsalehi removed the v8.3.4 label on Aug 31, 2022
@pxsalehi (Member, Author)

@elasticmachine update branch

@elasticsearchmachine merged commit 68782d6 into elastic:main on Aug 31, 2022
@pxsalehi deleted the ps-220830-qa-smokeTest-increase-timeout branch on August 31, 2022, 10:30
@elasticsearchmachine (Collaborator)

💔 Backport failed

Status  Branch  Result
        7.17
        8.4     Commit could not be cherrypicked due to conflicts

You can use sqren/backport to backport manually by running `backport --upstream elastic/elasticsearch --pr 89744`.

pxsalehi added a commit to pxsalehi/elasticsearch that referenced this pull request Aug 31, 2022
elasticsearchmachine pushed a commit that referenced this pull request Aug 31, 2022
pxsalehi added a commit to pxsalehi/elasticsearch that referenced this pull request Aug 31, 2022