Master back 1.2 again 2 #242


Open · wants to merge 7 commits into base: release-1.2
Conversation

@nammn (Collaborator) commented Jul 8, 2025

Summary

Proof of Work

Checklist

  • Have you linked a jira ticket and/or is the ticket in the title?
  • Have you checked whether your jira ticket required DOCSP changes?
  • Have you checked for release_note changes?

Reminder (Please remove this when merging)

  • Please approve or request changes on the PR promptly; keep PRs in review for as short a time as possible
  • Our Short Guide for PRs: [Link](https://docs.google.com/document/d/1T93KUtdvONq43vfTfUt8l92uo4e4SEEvFbIEKOxGr44/edit?tab=t.0)
  • Remember the following Communication Standards - use comment prefixes for clarity:
    • blocking: Must be addressed before approval.
    • follow-up: Can be addressed in a later PR or ticket.
    • q: Clarifying question.
    • nit: Non-blocking suggestions.
    • note: Side-note, non-actionable. Example: Praise
    • --> no prefix is considered a question

anandsyncs and others added 7 commits July 4, 2025 14:58
# Summary

Revert the change that built only the last 3 OM versions with the last 3
Operator versions.
We want to build all possible OM versions with the last 3 Operator versions.
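The resulting build matrix can be sketched as follows (the version lists here are illustrative placeholders, not the real release data, which the pipeline reads from release metadata):

```python
from itertools import product

# Illustrative version lists; the real build reads these from release metadata.
om_versions = ["6.0.27", "7.0.15", "8.0.5", "8.0.6"]  # all OM versions
operator_versions = ["1.0.0", "1.1.0", "1.2.0", "1.2.1"]  # full history

# Build every OM version against only the last 3 Operator versions.
last_three_operators = operator_versions[-3:]
build_matrix = list(product(om_versions, last_three_operators))

print(len(build_matrix))  # 4 OM versions x 3 Operator versions = 12 builds
```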
# Summary

We sometimes run into errors, and tracing them makes it easier to
identify which tasks failed for later analysis.

This pull request enhances the `queue_exception_handling` function in
`pipeline.py` by integrating OpenTelemetry tracing and adding detailed
metrics to improve observability of task queue error processing.

### Observability Enhancements:

* Added OpenTelemetry tracing to `queue_exception_handling`.
* Introduced metrics to track task queue processing, including:
  - Total number of tasks (`mck.agent.queue.tasks_total`).
  - Count of tasks with exceptions (`mck.agent.queue.exceptions_count`).
  - Success rate of tasks (`mck.agent.queue.success_rate`).
  - Types of exceptions encountered (`mck.agent.queue.exception_types`).
  - Boolean flag indicating whether exceptions were found (`mck.agent.queue.has_exceptions`).
* Updated logging to provide more granular detail about exceptions
encountered in the task queue.
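A minimal sketch of how these attributes could be derived (the `TaskResult` type is a hypothetical stand-in; the real `pipeline.py` task structure may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskResult:
    # Hypothetical task result type; the real pipeline.py structure may differ.
    name: str
    exception: Optional[BaseException] = None

def compute_queue_metrics(tasks: list) -> dict:
    """Derive the span attributes listed above from a batch of task results."""
    exceptions = [t.exception for t in tasks if t.exception is not None]
    total = len(tasks)
    return {
        "mck.agent.queue.tasks_total": total,
        "mck.agent.queue.exceptions_count": len(exceptions),
        "mck.agent.queue.success_rate": (total - len(exceptions)) / total if total else 1.0,
        "mck.agent.queue.exception_types": sorted({type(e).__name__ for e in exceptions}),
        "mck.agent.queue.has_exceptions": bool(exceptions),
    }

# In queue_exception_handling these values would be attached to the
# OpenTelemetry span, along the lines of:
#   with tracer.start_as_current_span("queue_exception_handling") as span:
#       for key, value in compute_queue_metrics(tasks).items():
#           span.set_attribute(key, value)
```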

## Proof of Work

<!-- Enter your proof that it works here.-->

## Checklist
- [ ] Have you linked a jira ticket and/or is the ticket in the title?
- [ ] Have you checked whether your jira ticket required DOCSP changes?
- [ ] Have you checked for release_note changes?


---------

Co-authored-by: Anand <[email protected]>
Co-authored-by: Anand Singh <[email protected]>
# Summary

This pull request adds unit tests for the `MongoDBSearchReconciler` in
the `operator` package. These tests cover various reconciliation
scenarios, including success cases, failure cases, and edge cases.
Additionally, helper functions and mock setups are added.

### Unit Tests for `MongoDBSearchReconciler`:

* **Basic Reconciliation Scenarios:**
  - Added tests for successful reconciliation (`TestMongoDBSearchReconcile_Success`) and scenarios where the `MongoDBSearch` resource or its source is missing (`TestMongoDBSearchReconcile_NotFound`, `TestMongoDBSearchReconcile_MissingSource`).

* **Failure Cases:**
  - Implemented tests to validate failure conditions, such as unsupported MongoDB versions (`TestMongoDBSearchReconcile_InvalidVersion`), TLS-enabled configurations (`TestMongoDBSearchReconcile_TLSNotSupported`), and multiple `MongoDBSearch` resources referencing the same MongoDB instance (`TestMongoDBSearchReconcile_MultipleSearchResources`).

### Helper Functions and Mock Setup:

* **Helper Functions:**
  - Added helper functions to create mock `MongoDBCommunity` and `MongoDBSearch` resources (`newMongoDBCommunity`, `newMongoDBSearch`) and build expected configurations (`buildExpectedMongotConfig`).


## Proof of Work

Tests Pass

## Checklist
- [x] Have you linked a jira ticket and/or is the ticket in the title?
- [x] Have you checked whether your jira ticket required DOCSP changes?
- [x] Have you checked for release_note changes?

# Summary

The E2E test `e2e_multi_cluster_replica_set_scale_up` has been flaky, and
@lucian-tosa suggested that we fix it. It was failing while waiting for
the statefulsets (STSs) of a multi-cluster MongoDB deployment to reach
the correct number of ready replicas. The problem was that sometimes,
after the `MongoDBMultiCluster (mdbmc)` resource reached the Running
phase (which implies all STSs are ready), some of the STSs transitioned
back into a not-ready state.
When we see that the `mdbmc` resource is Running, we verify that the
STSs have the correct number of replicas; because of the problem above
(STSs transitioning from ready to not ready), the STSs did not have the
correct number of ready replicas and the tests failed.
The STSs were transitioning from ready to not ready because the pods
they manage did the same: a pod went from ready to not ready and then
eventually back to ready. Looking into it further, we found that a pod
behaves like this because its readiness probe sometimes fails
momentarily. This is documented in much more detail in the document
[here](https://jira.mongodb.org/browse/CLOUDP-329231).

The ideal fix would be to figure out why the readiness probe fails and
address that. This PR instead applies a workaround that changes the test
slightly to wait for the STSs to reach the correct number of replicas.
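The workaround amounts to polling the STS ready-replica count until it stabilizes at the expected value, which tolerates pods that flap briefly to not-ready. A minimal sketch (the helper name and the readiness source are illustrative; the real test uses the repo's own wait helpers and the Kubernetes API):

```python
import time

def wait_for_sts_ready(get_ready_replicas, expected, timeout=600, interval=5):
    """Poll until the statefulset reports the expected number of ready replicas.

    get_ready_replicas: callable returning the STS's current readyReplicas
    count (e.g. read via the Kubernetes API). Waiting on the count directly,
    instead of only on the mdbmc Running phase, tolerates pods that
    momentarily fail their readiness probe and flap to not-ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_ready_replicas() == expected:
            return True
        time.sleep(interval)
    return False
```

The key design point is that the test now keeps polling through a transient not-ready window instead of asserting the replica count once at the moment the `mdbmc` resource turns Running.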

Jira ticket: https://jira.mongodb.org/browse/CLOUDP-329422

## Proof of Work

Ran the test `e2e_multi_cluster_replica_set_scale_up` locally to confirm
that it passes consistently; I can no longer reproduce the flakiness.

## Checklist
- [x] Have you linked a jira ticket and/or is the ticket in the title?
- [x] Have you checked whether your jira ticket required DOCSP changes?
- [x] Have you checked for release_note changes?