
Introduce PVC based repo-host for pgbackrest #35


Merged: 86 commits into v0.7.0-dev on May 31, 2024
Conversation

RafiaSabih (Contributor)

With this commit, a PVC-based backup is possible for pgbackrest. To use it, set Spec.Backup.Pgbackrest.Repo[i].Storage = "pvc" in the manifest; this also requires providing Spec.Backup.Pgbackrest.Repo[i].pvcSize = 'xGi'. When this option is set, a StatefulSet with one replica is created for the repo-host. The required global and db sections of the pgbackrest configuration file pgbackrest_instance.conf should be set via the appropriate ConfigMaps for both the postgres pods and the repo-host pod.

@RafiaSabih RafiaSabih added the enhancement New feature or request label Apr 16, 2024
@RafiaSabih RafiaSabih requested a review from Schmaetz April 16, 2024 13:19
RafiaSabih (Contributor, Author) commented Apr 16, 2024

Example manifest to test this feature:

```yaml
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster2-pvc
spec:
  dockerImage: 'docker.io/cybertecpostgresql/cybertec-pg-container:postgres-16.2-2-rc1'
  numberOfInstances: 2
  postgresql:
    version: '16'
  teamId: acid
  volume:
    size: 5Gi
  backup:
    pgbackrest:
      global:
        repo1-path: /repo1/
        repo1-retention-full: '7'
        repo1-retention-full-type: count
      image: docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.2-2-rc1
      repos:
        - name: repo1
          schedule:
            full: 30 2 * * *
          storage: pvc
          volume:
            size: 1Gi
```
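As a quick illustration of the contract the description sets up (a repo with pvc storage must also declare a size for the repo-host volume), here is a minimal standalone Python sketch; `check_pvc_repos` is a hypothetical helper written for this example, not part of the operator:

```python
# Minimal sanity check (hypothetical helper, not operator code): every repo
# that selects pvc storage must carry a volume size, since the operator needs
# it to size the repo-host StatefulSet's PVC.

manifest = {
    "spec": {
        "backup": {
            "pgbackrest": {
                "repos": [
                    {"name": "repo1", "storage": "pvc", "volume": {"size": "1Gi"}},
                ]
            }
        }
    }
}

def check_pvc_repos(doc):
    """Return the names of pvc-backed repos that are missing a volume size."""
    repos = doc["spec"]["backup"]["pgbackrest"]["repos"]
    return [
        r["name"]
        for r in repos
        if r.get("storage") == "pvc" and not r.get("volume", {}).get("size")
    ]

print(check_pvc_repos(manifest))  # → []
```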

rafia sabih and others added 28 commits May 16, 2024 12:39
The code was counting all matching pods and then waiting for the master and replica counts to add up to this. Could be improved by adding a role "aux" to other pods and waiting for them too, but it's not clear that we want to do this.

If the backup archive is not available during deletion, PostgreSQL shutdown will hang on archiving. Deleting the main STS before the backup one ensures proper sequencing, because STS deletion waits for its pods to be deleted.

Now they only get dropped when backup repositories are deleted or the cluster is deleted. Cert syncing and repo-host volume resizing are still not in here.

* Pass repo spec instead of only name to generation funcs
* Rename name parameter to backupType
* Generate selector parameter so that it makes use of ClusterLabel using
  existing functions.
* Remove extra overwrite of selector env
* Clean up log spam from cronjob syncing

Restore.repo is the same as repos.name. Restore.options is a map of option: value instead of an array of options.

When a replica is started before the leader promotes, it fails to rewind and gets stuck.

The feature was conflicting with being able to pass in secrets for S3 configuration. Now using the additional-volumes mechanism as it is intended. This also enables not adding pgbackrest volumes to logical backup jobs.
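The commit note on Restore.repo and Restore.options above implies a restore section along these lines. This is a hedged sketch: only the repo and options field names come from the commit notes; the enclosing structure and the specific options shown are assumptions for illustration.

```yaml
# Sketch only: 'repo' and 'options' come from the commit notes; the
# enclosing path and the example option values are assumptions.
backup:
  pgbackrest:
    restore:
      repo: repo1            # must match an entry in repos[].name
      options:               # map of option: value, not an array of options
        type: time
        target: "2024-05-20 10:00:00"
```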
@Schmaetz Schmaetz merged commit fac7246 into v0.7.0-dev May 31, 2024
0 of 2 checks passed