Description
I'm struggling to get WAL-E and logical backups to work. I went through all the configuration options I could find by reading the source code and the environment-variable tickets on GitHub. I'm trying to connect to DigitalOcean Spaces, but without success.
I'm getting the following error:
wal_e.main INFO MSG: starting WAL-E
DETAIL: The subcommand is "backup-push".
STRUCTURED: time=2019-07-24T05:26:13.333584-00 pid=33
wal_e.blobstore.s3.s3_util WARNING MSG: WALE_S3_ENDPOINT defined, ignoring AWS_REGION
HINT: AWS_REGION is only intended for use with AWS S3, and not interface-compatible use cases supported by WALE_S3_ENDPOINT
STRUCTURED: time=2019-07-24T05:26:13.366082-00 pid=33
2019-07-24 05:26:14.023 43 LOG Starting pgqd 3.3
2019-07-24 05:26:14.023 43 LOG auto-detecting dbs ...
wal_e.operator.backup INFO MSG: start upload postgres version metadata
DETAIL: Uploading to s3://project/spilo/project-postgres-cluster/wal/basebackups_005/base_000000010000000000000003_00000208/extended_version.txt.
STRUCTURED: time=2019-07-24T05:26:14.282007-00 pid=33
wal_e.operator.backup WARNING MSG: blocking on sending WAL segments
DETAIL: The backup was not completed successfully, but we have to wait anyway. See README: TODO about pg_cancel_backup
STRUCTURED: time=2019-07-24T05:26:14.382303-00 pid=33
This is the configuration I have so far:
# used in the main ConfigMap (the default acid minimal cluster using the latest images); key: pod_environment_configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: project-postgres-pod-config
  namespace: default
data:
  WAL_S3_BUCKET: project
  # AWS_REGION: eu-central-1
  AWS_ENDPOINT: https://fra1.digitaloceanspaces.com
  AWS_ACCESS_KEY_ID: REPLACE_AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: REPLACE_AWS_SECRET_ACCESS_KEY
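
For context, this ConfigMap is referenced from the operator's main configuration via the pod_environment_configmap key, which as far as I understand looks roughly like this (a sketch based on my reading of the docs, not verified):

# in the postgres-operator ConfigMap
pod_environment_configmap: project-postgres-pod-config

Also, the warning in the log above shows that WALE_S3_ENDPOINT ends up defined. According to the WAL-E README that variable uses its own protocol+convention://host:port format, so an alternative worth trying might be (again unverified):

WALE_S3_ENDPOINT: https+path://fra1.digitaloceanspaces.com:443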
I will try an AWS bucket instead of DigitalOcean Spaces to rule things out. Still, I feel it would be very helpful if there were some documentation on this subject.
I just read about mounting a secret with AWS/GCP credentials. One can define this in the operator's CRD configuration file:
# additional_secret_mount: "some-secret-name"
# additional_secret_mount_path: "/some/dir"
What should the content of the secret be? The mount path also doesn't make a lot of sense to me. Is there any documentation on this? My assumption would be that additional_secret_mount points to a Kubernetes secret containing the following data:
aws_access_key_id: 'key_id_here'
aws_secret_access_key: 'access_token_here'
and the additional_secret_mount_path would be:
~/.aws
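
Spelled out as a manifest, the secret I have in mind would look like this (the name and keys here are my guesses, not taken from the docs):

apiVersion: v1
kind: Secret
metadata:
  name: some-secret-name
  namespace: default
type: Opaque
stringData:
  # placeholder values; my assumed key names, matching the data above
  aws_access_key_id: key_id_here
  aws_secret_access_key: access_token_here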
Then, based on this assumption, the operator should generate a file named credentials that looks like this:
[default]
aws_access_key_id = key_id_here
aws_secret_access_key = access_token_here
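
That said, since Kubernetes mounts each secret key as its own file, mounting the secret above at ~/.aws would produce two files (aws_access_key_id and aws_secret_access_key) rather than one credentials file. If the operator doesn't assemble the file itself, the secret would presumably need to carry the INI content under a single credentials key, e.g. (again just a sketch of my assumption):

apiVersion: v1
kind: Secret
metadata:
  name: some-secret-name
stringData:
  # the key name becomes the file name when the secret is mounted
  credentials: |
    [default]
    aws_access_key_id = key_id_here
    aws_secret_access_key = access_token_here

Mounted at /some/dir, that would show up as /some/dir/credentials. Is that how additional_secret_mount and additional_secret_mount_path are meant to be used?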