Additional volumes capability #736
Conversation
Force-pushed from 6938533 to bf4455e
Thanks for the PR, and sorry for the late reply. I like the idea, since I'm almost sure it can enable more interesting approaches. A few comments about the current implementation:
@@ -1334,11 +1366,11 @@ func (c *Cluster) generateCloneEnvironment(description *acidv1.CloneDescription)
c.logger.Info(msg, description.S3WalPath)
envs := []v1.EnvVar{
- v1.EnvVar{
+ {
Name: "CLONE_WAL_S3_BUCKET",
Value: c.OpConfig.WALES3Bucket,
},
- v1.EnvVar{
+ {
Name: "CLONE_WAL_BUCKET_SCOPE_SUFFIX",
Value: getBucketScopeSuffix(description.UID),
},
I used the same approach as the secrets. A sidecar container may need the additional volume too. For example, I can use an initContainer which grabs a new certificate from my PKI and shares it via an emptyDir with the Spilo container and the telegraf sidecar (for TLS auth). That's typically why I'm doing this PR.
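A rough sketch of what this use case could look like in the cluster manifest, assuming the initContainers and sidecars manifest fields are used alongside the new additionalVolumes key; the image names, command, and paths are placeholders, not taken from the PR:

```yaml
spec:
  # init container that fetches/creates the TLS material and writes it
  # into the shared emptyDir (image and command are placeholders)
  initContainers:
    - name: cert-init
      image: registry.example.com/pki-client:latest
      command: ["sh", "-c", "fetch-cert --out /etc/tls"]
      volumeMounts:
        - name: tls-certs
          mountPath: /etc/tls
  # telegraf sidecar that authenticates to PostgreSQL over TLS
  sidecars:
    - name: telegraf
      image: telegraf:1.13
  # the additional volume is added to the pod and mounted into the
  # Spilo container and the sidecar by the operator
  additionalVolumes:
    - name: tls-certs
      mountPath: /etc/tls
      volumeSource:
        emptyDir: {}
```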
Since the mounted volume is not managed by the operator itself, I don't think the operator should handle resizing, because we don't know whether the additional volume is ext4 or xfs.
Yes! You're right, we should forbid overriding it.
Yes, I know. It's because I've run a
I'll try to add one.
Is that why the CI tests are failing?
Force-pushed from ef74187 to 03674b7
If I understand correctly from this part, if you define a volume for one sidecar, it will also be mounted to all other containers (including the Spilo container), which is probably undesired. Or am I missing something?

for i := range podSpec.Containers {
	mounts := podSpec.Containers[i].VolumeMounts
	for _, v := range additionalVolumes {
		mounts = append(mounts, v1.VolumeMount{
			Name:      v.Name,
			MountPath: v.MountPath,
			SubPath:   v.SubPath,
		})
	}
	podSpec.Containers[i].VolumeMounts = mounts
}
Yes, this part is questionable, but I believe we still have to synchronize volumes via add/remove at the very least, just to give a possibility to manage them somehow via the manifest.
Right, and also cross-check that "additional volumes" are not clashing with each other.
It's not clear to me yet, but it could be related. I would suggest fixing the build first (or at least checking whether it's the same in your environment), and then we can take a look at the CI.
It is desired. In my use case I want to generate a certificate (.key and .pem) in an initContainer, for example. I can also add a telegraf sidecar; it will have the same shared volume, used to authenticate to my PostgreSQL over TLS.
Interesting. Then it conflicts with the ideas in #625, since with this implementation one can't create e.g. a separate volume for one tablespace, as it would be mounted to all the sidecars. Is it possible to make this implementation more flexible to satisfy both use cases?
Yes, I can add a
Yes. My initContainer also generates a Diffie-Hellman parameter .pem file and fetches the CA cert from my existing PKI. @erthalion I've updated the PR with the
@seuf what is your plan to manage certificate rotation?
We have long-term certificates for stateful apps. So until the certificate expires, we can just bump the version of PostgreSQL in the manifest and Patroni will do the rolling update with the new certificates :)
Thanks. But after thinking about this, I believe it would be less extensible. What about having an option, e.g.
So, @seuf, what do you think about this suggestion?
Nope, for that purpose we will use a special keyword
I'm not asking you to implement the target-container part; you can leave it empty. You can keep the current logic, just transform the schema and names a bit so that this can be implemented in the future.
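To make this concrete, a rough sketch of what the reshaped schema might look like; the targetContainers field name is assumed here for illustration (it is not spelled out in this thread), and "all" stands for the special keyword that mounts a volume into every container:

```yaml
additionalVolumes:
  # mounted only into the named sidecar container
  - name: telegraf-config
    mountPath: /etc/telegraf
    targetContainers:        # field name assumed for illustration
      - telegraf
    volumeSource:
      configMap:
        name: telegraf-config
  # the special keyword "all" mounts the volume into every container
  - name: shared-certs
    mountPath: /etc/tls
    targetContainers:
      - all
    volumeSource:
      emptyDir: {}
```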
Force-pushed from 52ab39b to cb941af
Hello @erthalion, I've updated the PR. The CI still fails with
@seuf thanks for implementing this. If I am not mistaken, this could also allow defining an emptyDir to hold the PostgreSQL Unix socket, allowing a sidecar (e.g. postgres exporter) to access the database without doing full TCP, SSL, and authentication simply to scrape some metrics.
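A minimal sketch of that idea, assuming the Unix socket directory is /var/run/postgresql and that the metrics sidecar mounts the same shared volume; this is an illustration, not something proposed in the PR itself:

```yaml
additionalVolumes:
  # shared emptyDir holding the PostgreSQL unix socket, so a metrics
  # sidecar can connect over the socket instead of TCP/SSL
  - name: socket-dir
    mountPath: /var/run/postgresql
    volumeSource:
      emptyDir: {}
```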
@seuf thank you for this cool PR. I pieced together some code and was about to create a PR myself just to allow sharing the PostgreSQL socket. Do you need any help / testing @seuf? Could this be merged for 1.5 @FxKu?
@frittentheke @seuf yeah, we plan to merge it for the next version. It overlapped with #798, so we had to pick one PR first, and we went for TLS as it covered one specific use case. However, if the conflicts are resolved, I see a good chance it can be merged. @erthalion how do you see it, as the reviewer?
…to be mounted or not
Check that there are no volume mount path clashes or "all" vs ["a", "b"] mixtures. Also change the default behaviour to mount to "postgres" container.
Force-pushed from 7bbb5e5 to 70eee0e
I've rebased this branch onto master, but I can't run the tests successfully.
We switched to Go 1.14 recently. I remember I had to add quotes in
volumes []v1.Volume,
additionalVolumes []acidv1.AdditionalVolume,
Passing volume[] twice is giving me some headache. The first one was introduced by @zimbatm for the TLS secrets. Maybe that could be mapped to additionalVolumes too, or it could just be renamed to tlsVolumes. Anyway, this can be done in an extra PR.
@FxKu -> would this mean that it's either TLS or additionalVolumes?
Our scenario would require both.
No, I thought both could be mapped into one array. Hm, but maybe it's also better to keep them separated to highlight the special role of TLS secrets. At least it has its own field in the manifest.
@FxKu if new functionality becomes available that fully covers a previously specially handled use case, there should be some sort of deprecation. This seems to be one of those cases where the "old" functionality is simply extended upon and folks can simply move to the new, more feature-rich way.
But this could also be the case for the PR implementing broader sidecar support (#890). Why have multiple things doing the same thing, creating ever more logic to "merge" them, and having to maintain these increasingly complicated code paths?
👍
👍
Thanks @seuf for your contribution. Also thanks @gertvdijk, @zimbatm, @frittentheke, @muff1nman
Additional Volumes
This pull request allows the operator to mount additional volumes in the PostgreSQL StatefulSet.
A new key is available in the postgresql manifest: additionalVolumes. For each volume listed, a volume will be added to the podSpec and a volumeMount to each container of the pod. Each item must contain a name, a mountPath, and a volumeSource definition. The volumeSource must be a Kubernetes volume declaration. This allows you to mount an existing persistentVolumeClaim, a configMap, or a shared emptyDir with the initContainer.
Example:
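A minimal sketch based on the description above; the volume names, the ConfigMap, and the PersistentVolumeClaim are hypothetical:

```yaml
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  # ... usual cluster spec (teamId, volume, numberOfInstances, ...)
  additionalVolumes:
    # mount an existing ConfigMap
    - name: extra-config
      mountPath: /etc/extra-config
      volumeSource:
        configMap:
          name: extra-config
    # emptyDir shared with an initContainer, e.g. for generated certificates
    - name: shared-certs
      mountPath: /etc/tls
      volumeSource:
        emptyDir: {}
    # an existing PersistentVolumeClaim that is not managed by the operator
    - name: exports
      mountPath: /exports
      volumeSource:
        persistentVolumeClaim:
          claimName: my-existing-pvc
```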
Related issues: