What you expected to happen?
Before today, I was running Puppet Server v5.3.0 to manage all of my Docker containers on my homelab server. After I upgraded to 6.1.0, containers will not start or be created.
What happened?
Everything was working OK on 5.3.0. I upgraded the puppetserver container to the latest version (6.1.0), and on the next puppet agent run all of my containers were deleted and would not restart.
I dug up issue #313, which was very similar, but I had already tried remove_container_on_start => false, and I was not using detach => true.
How to reproduce it?
Here is an example of the sort of manifest I've been using; this was happening with all 15+ containers I had set up.
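A minimal sketch of the kind of docker::run resource involved (the resource title, image, ports, and volumes are placeholders, not my real values):

```puppet
# Hypothetical example resource; names and values are illustrative only.
docker::run { 'unifi':
  image                     => 'jacobalberty/unifi:latest',
  ports                     => ['8443:8443'],
  volumes                   => ['/srv/unifi:/unifi'],
  # Already tried, per issue #313; it did not help:
  remove_container_on_start => false,
}
```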
Anything else we need to know?
I tracked this down to the .sh scripts that are created for systemd to run. The Health_Check_Command is being populated with just 's'. With no interval set, the .sh script fails to run.

By explicitly setting health_check_interval => 30 in all of my docker::run manifests, the containers are now starting up again (sketch below).
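For clarity, here is the workaround applied to the same placeholder resource; only the health_check_interval line is the actual change:

```puppet
docker::run { 'unifi':
  image                 => 'jacobalberty/unifi:latest',
  # Workaround: set an explicit interval so the generated systemd start
  # script gets a real health check interval instead of the bare 's'.
  health_check_interval => 30,
}
```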
Versions:
I never came across any errors, and didn't save one of the bad .sh scripts.
This is already fixed in master and will be resolved with the next release, which we are aiming to push out very soon. Version 3.1.0 of this module doesn't have support for Puppet 6; this is documented on the Forge and in the compatibility tab.
@mconway, @davejrt, thank you for this issue and for the overall project. I can confirm that the workaround health_check_interval => 30 works with Puppet 6. But you all already know this, and the fix is in master until the next release. If I pull the next release in the future and forget to remove health_check_interval => 30, will there be any side effects?