Add MAX_NOFILE environment variable to change the max number of open files #3


Closed
wants to merge 1 commit into from

Conversation

@xuhdev commented Mar 23, 2015

No description provided.


xuhdev commented Mar 23, 2015

I found that the max number of open files significantly affects how much RAM slapd uses. If I set it to 5000, it uses 10 MB; if I set it to 10000, it uses 15 MB. Before I lowered the value at all, it used 700 MB! I would consider this a very important option for slapd.
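For illustration (this snippet is not from the original report), the mechanism can be seen by lowering the soft open-files limit in a subshell: a process started from that subshell, such as slapd, inherits the lowered limit and sizes its per-descriptor allocations accordingly.

```shell
# Hypothetical demonstration: lower the soft open-files limit in a
# subshell and read it back. The parent shell is unaffected.
(
  ulimit -S -n 1024
  ulimit -S -n   # prints 1024
)
```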


dinkel commented Mar 24, 2015

Thanks for your investigations on this topic and thanks for sharing. I agree with you that this issue needs to be addressed in "my" image!

However, I currently don't know whether this is the best way, and I have two concerns:

  1. The issue you referenced and its follow-ups show, as far as I currently understand them, that there is ongoing work (although a little slow) to correct the general problem in Docker itself. I am therefore reluctant to duplicate and override that work.
  2. I also dislike making this limit user-configurable. I try to limit the options to "business-relevant" settings while being opinionated about technical details (e.g. I don't let the user choose the backend, but always stick to HDB).

What do you think about my reasoning?

I am thinking about this a bit more and trying to understand the issue more clearly. I will also first test what the memory consumption looks like in Docker >= 1.4.0, which should contain at least a partial fix.


xuhdev commented Mar 24, 2015

Thanks for your reply.

For point 1, I don't think this is a Docker bug, and there is nothing Docker can fix. The open-files limit for all containers is set globally before the Docker daemon starts, via systemd or upstart unit files. The limit inside each container can be lowered by the startup script, but not raised, so the global limit has to be fairly large to cover every workload. For most processes this number doesn't matter much, but slapd is sensitive to it. So something must be done here, not in Docker.
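As a sketch of what this describes (the file path and value are illustrative, not from the thread), the global limit for the Docker daemon under systemd would be set with a drop-in like:

```ini
# /etc/systemd/system/docker.service.d/nofile.conf (illustrative path)
[Service]
LimitNOFILE=65536
```

Every container then starts with this limit, and an entrypoint can only lower it from there.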

For point 2, I understand, but having a default value set would be effectively the same as not having the option at all for people who don't want to look into technical details, while leaving some flexibility for when the technical aspects matter.


dinkel commented Mar 25, 2015

Thanks for improving my understanding of point one.

For point two, I understand your "having a default value is like no option at all" argument, but I still don't like it too much.

I decided to update my entrypoint.sh file with a static `ulimit -n 8192` to limit the memory consumption to a reasonable amount.

There are a few forum posts reporting that a "very busy" OpenLDAP server can hit the 1024 open-file limit. At eight times that value, the limit will almost never be reached. Anyone running such a high-end OpenLDAP installation would probably understand OpenLDAP configuration much better than I do, and my assumption is that such a person would have their own specialized image anyway.
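A minimal sketch of what that change could look like (the script body here is assumed for illustration, not the actual entrypoint.sh):

```shell
#!/bin/sh
set -e

# Cap the open-files limit so slapd's per-descriptor allocations
# stay modest (see the memory figures earlier in this thread).
ulimit -n 8192

# Hand off to the container's command (e.g. slapd).
exec "$@"
```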

I hope you can live with my decision and will still use this image?

I really appreciate your help to make this a better image. Thanks a lot!

For now I'll leave this pull request open rather than reject it, so that you can still comment on this...


xuhdev commented Mar 25, 2015

That works for me. But the hard-coded limit may lead to permanent failures if someone sets the global open-files limit lower than 8192, since containers cannot raise this number. Perhaps this setting should be done before `set -e` is called, so that the container can still start even when the global limit is low?
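The suggested ordering might look like this (again a sketch, not the actual commit): before `set -e` takes effect, a failing command does not abort the script, so a hard limit below 8192 is tolerated instead of killing the container.

```shell
#!/bin/sh

# Attempt the limit first: `set -e` is not active yet, so if the hard
# limit is below 8192 this failure is ignored and startup continues.
ulimit -n 8192

set -e

exec "$@"
```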


dinkel commented Mar 25, 2015

Agreed, and corrected in bd213c1!

@xuhdev closed this Mar 30, 2015