
Excessive RAM usage on 1.2.2, increased 10x from 1.2.1 #242

Closed

naanselmo opened this issue Sep 20, 2018 · 17 comments

Comments

@naanselmo

Hello,

Essentially, just as the title says, RAM usage for my container increased by 10x (actually a bit more, like 12~13x) when upgrading from 1.2.1 to 1.2.2.

I used to have roughly 60~70MB RAM usage, but after upgrading to 1.2.2 the container uses over 800MB as soon as it starts.

The usage comes solely from the slapd process, which shouldn't be using nearly that much.

I'll be sticking to 1.2.1 for the time being but if you need any info from my system let me know.

Thanks!

@kopax

kopax commented Sep 26, 2018

Did anybody try to reproduce this?

@naanselmo
Author

If there's any information you need to help diagnose the issue, let me know. I rolled back to 1.2.1 shortly after noticing the problem and trying it out a few times, even on different nodes in my swarm.

@kopax

kopax commented Sep 27, 2018

Do you use TLS with TLS_ENFORCE=true?

@kopax

kopax commented Sep 27, 2018

Can you tell us more about how you performed the test? I am trying to reproduce.

@naanselmo
Author

I do not set TLS_ENFORCE in my environment variables, so I don't think so (unless it's enabled by default). My only environment variables are LDAP_ORGANISATION, LDAP_DOMAIN, LDAP_BASE_DN, and LDAP_ADMIN_PASSWORD.

I only measured it very empirically. I first noticed the container was getting killed, suspected a resource problem, and lifted my usual 128MB RAM limit. docker stats then showed usage spiking to 800MB, and our monitoring software confirmed an 800MB RAM spike right after the container came up on that node (which had only 2GB of RAM, making it very noticeable).
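
Roughly, the commands involved looked like this (the container name and environment values here are placeholders, not my real config):

docker run -d --name openldap-test \
  -e LDAP_ORGANISATION="Example Inc." \
  -e LDAP_DOMAIN="example.com" \
  -e LDAP_BASE_DN="dc=example,dc=com" \
  -e LDAP_ADMIN_PASSWORD="secret" \
  osixia/openldap:1.2.2
docker stats --no-stream openldap-test
# with my usual limit (docker run --memory 128m) the container is OOM-killed instead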

@michidk

michidk commented Oct 14, 2018

Running this image on Kubernetes and experiencing this too, on both 1.2.2 and 1.2.3, along with very high CPU usage. As soon as we start ldap, the whole node becomes unusable (100% CPU and ~1GB RAM). Tested on multiple nodes. I think I got it running once with SSL/TLS turned off via an environment variable.

@kopax

kopax commented Oct 14, 2018

How did you get 1.2.3?

@michidk

michidk commented Oct 14, 2018

I'm using the branch release-1.2.3. Since 1.2.2 didn't work, I thought the issue might have been fixed in 1.2.3.

@kopax

kopax commented Oct 14, 2018

OK, I did not see that. And you confirm the problem is not present in 1.2.1?

I've checked the changes, and this sounds very suspicious.

If so, could you please build the Dockerfile from the 1.2.1 source code and run the resulting image to see whether you also get these performance issues? (That would be expected, as the base OS may have changed.)

I doubt the changes in 1.2.2 were performance-related at any point, but I may be wrong.
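
Something like this should do it (assuming the Dockerfile sits at the repository root and the branch naming follows the release-1.2.3 pattern mentioned above):

git clone -b release-1.2.1 https://github.com/osixia/docker-openldap.git
cd docker-openldap
docker build -t openldap-from-source:1.2.1 .   # hypothetical local tag
docker run -d -e LDAP_ADMIN_PASSWORD=secret openldap-from-source:1.2.1
docker stats --no-stream   # compare memory against the Docker Hub image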

@michidk

michidk commented Oct 14, 2018

1.2.1 is working fine for me.

@ghost

ghost commented Oct 25, 2018

I'm having the issue as well using the 1.2.2 branch.

@perara

perara commented Dec 11, 2018

Same for me (I think on 1.2.2). I'm trying to set this up on a 1GB node.

First start is done...
*** Set environment for container process
*** Remove file /container/environment/99-default/default.startup.yaml
*** ignore : LANG = en_US.UTF-8 (keep LANG = en_US.UTF-8 )
*** ignore : LANGUAGE = en_US.UTF-8 (keep LANGUAGE = en_US:en )
*** Environment files will be proccessed in this order : 
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.yaml

*** --- process file : /container/environment/99-default/default.yaml ---
*** Run commands before process...
*** ------------ Environment dump ------------
*** LDAP_LOG_LEVEL = 256
*** LC_CTYPE = en_US.UTF-8
*** INITRD = no
*** HOME = /root
*** PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*** LANG = en_US.UTF-8
*** CONTAINER_SERVICE_DIR = /container/service
*** LANGUAGE = en_US:en
*** CONTAINER_LOG_LEVEL = 4
*** LC_ALL = en_US.UTF-8
*** HOSTNAME = 3b64ec3c7098
*** CONTAINER_STATE_DIR = /container/run/state
*** ------------------------------------------
*** Running /container/run/process/slapd/run...
*** /container/run/process/slapd/run started as PID 113
1048576
5c0fc3e3 @(#) $OpenLDAP: slapd  (May 23 2018 04:25:19) $
	Debian OpenLDAP Maintainers <[email protected]>
TLS: warning: ignoring dhfile
5c0fc3e3 ch_calloc of 1048576 elems of 704 bytes failed
slapd: ../../../../servers/slapd/ch_malloc.c:107: ch_calloc: Assertion `0' failed.
*** /container/run/process/slapd/run exited with status 0
*** Run commands before finish...
*** Killing all processes...
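
That ch_calloc line looks like the smoking gun: slapd sizes its connection table from the process's open-file limit, and 1048576 descriptors at 704 bytes each is exactly 704 MiB, right in line with the ~800MB spikes reported above (the bare "1048576" printed before slapd's banner appears to be that limit being echoed). A quick way to inspect and cap the limit (the plain debian image is just a stand-in to read the Docker daemon's default; the --ulimit values are an illustration):

docker run --rm debian sh -c 'ulimit -n'
# prints 1048576 on hosts whose daemon default matches the log above
docker run -d --ulimit nofile=1024:1024 osixia/openldap:1.2.2
# environment variables omitted for brevity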

@Ashniu123

This issue still persists in 1.2.3; 1.2.1 doesn't have this problem.
Is there any fix in progress?

@m0wer

m0wer commented Feb 7, 2019

If you set the LDAP_NOFILE=1024 environment variable when launching the container, it works as expected. This shouldn't be necessary, because 1024 is the default value set in default.startup.yaml, but for some reason that default hasn't been applied since v1.2.2.
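
For example (the image tag and the other environment values are placeholders; keep whatever your existing setup uses):

docker run -d \
  -e LDAP_NOFILE=1024 \
  -e LDAP_ORGANISATION="Example Inc." \
  -e LDAP_DOMAIN="example.com" \
  -e LDAP_ADMIN_PASSWORD="secret" \
  osixia/openldap:1.2.2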

@BertrandGouny
Member

@m0wer thanks! LDAP_NOFILE should be set in the default.yaml file so that it is available to process.sh.
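
A minimal sketch of that fix (the file paths come from the image layout shown in the log above; the run-script contents are an assumption, not the actual code):

# /container/environment/99-default/default.yaml (rather than
# default.startup.yaml, which is removed after the first start):
#   LDAP_NOFILE: 1024
#
# /container/run/process/slapd/run then applies it before slapd
# sizes its connection table:
ulimit -n "${LDAP_NOFILE:-1024}"
exec /usr/sbin/slapd -h "ldap:///" -u openldap -g openldap -d "$LDAP_LOG_LEVEL"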

@BertrandGouny
Member

Can you guys test osixia/openldap:1.2.4-dev?
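
For example (environment values are placeholders):

docker pull osixia/openldap:1.2.4-dev
docker run -d --name ldap-124-test \
  -e LDAP_ORGANISATION="Example Inc." \
  -e LDAP_DOMAIN="example.com" \
  -e LDAP_ADMIN_PASSWORD="secret" \
  osixia/openldap:1.2.4-dev
docker stats --no-stream ldap-124-test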

@m0wer

m0wer commented Feb 8, 2019

@BertrandGouny LGTM. Works with ~46MB of RAM without needing to set the LDAP_NOFILE environment variable.

Thanks!
