Does anyone here know a good way to tell what Apache is chewing up memory on?
I have Apache running purely as a reverse proxy (not hosting anything), with timeouts configured everywhere, good cleanup on threads, and the scoreboard staying clean. Even so, after a clean restart it slowly climbs throughout the day, using more and more memory. When it hits around 85% we start seeing core notices, then warnings, and finally errors and death:
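To be clear, I can watch the totals climb just fine; what I can't tell is what inside Apache the memory is going to. All I have so far is something like this (assuming the processes show up as httpd; substitute apache2 on Debian/Ubuntu):

# per-child resident memory, largest first
ps -C httpd -o pid,rss,vsz,nlwp,etime,cmd --sort=-rss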
child segmentation fault (notice)
still did not exit (SIGTERM) (warning)
still did not exit (SIGTERM) (error)
After those log entries every process is dead except one remaining child that will not stop, and Apache no longer responds at all until we kill -9 it and restart.
I do not have that installed but will definitely look into it, as it may have a similar effect to what I found. The problem I found (well, kind of) is that there is some unknown memory leak. I was looking at another thread which stated that if there is a memory leak in one of the modules and MaxConnectionsPerChild (mpm_event) is set to 0, it will slowly climb and eat all your memory. Sure enough, I changed that from 0 to 4000 (just a guess) and memory has been flatlined ever since. In addition, Apache CPU usage on my xymon graphs used to be all over the place throughout the day, and it has flatlined now too.

It could be the nature of the app behind this reverse proxy: it performs horribly, and there are lots of requests that just time out and never get responded to, so we had to be aggressive with our timeouts in the first place.
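For anyone else hitting this, the change was just the one mpm_event directive. A minimal sketch of what we set (the 4000 is a guess, and where it lives, mpm_event.conf or your main config, depends on your distro):

<IfModule mpm_event_module>
    # 0 means children are never recycled, so a leaky module grows forever.
    # A finite value makes each child exit after serving this many
    # connections; a fresh child replaces it and the leaked memory is freed.
    MaxConnectionsPerChild 4000
</IfModule>

Worth noting that recycling children this way does not fix the leak itself; it just caps how much any one child can accumulate before it is replaced.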
We also found a LOT of LDAP connections not being closed (when Apache was hung we saw over 300 established LDAP connections). While it didn't fix the overall memory leak, we stopped the connections piling up by forcing the LDAP connection and pool to time out after 60s. LDAP seems suspect for the memory leak, but we're not sure.
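If it helps anyone, this is roughly what that looks like, assuming you're using mod_ldap's connection pooling (the 60s values are just what we picked, not a recommendation):

# Socket-level connect timeout, in seconds:
LDAPConnectionTimeout 60
# Timeout for LDAP search and bind operations:
LDAPTimeout 60
# Discard pooled connections that have sat idle longer than this
# (the default keeps them forever):
LDAPConnectionPoolTTL 60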