Today I’m working on performance tuning httpd on the primary production server. Load on it has been totally out of hand this week.
1. Increase MinSpareServers and MaxSpareServers.
This should decrease the time Apache spends spawning new processes to handle requests.
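A sketch of what this looks like in httpd.conf for the prefork MPM (these particular values are illustrative assumptions, not our production numbers; tune to your traffic):

```apache
# Keep a larger pool of idle children around so bursts of
# requests don't force Apache to fork on demand.
MinSpareServers  10
MaxSpareServers  20
```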
2. Set MaxRequestsPerChild to a high value.
I set this (back) to the default of 10000. I think our initial setting was too low, causing httpd to continuously spawn new processes.
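In httpd.conf this is just:

```apache
# Each child serves this many requests before being recycled.
# Too low a value means constant fork/exit churn.
MaxRequestsPerChild 10000
```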
3. Disable server-status
This just causes Apache to keep track of lots of things that it doesn’t need to track.
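Assuming the stock status handler block is what's enabled, disabling it looks roughly like this; turning off ExtendedStatus also stops the per-request bookkeeping:

```apache
# Comment out (or delete) the status handler block:
#<Location /server-status>
#    SetHandler server-status
#</Location>

# Stop collecting detailed per-request statistics.
ExtendedStatus Off
```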
4. Use Apache::SizeLimit to kill off processes if they grow too large.
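A minimal mod_perl sketch of Apache::SizeLimit; the thresholds below are assumptions for illustration, not values we've settled on:

```perl
# In startup.pl (mod_perl 1.x style)
use Apache::SizeLimit;

# Kill a child once its total size exceeds ~150 MB (value is in KB).
$Apache::SizeLimit::MAX_PROCESS_SIZE = 150_000;

# Only check every 5th request to keep the overhead down.
$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;

# Then register the handler in httpd.conf:
#   PerlFixupHandler Apache::SizeLimit
```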
Just came across this interesting mailing list post on profiling memory usage on Linux.
I’ve been trying to ProxyPass from a primary server to a backend (but not firewalled) machine running MediaWiki 1.5.2 with very little luck.
I finally came across the solution. backend.machine.edu is the machine hosting MediaWiki; frontend.machine.edu is the machine acting as the reverse proxy.
1. Proxy machine, httpd.conf
ProxyPass /wiki http://backend.machine.edu/wiki
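It's usually worth pairing ProxyPass with ProxyPassReverse so that redirects issued by the backend get rewritten to point at the frontend (a sketch, not something this setup strictly required):

```apache
ProxyPass        /wiki http://backend.machine.edu/wiki
# Rewrite Location/Content-Location headers in backend responses
ProxyPassReverse /wiki http://backend.machine.edu/wiki
```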
2. On backend.machine.edu edit DefaultSettings.php
This *shouldn’t* be necessary, as this variable can also be set in LocalSettings.php. However, I found that requests early in the cycle were not correctly resolved to the frontend machine.
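Assuming the variable in question is $wgServer (the same one set in step 3 below), the DefaultSettings.php change is just:

```php
// Point MediaWiki's canonical server URL at the proxy frontend
$wgServer = 'http://frontend.machine.edu';
```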
3. Edit LocalSettings.php
Add the (possibly redundant) $wgServer variable.
I also found that the absolute image path for the Wiki Logo was not resolving correctly through the proxy. Correct this (if necessary) by setting $wgLogo. This assumes that your backend machine is not behind a firewall.
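Together, the LocalSettings.php additions might look like this; the logo path is an assumption based on the default MediaWiki skin layout, and it points at the backend directly, hence the no-firewall caveat:

```php
$wgServer = 'http://frontend.machine.edu';

// Logo fetched straight from the backend host (assumed default skin path)
$wgLogo = 'http://backend.machine.edu/wiki/skins/common/images/wiki.png';
```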
4. Edit squid.conf on squid.machine.org to exclude wiki content
I might want to revisit this decision in the future. For now, I’d rather not fill our cache with Wiki material, saving that space for the heavier acedb content.
no_cache deny wiki
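The deny rule needs a matching ACL defined earlier in squid.conf; a minimal sketch (the regex is an assumption about where the wiki content lives in the URL space):

```
acl wiki urlpath_regex ^/wiki
no_cache deny wiki
```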