Find the bottleneck
To find the bottleneck, I first emptied the browser cache and reloaded the page with Firebug running. The Net panel showed the initial page taking 24 seconds to load; after that, the remaining files loaded quickly.
I should have realized right away that this behavior meant the server was hamstrung by a thread limit. Instead, it took me ten minutes to figure out the bottleneck.
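Firebug isn't the only way to see the stall; a rough timing check from the command line tells the same story. This is a sketch, not what I actually ran: the URL is a placeholder, and it assumes curl is installed.

```shell
URL="http://example.com/"  # placeholder; substitute the slow page
# A short connect time paired with a long time to first byte suggests
# the request sat in a queue waiting for a free server thread.
out=$(curl -sf -o /dev/null \
       -w 'connect=%{time_connect}s first-byte=%{time_starttransfer}s total=%{time_total}s' \
       "$URL" || echo "request failed; is the server reachable?")
echo "$out"
```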
Step 1: Cut image quality
Since the new post was my first image-heavy post, I realized I could cut bandwidth consumption in half by compressing images.
ImageMagick's convert tool can shrink images at the command line (-quality takes a value from 1 to 100):
$ convert image.ext -quality 20 image-mini.ext
I wrote a shell script to compress every image in the post, then did a search-and-replace in Emacs to switch every reference over to the -mini images. Page load time dropped from 24 to 12 seconds.
Quick and dirty, yet effective.
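The script was nothing fancy. A minimal sketch, assuming ImageMagick's convert and a flat images/ directory holding the post's JPEGs and PNGs:

```shell
# Compress each image in images/ to a -mini variant at quality 20,
# leaving the originals untouched.
for f in images/*.jpg images/*.png; do
  [ -e "$f" ] || continue              # skip unmatched glob patterns
  mini="${f%.*}-mini.${f##*.}"         # photo.jpg -> photo-mini.jpg
  convert "$f" -quality 20 "$mini"
done
```

Keeping the originals around means the quality setting can be revisited later without re-exporting anything.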
Step 2: Make content static
I use server-side scripts to generate what are actually static pages.
Under heavy load, dynamic content chews up time.
So I scraped the generated HTML out of View Source and dropped it into a static file. Page load times dropped to 6 seconds. Almost bearable!
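View Source and paste works, but the same snapshot can be taken from the shell. A sketch, assuming curl is installed; the URL and filenames are hypothetical:

```shell
URL="http://127.0.0.1/blog/post.php"  # hypothetical dynamic page
OUT="post.html"                        # static file to serve instead
# -f makes curl fail on HTTP errors, so an error page is never saved
# over the static copy.
if curl -sf "$URL" -o "$OUT"; then
  echo "snapshot saved to $OUT"
else
  echo "server unreachable; nothing saved"
fi
```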
Step 3: Add threads in the Apache conf
In Firebug, resources were still arriving in a burst only after the slow initial page load.
On a hunch, I checked my Linode control panel: the CPU utilization graph sat near 3%, and there was plenty of bandwidth to spare.
Suddenly, I remembered that the default Apache configuration sets a low number of processes and threads.
Requests were streaming in and getting queued, waiting for a free thread.
Meanwhile, the CPU was spending 97% of its time doing nothing.
I opened my Apache configuration file, found the mpm_worker_module section, and ramped up processes and threads:
<IfModule mpm_worker_module>
    StartServers          4      # was 2
    MaxClients          600      # was 150
    MinSpareThreads      50      # was 25
    MaxSpareThreads     150      # was 75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
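One caveat, assuming the worker MPM's stock defaults: MaxClients is silently capped at ServerLimit × ThreadsPerChild, and ServerLimit defaults to 16, so a MaxClients of 600 also needs a ServerLimit line to take full effect. A quick arithmetic check with the values from the config above:

```shell
SERVER_LIMIT=16        # Apache's default when the directive is absent
THREADS_PER_CHILD=25
MAX_CLIENTS=600
cap=$((SERVER_LIMIT * THREADS_PER_CHILD))   # effective ceiling
if [ "$MAX_CLIENTS" -gt "$cap" ]; then
  echo "MaxClients $MAX_CLIENTS exceeds cap $cap; add a ServerLimit line"
fi
```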
Page load times fell to two seconds.