Load testing a site with ApacheBench is fairly straightforward. Typically you'd just SSH to a machine on the same network as the one you want to test, and run a command like this:
ab -n 500 -c 50 http://my.web.server/path/to/page/
The -n argument determines the total number of requests to execute, and the -c argument determines the concurrency level, that is, how many requests will be running simultaneously at any given time.
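ab prints a summary to stdout when it finishes, and the headline figure is the "Requests per second" line. Here's a small sketch of pulling that number out of a saved run; the sample text below is an abbreviated, made-up example of ab's summary format, not real benchmark data:

```python
import re

# Abbreviated sample of ab's summary output (values are made up).
sample_output = """\
Concurrency Level:      50
Time taken for tests:   3.500 seconds
Complete requests:      500
Failed requests:        0
Requests per second:    142.86 [#/sec] (mean)
Time per request:       350.000 [ms] (mean)
"""

def requests_per_second(ab_output):
    """Extract the mean requests/sec figure from ab's summary text."""
    match = re.search(r'Requests per second:\s+([\d.]+)', ab_output)
    return float(match.group(1)) if match else None

print(requests_per_second(sample_output))  # 142.86
```

A parser like this comes in handy later, once you have a directory full of saved ab result files to compare.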
For Python and Django web applications, Fabric is a popular tool for deploying code to, and running other commands on, remote servers. It's built in Python, and its simple syntax makes it easy to use as well. For more information and a primer on Fabric, check out the post that Colin Copeland wrote back in 2010, titled Basic Django deployment with virtualenv, fabric, pip and rsync.
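If you haven't used Fabric before, the short version is that tasks are plain Python functions in a fabfile.py, and shared settings live on fabric.api.env, a dict-like object with attribute access. Here's a rough stand-in sketch of how an environment task like the `staging` task used later in this post sets things up; the _Env class imitates Fabric's env object, and the setting names and paths are made up for illustration:

```python
class _Env(dict):
    """Minimal stand-in for fabric.api.env: a dict with attribute access."""
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        self[name] = value

env = _Env()

def staging():
    """Hypothetical environment task; `fab staging ...` runs this first."""
    env.environment = 'staging'
    env.hosts = ['my.web.server']
    env.home = '/home/deploy'
    env.deployment_dir = '/home/deploy/my_site'

staging()
print(env.environment)  # staging
```

In a real fabfile you'd import env from fabric.api instead of defining it; the point is just that running `fab staging some_task` populates env before some_task executes, so every later task can read settings like env.environment off it.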
Running ApacheBench from Fabric is useful because you can easily do other things like customize and update your web server configuration in an automated way. For example, here's a sample template for an Apache server configuration that I upload to our web servers using Fabric:
ServerName %(www_server_name)s
WSGIDaemonProcess my_site-%(environment)s processes=%(process_count)s threads=%(thread_count)s display-name=%%{GROUP}
WSGIProcessGroup my_site-%(environment)s
WSGIScriptAlias / %(apache_root)s/%(environment)s.wsgi
ErrorLog %(log_root)s/wsgi.error.log
LogLevel info
CustomLog %(log_root)s/wsgi.access.log combined
You'll notice the %(var)s-style Python string formatting syntax in the Apache config. These placeholders are populated by Fabric's files.upload_template method when the file is copied to the remote server, based on the variables you pass in via the context argument. Here's a sample Fabric method to upload your Apache configuration to the remote server:
import os

from fabric.api import env, sudo
from fabric.contrib import files


def _join(*items):
    """
    We're deploying to Linux, so hard-code that type of path join here. Using
    os.path.join would not work when deploying from Windows.
    """
    return '/'.join(items)


def apache_graceful():
    sudo('/etc/init.d/apache2 graceful')


def update_apache_conf(process_count=15, thread_count=1):
    env.process_count = process_count
    env.thread_count = thread_count
    for ext in ['conf', 'wsgi']:
        # the templates live locally, so os.path.join is fine here
        source = os.path.join(env.deployment_dir, 'templates',
                              'apache.%s' % ext)
        dest = _join(env.home, 'apache.conf.d',
                     '.'.join([env.environment, ext]))
        # env doubles as the template context
        files.upload_template(source, dest, context=env, mode=0755,
                              use_sudo=True)
    apache_graceful()
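Under the hood, upload_template performs ordinary Python %-style substitution against the dict passed as its context argument. That's also why the config above doubles the percent sign in %%{GROUP}: it escapes a literal % so mod_wsgi's %{GROUP} token survives the substitution. A standalone illustration with made-up values:

```python
# One line from the Apache config template, with %(var)s placeholders
# and a doubled %% escaping a literal percent sign.
template = ('WSGIDaemonProcess my_site-%(environment)s '
            'processes=%(process_count)s threads=%(thread_count)s '
            'display-name=%%{GROUP}')

context = {'environment': 'staging', 'process_count': 10, 'thread_count': 3}
print(template % context)
# WSGIDaemonProcess my_site-staging processes=10 threads=3 display-name=%{GROUP}
```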
Specifying process_count and thread_count in the arguments to update_apache_conf()
means that I can pass those in from the command line, like so:
fab staging update_apache_conf:10,3
This would install an Apache configuration on the server that starts up 10 mod_wsgi
processes with 3 threads each.
Running ApacheBench through Fabric is also easy to do. Here's a slightly more complex example I put together that saves the results in time-stamped folders, whose names also include the number of requests, the concurrency level, and the process and thread counts used for the test:
import datetime

from fabric.api import env, run


def benchmark():
    config = {
        'number': 500,
        'concurrency': 50,
        'url': 'http://my.web.server/path/to/page/',
    }
    # prime the server with a few requests before logging any results
    run('ab -n 10 -c 1 {url}'.format(**config))
    context = dict(env)
    context.update(config)
    context['now'] = datetime.datetime.now().strftime('%Y-%m-%d_%H:%M:%S')
    dir_name = '{now}_n={number},c={concurrency}'
    if 'process_count' in context and 'thread_count' in context:
        dir_name += '_p={process_count},t={thread_count}'
    dir_name = dir_name.format(**context)
    # these are remote (Linux) paths, so use _join rather than os.path.join
    context['test_dir'] = _join('test_runs', dir_name)
    run('mkdir -p {0}'.format(context['test_dir']))
    for x in range(4):
        context['test_file'] = _join(context['test_dir'],
                                     'ab{0}.txt'.format(x))
        run('ab -n {number} -c {concurrency} {url} > '
            '{test_file}'.format(**context))
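The directory-naming logic is easy to sanity-check locally; with made-up values (and the timestamp pinned for the example) it produces names like this:

```python
context = {
    'now': '2014-01-15_10:30:00',   # normally datetime.now().strftime(...)
    'number': 500,
    'concurrency': 50,
    'process_count': 10,
    'thread_count': 3,
}

dir_name = '{now}_n={number},c={concurrency}'
if 'process_count' in context and 'thread_count' in context:
    dir_name += '_p={process_count},t={thread_count}'

print(dir_name.format(**context))
# 2014-01-15_10:30:00_n=500,c=50_p=10,t=3
```

Since each run writes into its own directory named after the test parameters, you can leave a batch of benchmarks running and sort out which results belong to which configuration afterwards.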
You can run these commands together to update the Apache configuration and run a benchmark with a single line from the shell, like so:
fab staging update_apache_conf:10,5 benchmark
This would update the Apache configuration on the remote server, run a few requests to prime the server, and then run the specified ApacheBench test 4 times and save the results in text files in a timestamped directory.
To test lots of different server configurations at once with minimal user interaction, you can further script this by wrapping the above command in a Bash for
loop, like so:
for process_count in {1..76..5}; do fab staging update_apache_conf:$process_count,1 benchmark; done
This command iterates from 1 through 76, in steps of 5 (1, 6, 11, 16 ... 76), sets the Apache configuration to use that number of processes, and runs a separate benchmark for each configuration.
Anyway, that's just a little insight into how one might deploy and test a Python or Django application using Fabric and ApacheBench. Hope you find it helpful!