IOMelt Benchmark Results - August 2012

Rodrigo Campos - camposr@gmail.com - @xinu

This is a set of benchmark results for AWS instances in the South America Region (São Paulo) and US East Region (North Virginia).


Preface: What's wrong with most benchmarks?

The main problem with most benchmarks is that they are extremely hard to reproduce consistently. One reason for this is that standard benchmark procedures measure performance only once.

This is particularly critical in shared multi-tenant environments, because a single run may not capture neighboring tenants' effects and will only show the system's behavior at one point in time. Moreover, since the infrastructure provider may run a variety of management procedures (backups, lifecycle management, etc.), you may see a significant performance impact that is unrelated to your own application.

Proposed solution

As a solution to the aforementioned problem, I scheduled IOMelt to run every five minutes for several days, making it possible to measure the disk I/O performance of the environment over the full duration of the test.

IOMelt was designed from scratch to provide consistent results and has a concise output that makes it easy to generate data that can be used in R, Excel, AWK, and many other data processing tools.
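As a hypothetical illustration of how that concise output feeds into other tools (the record layout below is assumed for the example, not IOMelt's actual format; see the README for the real one), a short AWK pipeline can aggregate per-test averages:

```shell
# Assumed layout: "test-name,elapsed-seconds" per line (hypothetical, for illustration)
printf '%s\n' \
  'serial-read,1.42' \
  'serial-read,1.58' \
  'random-write,3.10' |
awk -F, '{ sum[$1] += $2; n[$1]++ }
         END { for (t in sum) printf "%s avg=%.2f\n", t, sum[t] / n[t] }' |
sort
# → random-write avg=3.10
# → serial-read avg=1.50
```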

Environment

For this benchmark three Amazon Web Services EC2 instances of the following types were used:
  1. m1.small
  2. m1.medium
  3. m1.large
All three instances were created from a standard AWS AMI (Amazon Linux AMI 2012.03) that was updated using yum. Besides installing a compiler (gcc) and the GNU make tool, no other modifications were made to the operating system or its configuration.
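For reference, the preparation steps amount to something like the following (the package names are assumed to be the stock Amazon Linux ones):

```shell
# Update the stock Amazon Linux AMI and install the build tools
sudo yum update -y
sudo yum install -y gcc make
```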

The root volumes were created using the typical EBS configuration.

The filesystem used was ext4. This is the output of mount and dumpe2fs, showing the mounted volumes and the filesystem block size:

[ec2-user@large ~]$ sudo dumpe2fs /dev/xvda1  | egrep -i 'block size'
dumpe2fs 1.42 (29-Nov-2011)
Block size:               4096
[ec2-user@large ~]$ mount
/dev/xvda1 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

IOMelt version 0.3 was compiled and the standard Linux cron was used to schedule it to run every 5 minutes:

[ec2-user@large ~]$ sudo crontab -l 
*/5 * * 8 * /root/iomelt/iomelt -dor >> /root/iomelt.out 2>&1
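For readers unfamiliar with crontab syntax, the five time fields in the entry above break down as follows (the `8` limits execution to August, matching the test window):

```shell
# */5  minute        -> every 5 minutes
# *    hour          -> every hour
# *    day of month  -> every day
# 8    month         -> August only
# *    day of week   -> any day of the week
```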

For a detailed description of the command line options please refer to the IOMelt man page or the README file.

Time Frame and other considerations

Tests were run from August 11th, 2012 through August 21st, 2012 in the South America Region, and from August 21st, 2012 through August 26th, 2012 in the North Virginia Region. The iomelt application was invoked 13,230 times during the tests across both regions.
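That count is roughly what the schedule predicts. Assuming one run every 5 minutes on 3 instances, for about 10 days in SA-EAST-1 plus about 5 days in US-EAST-1 (an approximation, since the boundary days were partial):

```shell
# 24*60/5 = 288 runs/day per instance; 3 instances; ~15 instance-test-days per region pair
echo $(( 24 * 60 / 5 * 3 * (10 + 5) ))
# → 12960, close to the 13,230 invocations actually recorded
```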

For each instance, six charts were generated using R: kernel density of serial reads, kernel density of serial writes, serial read, serial write, random read, and random write.

No mixed read/write tests were performed, as the iomelt version used did not support them at the time.

Results

Jump to (each chart is available per region, SA-EAST-1 and US-EAST-1, for the m1.small, m1.medium, and m1.large instances):

Kernel Density of Serial Read
Kernel Density of Serial Write
Serial Read
Serial Write
Random Read
Random Write



Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.