IOMelt Provisioned IOPS EBS Benchmark Results - December 2012

Rodrigo Campos - camposr@gmail.com - @xinu

This is a set of benchmark results for AWS Provisioned IOPS EBS in the US East Region (North Virginia).

Direct link to the results
Direct link to the raw data and R script used to generate the charts
Please note that this R script differs from the one used for the August benchmark; it was modified to support the two new columns in IOMelt's CSV output.

Introduction:

Since publishing the previous EBS benchmark, several people have asked me to run the same tests using provisioned IOPS EBS volumes.

These tests were performed in the AWS US East region (North Virginia) using an m1.large instance, with 100, 1,000 and 2,000 provisioned IOPS EBS volumes. Each volume was sized at the minimum required for its IOPS level: AWS requires the volume size in gigabytes to be at least one tenth of the configured IOPS.

The objective was to verify whether provisioned IOPS EBS volumes would provide more consistent I/O throughput during the tests.

Environment

For this benchmark an m1.large Amazon Web Services EC2 instance was used.

The instance was created using a standard AWS AMI (Amazon Linux AMI 2012.03) that was updated using yum. Besides installing a compiler (gcc) and GNU make, no other modifications were made to the operating system or its configuration.
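The exact commands aren't in the original post, but on Amazon Linux the update and tool installation boil down to something like this (a sketch, not copied from the original setup):

# Update the base AMI and install the build tools needed to compile IOMelt
sudo yum -y update
sudo yum -y install gcc make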

The root volumes were created using the standard web GUI and the instances were recreated for each test.

The filesystem used was ext4, as it is the default filesystem available in the standard AMI. This is the output of mount and dumpe2fs showing the mounted volumes and the filesystem block size used:

[ec2-user@large ~]$ sudo dumpe2fs /dev/xvda1  | egrep -i 'block size'
dumpe2fs 1.42 (29-Nov-2011)
Block size:               4096
[ec2-user@large ~]$ mount
/dev/xvda1 on / type ext4 (rw,noatime)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
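The post doesn't show the benchmark volumes being formatted, but since ext4 was the filesystem used, preparing an attached EBS volume would look roughly like this (the device name /dev/xvdf and the mount point are hypothetical):

# Create an ext4 filesystem on the attached EBS volume and mount it
# with noatime, matching the root volume's mount options shown above
sudo mkfs.ext4 /dev/xvdf
sudo mkdir -p /mnt/piops
sudo mount -o noatime /dev/xvdf /mnt/piops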

IOMelt version 0.6 was compiled and the standard Linux cron was used to schedule it to run every 5 minutes:

[ec2-user@large ~]$ sudo crontab -l 
*/5 * * * * /usr/bin/iomelt -dor >> /root/iomelt.out 2>&1

For a detailed description of the command line options please refer to the IOMelt man page or the README file.
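The build itself is the usual make workflow; a sketch assuming the standard IOMelt 0.6 source tree (the tarball name is illustrative, and the install path matches the crontab entry above):

# Compile IOMelt 0.6 with gcc and place the binary where cron expects it
tar xzf iomelt-0.6.tar.gz
cd iomelt-0.6
make
sudo cp iomelt /usr/bin/iomelt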

Volume sizes:

Provisioned IOPS    Volume Size
100                 10 GB
1,000               100 GB
2,000               200 GB
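As an illustration of the 1:10 rule, creating the 1,000 IOPS volume with today's AWS CLI would look like this (the AWS CLI postdates this post; the ec2-api-tools were the usual route in 2012, and the availability zone here is hypothetical):

# 1,000 provisioned IOPS requires at least 1000 / 10 = 100 GB
aws ec2 create-volume --availability-zone us-east-1a \
    --volume-type io1 --iops 1000 --size 100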

Time Frame and other considerations

Tests were run from November 11th, 2012 through December 9th, 2012 in the North Virginia region; the iomelt application was invoked 5,925 times over this period.

For each instance, seven charts were generated using R.

This time I've included the mixed random read & write test in the output, but no direct I/O was used, since that feature was not available in IOMelt at the time.
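Because direct I/O was not used, the results include the effect of the Linux page cache, which likely explains why the serial read and reread figures are far above the provisioned IOPS levels. As a quick illustration of the difference (not part of the benchmark itself; the file path is hypothetical):

# Buffered write: goes through the page cache
dd if=/dev/zero of=/mnt/piops/testfile bs=4k count=10000
# Direct write: bypasses the page cache via O_DIRECT
dd if=/dev/zero of=/mnt/piops/testfile bs=4k count=10000 oflag=direct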

Results

Average, maximum, minimum and standard deviation of throughput (calls/s) for each volume:

                    100 IOPS      1,000 IOPS    2,000 IOPS
Serial Read
  Average           1629.318381   10390.332968  12242.171923
  Max               1660.442105   11261.010169  12353.787841
  Min               1506.288970    9513.792326   6133.046127
  Std. Dev.           10.284767     106.315367    411.571650
Serial Write
  Average             33.871499     337.298300    521.570015
  Max                 33.954572     337.832757    576.861737
  Min                 31.972022     324.018144    211.480829
  Std. Dev.            0.112741       0.799008     33.790749
Random Reread
  Average            360.444841    3428.679152   6629.861355
  Max                451.965584    3947.644094   7392.563964
  Min                204.350769    2886.672796   4517.461300
  Std. Dev.           19.000623     153.965392    240.869238
Random Rewrite
  Average             33.923122     535.293446   1175.742541
  Max                 33.989400     628.200049   1319.635270
  Min                 25.871408     503.964500    860.179199
  Std. Dev.            0.212366      34.344166     49.146749
Random Read&Write
  Average             51.964427     623.728976   1460.449224
  Max                 56.470919     727.797680   1687.971809
  Min                 34.151346     424.477151   1201.796753
  Std. Dev.            1.171774      31.908333     57.233391
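The summary statistics above were produced with the linked R script; as a rough shell equivalent, the mean and standard deviation of one measurement column can be computed with awk. The field number 3 here is a placeholder; check the actual IOMelt CSV layout:

# Mean and population standard deviation of comma-separated column 3
awk -F, '{ n++; s += $3; ss += $3 * $3 }
         END { m = s / n; printf "avg=%f sd=%f\n", m, sqrt(ss / n - m * m) }' /root/iomelt.out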

For the sake of simplicity I'll plot only the charts for the 1,000 and 2,000 IOPS volumes; the raw data and R scripts are available if you need to plot the charts for the 100 IOPS data.

[Charts: kernel density plots (serial write and serial read) and scatter plots (serial read, serial write, random reread, random rewrite, random read&write), each for the 1,000 and 2,000 IOPS volumes.]

Conclusions:

Provisioned IOPS EBS volumes clearly offer a huge performance improvement over non-optimized EBS volumes, but as data has to be spread across more underlying disks or systems, the higher-IOPS volumes appear increasingly susceptible to performance fluctuations.
Even so, there is far less deviation than in the previous test results.


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.