Iozone

 
Revision as of 10:32, 20 May 2009

The standard command I use is as follows:

iozone -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1
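
Broken out flag by flag, this is what that invocation asks iozone to do (option meanings per the standard iozone documentation; check iozone -h against your build if in doubt):

        # -s 1024m   use a 1024mb test file
        # -r 32k     use a 32k record (transfer) size
        # -i 0       run the write/rewrite test
        # -i 1       run the read/reread test
        # -i 2       run the random read/random write test
        # -+n        skip the rewrite/reread retest passes
        # -t 1       throughput mode with a single thread
        iozone -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1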

Western Digital

WDC WD1600JD-75H

        Children see throughput for  1 initial writers  =   49564.20 KB/sec
        Children see throughput for  1 readers          =   59590.32 KB/sec
        Children see throughput for 1 random readers    =    8309.17 KB/sec
        Children see throughput for 1 random writers    =   22801.88 KB/sec

WDC WD1200JB-75CRA0

        Children see throughput for  1 initial writers  =   46014.13 KB/sec
        Children see throughput for  1 readers          =   40067.62 KB/sec
        Children see throughput for 1 random readers    =   11244.67 KB/sec
        Children see throughput for 1 random writers    =   16124.12 KB/sec

WDC WD3200JS-60P

        Children see throughput for  1 initial writers  =   70735.41 KB/sec
        Children see throughput for  1 readers          =   61879.04 KB/sec
        Children see throughput for 1 random readers    =   12072.48 KB/sec
        Children see throughput for 1 random writers    =   22518.14 KB/sec

(3 of these drives in LVM)

        Children see throughput for  1 initial writers  =   64858.33 KB/sec
        Children see throughput for  1 readers          =   64913.92 KB/sec
        Children see throughput for 1 random readers    =   33801.48 KB/sec
        Children see throughput for 1 random writers    =   71252.03 KB/sec
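
For context, a minimal sketch of how a three-drive LVM setup like this might be put together. The device names are hypothetical, and the notes above don't say whether the volume was striped or linear, so the striping options below are an assumption:

        # hypothetical device names; substitute the actual disks
        pvcreate /dev/sdb /dev/sdc /dev/sdd
        vgcreate vg_bench /dev/sdb /dev/sdc /dev/sdd
        # -i 3 stripes the logical volume across all three physical volumes
        # (an assumption; a linear volume would simply omit -i and -I)
        lvcreate -i 3 -I 64 -l 100%FREE -n lv_bench vg_bench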

WDC WD5000AAJB-00UHA

This is the drive connected over IEEE 1394 (FireWire), tested on a FAT32 partition:

        Children see throughput for  1 initial writers  =   26731.99 KB/sec
        Children see throughput for  1 readers          =   29232.09 KB/sec
        Children see throughput for 1 random readers    =    8226.97 KB/sec
        Children see throughput for 1 random writers    =   24798.85 KB/sec

Now, the same drive formatted with ext3:

        Children see throughput for  1 initial writers  =   27004.33 KB/sec
        Children see throughput for  1 readers          =   28917.23 KB/sec
        Children see throughput for 1 random readers    =    8041.50 KB/sec
        Children see throughput for 1 random writers    =   11330.02 KB/sec

Finally, in a 2+1 software RAID 5 array formatted with XFS:

        Children see throughput for  1 initial writers  =   18200.69 KB/sec
        Children see throughput for  1 readers          =   35551.36 KB/sec
        Children see throughput for 1 random readers    =    8121.42 KB/sec
        Children see throughput for 1 random writers    =   14548.73 KB/sec

Maxtor

Maxtor 54098U8

        Children see throughput for  1 initial writers  =   25403.75 KB/sec
        Children see throughput for  1 readers          =    5227.56 KB/sec
        Children see throughput for 1 random readers    =    6036.05 KB/sec
        Children see throughput for 1 random writers    =    9144.35 KB/sec

Seagate

ST373405LW

        Children see throughput for  1 initial writers  =   48671.05 KB/sec
        Children see throughput for  1 readers          =   54253.02 KB/sec
        Children see throughput for 1 random readers    =   16231.61 KB/sec
        Children see throughput for 1 random writers    =   42183.82 KB/sec

ST173404LW

        Children see throughput for  1 initial writers  =   30251.11 KB/sec
        Children see throughput for  1 readers          =   28398.42 KB/sec
        Children see throughput for 1 random readers    =   14386.34 KB/sec
        Children see throughput for 1 random writers    =   17114.76 KB/sec

ST336607LC

These numbers are kind of screwed up. I find it hard to believe that a Seagate Cheetah 10k disk only gets this kind of performance. I will probably retire the machine this disk is in and find something a bit better.

        Children see throughput for  1 initial writers  =    4268.85 KB/sec
        Children see throughput for  1 readers          =    3454.83 KB/sec
        Children see throughput for 1 random readers    =    2762.78 KB/sec
        Children see throughput for 1 random writers    =    3499.16 KB/sec

ST380811AS

Some more unbelievable numbers. It wouldn't surprise me if there was some major caching going on: the machine has 8gb of RAM, so the 1gb test file fits entirely in memory.

        Children see throughput for  1 initial writers  =   20867.79 KB/sec
        Children see throughput for  1 readers          = 1616623.50 KB/sec
        Children see throughput for 1 random readers    = 1603349.50 KB/sec
        Children see throughput for 1 random writers    =    5807.17 KB/sec
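
One way to keep the page cache from swamping the numbers is to either test with a file larger than RAM (as in the Samsung run below) or ask iozone for direct I/O. A rough example, assuming the build in use supports the -I option:

        # -I requests direct I/O (O_DIRECT) where the filesystem supports it,
        # bypassing the page cache; alternatively, pick -s well above RAM size
        iozone -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1 -I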

Samsung

HD103UJ

I have eight of these drives in a 7+1 RAID 5 array. This was done via software RAID through a Silicon Image 3132 Serial ATA RAID controller. Yes, I know the controller has RAID capability. Unfortunately, it sucks ass and only supports a maximum of five drives attached to it.

The testbed in this case has the following configuration:

  • Software RAID 5, 7+1 Array
  • 64k Chunks
  • XFS Filesystem
  • Dual AMD Opteron 244

The test was done against a 16gb file in order to get around any sort of caching.
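
For completeness, a rough sketch of how a testbed like this might be assembled and exercised. The device names and mount point are hypothetical, and the record size for the 16gb run is an assumption carried over from the standard command above; only the 7+1 layout, 64k chunk size, XFS filesystem, and 16gb file size come from the description:

        # assemble the eight drives into an eight-device (7+1) software RAID 5
        # array with 64k chunks; device names are hypothetical
        mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=64 /dev/sd[b-i]

        # format the array with XFS and mount it
        mkfs.xfs /dev/md0
        mount /dev/md0 /mnt/array

        # run the standard test against a 16gb file so it cannot fit in cache
        iozone -s 16g -r 32k -i 0 -i 1 -i 2 -+n -t 1 -f /mnt/array/iozone.tmp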