Iozone

From In The Wings
Latest revision as of 11:12, 3 May 2011

The standard command I use is as follows:

iozone -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1
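For reference, here is a breakdown of what each flag in that command does, based on the iozone documentation (verify against `iozone -h` for your version):

```shell
#   -s 1024m   size of the test file (1024 MB)
#   -r 32k     record (transfer) size for each I/O operation
#   -i 0       test 0: write / rewrite
#   -i 1       test 1: read / reread
#   -i 2       test 2: random read / random write
#   -+n        skip the retest (rewrite / reread) passes
#   -t 1       throughput mode with a single process
iozone -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1
```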

Western Digital

WDC WD1600JD-75H

        Children see throughput for  1 initial writers  =   49564.20 KB/sec
        Children see throughput for  1 readers          =   59590.32 KB/sec
        Children see throughput for 1 random readers    =    8309.17 KB/sec
        Children see throughput for 1 random writers    =   22801.88 KB/sec
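All of the result blocks on this page share the same "Children see throughput" line format, so they are easy to extract programmatically. A small helper sketch (not part of iozone itself) that pulls the figures out of a block:

```python
import re

# Matches lines like:
#   Children see throughput for  1 initial writers  =   49564.20 KB/sec
LINE_RE = re.compile(
    r"Children see throughput for\s+\d+\s+(.+?)\s*=\s*([\d.]+)\s*KB/sec"
)

def parse_throughput(block: str) -> dict:
    """Map test name (e.g. 'random readers') -> throughput in KB/sec."""
    return {name.strip(): float(kb) for name, kb in LINE_RE.findall(block)}

sample = """
        Children see throughput for  1 initial writers  =   49564.20 KB/sec
        Children see throughput for  1 readers          =   59590.32 KB/sec
        Children see throughput for 1 random readers    =    8309.17 KB/sec
        Children see throughput for 1 random writers    =   22801.88 KB/sec
"""
results = parse_throughput(sample)
print(results["initial writers"])  # 49564.2
```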

WDC WD1200JB-75CRA0

        Children see throughput for  1 initial writers  =   46014.13 KB/sec
        Children see throughput for  1 readers          =   40067.62 KB/sec
        Children see throughput for 1 random readers    =   11244.67 KB/sec
        Children see throughput for 1 random writers    =   16124.12 KB/sec

WDC WD3200JS-60P

        Children see throughput for  1 initial writers  =   70735.41 KB/sec
        Children see throughput for  1 readers          =   61879.04 KB/sec
        Children see throughput for 1 random readers    =   12072.48 KB/sec
        Children see throughput for 1 random writers    =   22518.14 KB/sec

(3 Drives in LVM)

        Children see throughput for  1 initial writers  =   64858.33 KB/sec
        Children see throughput for  1 readers          =   64913.92 KB/sec
        Children see throughput for 1 random readers    =   33801.48 KB/sec
        Children see throughput for 1 random writers    =   71252.03 KB/sec

WDC WD5000AAJB-00UHA

This drive was tested through an IEEE 1394 (FireWire) connection.

This was also done on a FAT32 partition.

        Children see throughput for  1 initial writers  =   26731.99 KB/sec
        Children see throughput for  1 readers          =   29232.09 KB/sec
        Children see throughput for 1 random readers    =    8226.97 KB/sec
        Children see throughput for 1 random writers    =   24798.85 KB/sec

Now, with an ext3 filesystem installed:

        Children see throughput for  1 initial writers  =   27004.33 KB/sec
        Children see throughput for  1 readers          =   28917.23 KB/sec
        Children see throughput for 1 random readers    =    8041.50 KB/sec
        Children see throughput for 1 random writers    =   11330.02 KB/sec

Finally, in a software RAID 5 array (2+1 configuration), formatted with XFS:

        Children see throughput for  1 initial writers  =   18200.69 KB/sec
        Children see throughput for  1 readers          =   35551.36 KB/sec
        Children see throughput for 1 random readers    =    8121.42 KB/sec
        Children see throughput for 1 random writers    =   14548.73 KB/sec

WDC WD20EADS-00R

I have eight of these drives in a 7+1 RAID 5 array. This was done via software RAID through a Tempo SATA X4P Serial ATA host controller for PCI-X.

The testbed in this case has the following statistics:

  • Software RAID 5, 7+1 Array
  • 64k Chunks
  • XFS Filesystem
  • Dual Intel Xeon, 3.2 GHz

The test was run against an 8 GB file in order to get around any caching.

[Image: Linx32array.png]
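The exact commands used to assemble this array are not recorded on the page, but a 7+1 software RAID 5 array with 64k chunks and XFS could be created along these lines (device names and mount point are hypothetical):

```shell
# Hypothetical reconstruction -- actual device names are not recorded here.
mdadm --create /dev/md0 \
    --level=5 \
    --raid-devices=8 \
    --chunk=64 \
    /dev/sd[b-i]            # eight drives: 7 data + 1 parity

mkfs.xfs /dev/md0           # XFS filesystem, as noted in the testbed list
mount /dev/md0 /mnt/bench   # hypothetical mount point for the test runs
```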

Maxtor

Maxtor 54098U8

        Children see throughput for  1 initial writers  =   25403.75 KB/sec
        Children see throughput for  1 readers          =    5227.56 KB/sec
        Children see throughput for 1 random readers    =    6036.05 KB/sec
        Children see throughput for 1 random writers    =    9144.35 KB/sec

Seagate

ST31000520AS

These are 1 TB SATA 3 Gb/s drives with 32 MB of cache, running at 5900 RPM (green drives).

They are installed in a Falcon III storage bay with a 3 Gb/s Fibre Channel connection.

  • RAID 6 Configuration (6+2)
        Children see throughput for  1 initial writers  =  122314.44 KB/sec
        Children see throughput for  1 readers          =  146547.94 KB/sec
        Children see throughput for 1 random readers    =   42988.03 KB/sec
        Children see throughput for 1 random writers    =   79221.51 KB/sec
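As a sanity check on the 6+2 layout above, the usable capacity of a RAID 6 array can be computed by subtracting the two parity drives (a generic calculation, not specific to this enclosure):

```python
# RAID 6 dedicates two drives' worth of capacity to parity per stripe,
# so an n-drive array yields (n - 2) drives of usable space.
def raid6_usable_tb(total_drives: int, drive_tb: float) -> float:
    if total_drives < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (total_drives - 2) * drive_tb

print(raid6_usable_tb(8, 1.0))  # 6+2 array of 1 TB drives -> 6.0
```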

ST373405LW

        Children see throughput for  1 initial writers  =   48671.05 KB/sec
        Children see throughput for  1 readers          =   54253.02 KB/sec
        Children see throughput for 1 random readers    =   16231.61 KB/sec
        Children see throughput for 1 random writers    =   42183.82 KB/sec

ST173404LW

        Children see throughput for  1 initial writers  =   30251.11 KB/sec
        Children see throughput for  1 readers          =   28398.42 KB/sec
        Children see throughput for 1 random readers    =   14386.34 KB/sec
        Children see throughput for 1 random writers    =   17114.76 KB/sec

ST336607LC

These numbers look off. I find it hard to believe that a 10k RPM Seagate Cheetah only gets this kind of performance. I will probably retire the machine this disk is in and find something a bit better.

        Children see throughput for  1 initial writers  =    4268.85 KB/sec
        Children see throughput for  1 readers          =    3454.83 KB/sec
        Children see throughput for 1 random readers    =    2762.78 KB/sec
        Children see throughput for 1 random writers    =    3499.16 KB/sec

ST380811AS

Some more unbelievable numbers. It would not surprise me if major caching were going on: the machine has 8 GB of RAM, so the 1 GB test file fits entirely in memory.

        Children see throughput for  1 initial writers  =   20867.79 KB/sec
        Children see throughput for  1 readers          = 1616623.50 KB/sec
        Children see throughput for 1 random readers    = 1603349.50 KB/sec
        Children see throughput for 1 random writers    =    5807.17 KB/sec
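If the goal is to measure the disk rather than the page cache, a few common mitigations (the `-I` flag is documented iozone behavior; the drop_caches step is a Linux-specific assumption about the test host):

```shell
# Option 1: have iozone bypass the page cache with O_DIRECT
iozone -I -s 1024m -r 32k -i 0 -i 1 -i 2 -+n -t 1

# Option 2: use a test file well beyond RAM (here, larger than 8 GB)
iozone -s 16g -r 32k -i 0 -i 1 -i 2 -+n -t 1

# Option 3: flush the page cache between runs (Linux, requires root)
sync && echo 3 > /proc/sys/vm/drop_caches
```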

Samsung

HD103UJ

I have eight of these drives in a 7+1 RAID 5 array. This was done via software RAID through a Silicon Image 3132 Serial ATA RAID controller. Yes, I know the controller has RAID capability; unfortunately, it performs poorly and only supports a maximum of five drives attached to it.

The testbed in this case has the following statistics:

  • Software RAID 5, 7+1 Array
  • 64k Chunks
  • XFS Filesystem
  • Dual AMD Opteron 244

The test was run against a 16 GB file in order to get around any caching.

[Image: 7-1raidstats.png]