
UFS & ZFS in Virtual Machine vs Ext3 Physical File Benchmarks

1 October 2007


Some really bizarre results here that just blew my mind. Check out the “Rate” column for both random writes and random reads and compare Ext3 on Ubuntu against the Nexenta VM for both UFS and ZFS (for random reads, UFS is a bit quicker than ZFS at one thread, though ZFS pulls ahead overall). I am not sure how virtualisation could actually improve random reads and writes, but it’s a pretty massive difference. After my small play with Nexenta, it’s looking quite promising as my primary desktop OS.

Here’s a tiobench run of Ubuntu 7.04 on a Seagate ST3320620AS 320GB SATA drive. In the tables below, file sizes are in MB, block sizes in bytes, rates in MB/s and latencies in milliseconds.

Sequential Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
2.6.20-16-generic              1792  4096   1   54.81 9.103%     0.070      370.19  0.00000  0.00000    602
2.6.20-16-generic              1792  4096   2   44.26 13.23%     0.175      791.39  0.00000  0.00000    334
2.6.20-16-generic              1792  4096   4   49.33 31.25%     0.306     1187.43  0.00000  0.00000    158
2.6.20-16-generic              1792  4096   8   47.55 59.80%     0.613     1626.75  0.00000  0.00000     79

Random Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
2.6.20-16-generic              1792  4096   1    0.51 0.771%     7.648      136.11  0.00000  0.00000     66
2.6.20-16-generic              1792  4096   2    0.52 1.272%    15.015      155.48  0.00000  0.00000     41
2.6.20-16-generic              1792  4096   4    0.55 2.649%    27.618      572.07  0.00000  0.00000     21
2.6.20-16-generic              1792  4096   8    0.55 5.843%    53.622     1032.24  0.00000  0.00000      9

Sequential Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
2.6.20-16-generic              1792  4096   1   32.42 17.40%     0.116     5364.25  0.00109  0.00000    186
2.6.20-16-generic              1792  4096   2   45.88 58.74%     0.158     4584.56  0.00044  0.00000     78
2.6.20-16-generic              1792  4096   4   43.31 105.3%     0.325     4418.34  0.00196  0.00000     41
2.6.20-16-generic              1792  4096   8   41.73 198.6%     0.654     6986.02  0.00763  0.00000     21

Random Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
2.6.20-16-generic              1792  4096   1    2.00 0.972%     0.257       15.78  0.00000  0.00000    206
2.6.20-16-generic              1792  4096   2    1.92 2.064%     0.503      150.80  0.00000  0.00000     93
2.6.20-16-generic              1792  4096   4    1.59 3.425%     1.691      332.46  0.00000  0.00000     47
2.6.20-16-generic              1792  4096   8    1.78 6.911%     1.178      546.28  0.00000  0.00000     26
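If you want to crunch these numbers rather than eyeball them, the rows above can be pulled apart with a few lines of Python. This is a quick sketch of my own (the field names are mine, not tiobench's), assuming the whitespace-separated column order the report uses:

```python
# Parse a whitespace-separated tiobench result row into a dict.
# Column order follows the report above: identifier, file size (MB),
# block size (bytes), threads, rate (MB/s), CPU%, avg latency (ms),
# max latency (ms), % of latencies > 2s, % > 10s, CPU efficiency.
FIELDS = ("identifier", "file_mb", "blk_bytes", "threads", "rate_mbs",
          "cpu_pct", "avg_lat_ms", "max_lat_ms", "pct_gt_2s", "pct_gt_10s", "eff")

def parse_row(line):
    parts = line.split()
    row = dict(zip(FIELDS, parts))
    for key in FIELDS[1:]:                  # everything but the identifier is numeric
        row[key] = float(row[key].rstrip("%"))
    return row

row = parse_row("2.6.20-16-generic 1792 4096 1 54.81 9.103% 0.070 370.19 0.00000 0.00000 602")
print(row["rate_mbs"])   # sequential read rate at 1 thread
```

Nothing clever, but it makes comparing runs a lot less error-prone than squinting at columns.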

Now, running inside the same physical machine in a virtual machine under VMware Server 1.0.3-1, I have SunOS sun1 5.10 NexentaOS_20070402 i86pc i386 i86pc (Nexenta, which is Solaris-based), with all ‘apt-get upgrade’ updates applied as of today.

The vmdk files live on the same physical hard drive as the benchmark above, and obviously the two sets of benchmarks were not run at the same time.

First, the UFS volume, mounted at /

Sequential Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   69.05 51.46%     0.055      280.48  0.00000  0.00000    134
5.10                            512  4096   2  110.90 157.3%     0.065      280.11  0.00000  0.00000     70
5.10                            512  4096   4  113.98 318.0%     0.120      390.12  0.00000  0.00000     36
5.10                            512  4096   8  127.42 715.1%     0.214      725.61  0.00000  0.00000     18

Random Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   51.53 45.93%     0.073      154.57  0.00000  0.00000    112
5.10                            512  4096   2   55.09 82.72%     0.089       97.44  0.00000  0.00000     67
5.10                            512  4096   4  102.26 98.82%     0.036        0.90  0.00000  0.00000    103
5.10                            512  4096   8  121.49 96.99%     0.029        0.24  0.00000  0.00000    125

Sequential Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   37.51 33.65%     0.097      890.04  0.00000  0.00000    111
5.10                            512  4096   2   87.13 113.1%     0.077      510.31  0.00000  0.00000     77
5.10                            512  4096   4   77.77 195.4%     0.154      604.28  0.00000  0.00000     40
5.10                            512  4096   8   89.91 446.0%     0.283      924.70  0.00000  0.00000     20

Random Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1    3.39 25.89%     1.123      180.35  0.00000  0.00000     13
5.10                            512  4096   2   12.12 72.10%     0.366      509.51  0.00000  0.00000     17
5.10                            512  4096   4   10.33 93.73%     0.786      340.37  0.00000  0.00000     11
5.10                            512  4096   8   15.49 299.5%     1.254      428.83  0.00000  0.00000      5

And here’s the home directory, which is on ZFS:

Sequential Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1  128.28 96.71%     0.030       36.40  0.00000  0.00000    133
5.10                            512  4096   2  127.90 188.5%     0.056      200.39  0.00000  0.00000     68
5.10                            512  4096   4  130.20 360.9%     0.108      380.10  0.00000  0.00000     36
5.10                            512  4096   8  130.67 689.6%     0.211      640.09  0.00000  0.00000     19

Random Reads
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   74.75 71.35%     0.043       17.40  0.00000  0.00000    105
5.10                            512  4096   2  112.29 99.28%     0.032        1.29  0.00000  0.00000    113
5.10                            512  4096   4  105.33 204.9%     0.074       92.12  0.00000  0.00000     51
5.10                            512  4096   8  125.68 98.95%     0.028        0.23  0.00000  0.00000    127

Sequential Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   99.39 67.10%     0.034      569.68  0.00000  0.00000    148
5.10                            512  4096   2   86.28 118.8%     0.066      558.60  0.00000  0.00000     73
5.10                            512  4096   4   76.45 201.4%     0.121      760.74  0.00000  0.00000     38
5.10                            512  4096   8  108.75 558.3%     0.229      570.08  0.00000  0.00000     19

Random Writes
                              File   Blk   Num                   Avg      Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate  (CPU%)   Latency   Latency      >2s      >10s   Eff
---------------------------- ------ ----- ---  ------ ------ --------- ----------- -------- --------  -----
5.10                            512  4096   1   13.72 56.60%     0.263       49.72  0.00000  0.00000     24
5.10                            512  4096   2   12.34 93.16%     0.558      282.44  0.00000  0.00000     13
5.10                            512  4096   4   13.88 183.4%     0.920      350.58  0.00000  0.00000      8
5.10                            512  4096   8   16.86 288.1%     1.027      510.47  0.00000  0.00000      6
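To put the “massive difference” in actual numbers: taking the single-thread random read rates straight from the tables above, the VM filesystems come out roughly two orders of magnitude ahead of Ext3 on bare metal. A 7200rpm SATA disk physically cannot do 50+ MB/s of random reads, so this is presumably host and guest caching at work rather than the spindle itself. A quick back-of-the-envelope check:

```python
# Single-thread random read rates (MB/s), copied from the tables above.
rates = {"ext3 (physical)": 0.51, "UFS (VM)": 51.53, "ZFS (VM)": 74.75}

baseline = rates["ext3 (physical)"]
for fs, rate in rates.items():
    # Ratio of each filesystem's rate to the bare-metal Ext3 baseline.
    print(f"{fs}: {rate:.2f} MB/s ({rate / baseline:.0f}x ext3)")
```

That works out to roughly 101x for UFS and 147x for ZFS, which is far too big a gap to be anything the disk is doing on its own.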

Now, if I were doing a Masters or a PhD, I would run these more than five times and take the average to make sure my results were accurate, but I just don’t have the time, and nobody is publishing or referencing benchmarks of these little SATA disks anyway. If I get the chance to install Nexenta on an ESX server that’s also running SUSE 10.1, I’d like to do some VM-to-VM comparisons to see the performance difference between the two.
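Averaging repeated runs is cheap to script, though. Something like this would do, assuming you have already collected the rate column from a handful of runs (the numbers here are made up for illustration, varied around the single-thread sequential read figure above):

```python
from statistics import mean, stdev

# Hypothetical rate measurements (MB/s) from five repeated tiobench runs.
runs = [54.81, 53.90, 55.12, 54.40, 54.95]

# Report the mean and the sample standard deviation, so you can see
# whether any one run was an outlier.
print(f"mean = {mean(runs):.2f} MB/s, stdev = {stdev(runs):.2f}")
```

Even without proper statistics, the standard deviation alone would show whether a single run was a fluke.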

Anyway, Nexenta is looking ubercool. In its most basic form, think of being able to bolt ZFS, DTrace, Zones and Containers onto Ubuntu and you have... Nexenta (well, almost). I have been waiting for this ever since I read the announcement that Sun was open-sourcing Solaris. Huge kudos to the guys over at Nexenta, who give me RAID-Z fantasies.

