This is the command used to start one of the affected Linux guests:
/usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name ubuntu -uuid
9de5914b-f448-cb8d-066f-ec51286c80c0 -chardev
socket,id=monitor,path=/var/lib/libvirt/qemu/ubuntu.monitor,server,nowait
-monitor chardev:monitor -boot c -drive
file=/datahauz/kvm/ubuntu.qemu,if=virtio,index=0,boot=on,format=qcow2 -drive
if=ide,media=cdrom,index=2 -net
nic,macaddr=52:54:00:12:34:56,vlan=0,name=nic.0 -net
tap,fd=39,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0
-parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus
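For comparison purposes, note the guest disc here is qcow2 on virtio with the default cache mode. One knob worth trying (just a sketch, not a known fix; qemu-kvm 0.12 accepts a cache= suboption on -drive with values like writethrough, writeback, and none) is an explicit cache mode:
/usr/bin/kvm ... -drive file=/datahauz/kvm/ubuntu.qemu,if=virtio,index=0,boot=on,format=qcow2,cache=none ...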
The SCP file transfers (Linux guest) were tested against both the guest disc
and the guest's /dev/shm. Pushing a file through the network stack to
/dev/shm was faster than copying a file from the guest disc to /dev/shm,
which at least tells us the guest machine can handle more throughput than
the disc delivers. I just wanted to make sure the bottleneck was confined to
the disc.
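For anyone reproducing the comparison, it amounted to something like the following (the hostname, user, and file names are placeholders, not the exact commands I ran):
scp bigfile user@guest:/dev/shm/     # network -> tmpfs, bypasses the guest disc
scp bigfile user@guest:/home/user/   # network -> guest disc
cp /home/user/bigfile /dev/shm/      # guest disc -> tmpfs, run inside the guest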
FYI, the disc throughput on the Linux guests is poor but at least usable
(though not in a production environment). The throughput on Windows guests
is 1/10th that speed.
For all tests, I verified results by at least two methods.
Under Windows guests, I performed file copies to/from the guest disc, FTP
transfers of large files to the guest disc, and ran IOmeter. All three tests
gave nearly identical results: ~2 MB/s read and ~2 MB/s write, ±500 KB/s.
Under Linux guests, I performed file copies to/from the guest disc, scp
transfers of large files to/from the guest disc, and ran Bonnie++ (results
follow). I also created >2 GB files using 'dd' from both /dev/zero and /dev/mem.
Bonnie++
Linux Guest:
bens@webtest:~$ time dd if=/dev/zero of=1g bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 67.6536 s, 15.1 MB/s
real 1m7.663s
user 0m0.080s
sys 0m1.510s
And again to a raw device without a file system:
root@webtest:~# time dd if=/dev/zero of=/dev/vdb bs=4096 count=400000
400000+0 records in
400000+0 records out
1638400000 bytes (1.6 GB) copied, 66.2002 s, 24.7 MB/s
real 1m6.208s
user 0m0.070s
sys 0m1.750s
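(If someone wants to factor the guest page cache out of these dd numbers, GNU dd can open the target with O_DIRECT; I haven't rerun everything this way, so treat it as a suggestion rather than a result:
dd if=/dev/zero of=/dev/vdb bs=4096 count=400000 oflag=direct
Small direct writes can be slow for their own reasons, so a larger bs is worth trying too.)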
bens@webtest:~$ bonnie++
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
webtest          1G 12754  17 11505   2 18583   3 65015  93 500392  53 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
webtest,1G,12754,17,11505,2,18583,3,65015,93,500392,53,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
Linux Host:
bens@octillion:~$ time dd if=/dev/zero of=1g bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 5.2139 s, 196 MB/s
real 0m5.412s
user 0m0.080s
sys 0m2.800s
bens@octillion:~$ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
octillion       10G   161  97 76547  18 42493  12  1976  96 114159  10 262.1   3
Latency               147ms    4926ms   16159ms   21546us     176ms     198ms
Version  1.96       ------Sequential Create------ --------Random Create--------
octillion           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 15145  28 +++++ +++ 22692  29 18729  30 +++++ +++ 25217  30
Latency               695us     749us     577us     728us      60us     131us
1.96,1.96,octillion,1,1269961838,10G,,161,97,76547,18,42493,12,1976,96,114159,10,262.1,3,16,,,,,15145,28,+++++,+++,22692,29,18729,30,+++++,+++,25217,30,147ms,4926ms,16159ms,21546us,176ms,198ms,695us,749us,577us,728us,60us,131us
Keep in mind, Linux guest disc performance is 10x better than Windows guest
disc performance. I know the results are a bit scattered, since I don't
currently have the ability to do raw write tests against my RAID, but either
way, all real-world disc tests in the guests are consistent regardless of
the host disc (with the exception of /dev/shm).
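For what it's worth, a nondestructive raw read test is still possible even with data on the array (the md device name below is assumed; substitute the real one):
hdparm -t /dev/md0
It doesn't answer the raw write question, but it would at least bound the host's raw read throughput.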
In summary, all reported tests were run on the same hardware and under
similar conditions (as similar as possible).
To test this, perhaps someone else could try installing a Windows XP guest.
My install took most of a day to complete on a quad-core 2.4 GHz, 5 GB RAM
system, on top of several rather large RAIDs.
The host RAIDs are mdraid0 and mdraid5, both with LVM on top. I have also
tested against a single SATA2 7500RPM disc with identical results.
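For anyone who wants the exact array layout, it can be dumped with the standard tools (output omitted here):
cat /proc/mdstat
pvs; vgs; lvs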