Poor insert performance from xt_flush_indices on IO bound tests
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
PBXT | Fix Committed | High | Paul McCullagh |
Bug Description
I am running an IO bound performance test -- iibench from https:/
Stack fragments (truncated): pwrite64, ha_pbxt, mysql_parse
il_apply_log has a loop where it reads a log entry and then performs a write. The IO is done sequentially.
Can il_apply_log be changed to use async IO (simulated or real) and then block at function end for the writes to complete? Async IO would also allow for some amount of write reordering. writev() is another option instead of async IO.
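To make the idea concrete, here is a minimal sketch of the batched approach, assuming a hypothetical log_rec layout and apply_batch() helper (this is not PBXT code): every write in a batch is queued with POSIX AIO and the function blocks only once, at the end, via lio_listio(LIO_WAIT).

```c
/* Sketch only: log_rec and apply_batch() are hypothetical, not PBXT code. */
#include <aio.h>
#include <string.h>
#include <sys/types.h>

#define MAX_BATCH 64

struct log_rec {
    off_t  offset;   /* where the record goes in the data file */
    size_t size;     /* payload length */
    char  *data;     /* payload bytes already read from the log */
};

/* Issue every write in the batch at once, then block until all complete. */
int apply_batch(int data_fd, struct log_rec *recs, int count)
{
    struct aiocb  cbs[MAX_BATCH];
    struct aiocb *list[MAX_BATCH];

    if (count < 1 || count > MAX_BATCH)
        return -1;

    memset(cbs, 0, sizeof(cbs));
    for (int i = 0; i < count; i++) {
        cbs[i].aio_fildes     = data_fd;
        cbs[i].aio_offset     = recs[i].offset;
        cbs[i].aio_buf        = recs[i].data;
        cbs[i].aio_nbytes     = recs[i].size;
        cbs[i].aio_lio_opcode = LIO_WRITE;
        list[i] = &cbs[i];
    }

    /* Submit the whole batch and wait here, instead of doing one pwrite64()
     * per record; the writes can then overlap and the I/O scheduler is free
     * to reorder and merge them. */
    if (lio_listio(LIO_WAIT, list, count, NULL) != 0)
        return -1;

    for (int i = 0; i < count; i++) {
        if (aio_return(&cbs[i]) != (ssize_t) recs[i].size)
            return -1;    /* short or failed write */
    }
    return 0;
}
```

On glibc this links against -lrt, and the usable batch size is bounded by AIO_LISTIO_MAX, so the 64 above is only a placeholder.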
This uses the latest PBXT code from Launchpad.
The test server has 2 disks in SW RAID 0.
my.cnf parameters are:
pbxt_index_
pbxt_record_
pbxt_checkpoint
pbxt_data_
pbxt_row_
pbxt_data_
pbxt_log_
pbxt_log_
iostat output with 10-second intervals during the slowdown:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
md0 0.00 0.00 130.06 141.83 3586.21 8086.59 42.93 0.00 0.00 0.00 0.00
md0 0.00 0.00 125.00 165.30 4252.00 3684.90 27.34 0.00 0.00 0.00 0.00
md0 0.00 0.00 138.20 144.60 3545.60 3274.80 24.12 0.00 0.00 0.00 0.00
md0 0.00 0.00 166.20 2.60 5393.60 36.50 32.17 0.00 0.00 0.00 0.00
md0 0.00 0.00 106.60 366.30 4104.00 8722.10 27.12 0.00 0.00 0.00 0.00
md0 0.00 0.00 159.50 79.50 5340.00 2180.70 31.47 0.00 0.00 0.00 0.00
md0 0.00 0.00 118.70 197.70 4200.80 4439.80 27.31 0.00 0.00 0.00 0.00
md0 0.00 0.00 122.30 265.30 4229.60 6319.30 27.22 0.00 0.00 0.00 0.00
md0 0.00 0.00 156.50 62.40 5316.80 1444.60 30.89 0.00 0.00 0.00 0.00
md0 0.00 0.00 134.20 176.60 5137.60 4368.30 30.59 0.00 0.00 0.00 0.00
md0 0.00 0.00 113.60 240.60 4160.00 5754.00 27.99 0.00 0.00 0.00 0.00
md0 0.00 0.00 129.90 218.70 4290.40 5226.50 27.30 0.00 0.00 0.00 0.00
md0 0.00 0.00 157.60 58.10 6137.60 1399.10 34.94 0.00 0.00 0.00 0.00
md0 0.00 0.00 174.60 9.30 5460.80 188.10 30.72 0.00 0.00 0.00 0.00
md0 0.00 0.00 75.40 492.00 1732.80 11715.50 23.70 0.00 0.00 0.00 0.00
md0 0.00 0.00 133.97 166.23 4317.28 3986.71 27.66 0.00 0.00 0.00 0.00
md0 0.00 0.00 132.50 162.10 4308.00 3905.90 27.88 0.00 0.00 0.00 0.00
md0 0.00 0.00 163.20 29.40 5365.60 841.60 32.23 0.00 0.00 0.00 0.00
Hi Mark,
Thanks for the great bug report.
Doing that I/O in parallel looks like a good idea. The disk is totally underutilized.
According to the iostat output, the disk utilization (%util) is zero, and so are the wait times (await and svctm). By issuing a lot of writes in parallel we will probably be able to use the disk capacity better.
I will give it a try...
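For reference, a minimal sketch of the writev() alternative mentioned in the report, again with hypothetical names and not PBXT code: on Linux, pwritev() can gather a run of log records whose target offsets are contiguous into a single system call.

```c
/* Sketch only: hypothetical names, mirrors the log_rec used in the earlier sketch. */
#define _GNU_SOURCE          /* pwritev() */
#include <sys/uio.h>
#include <sys/types.h>

#define MAX_IOV 64

struct log_rec {
    off_t  offset;
    size_t size;
    char  *data;
};

/* Coalesce each run of contiguous records into one gathered write. */
int apply_batch_gather(int data_fd, struct log_rec *recs, int count)
{
    int i = 0;

    while (i < count) {
        struct iovec iov[MAX_IOV];
        off_t start = recs[i].offset;
        off_t next  = start;
        int   n     = 0;

        /* Extend the run while the next record follows on directly. */
        while (i < count && n < MAX_IOV && recs[i].offset == next) {
            iov[n].iov_base = recs[i].data;
            iov[n].iov_len  = recs[i].size;
            next += (off_t) recs[i].size;
            n++;
            i++;
        }

        if (pwritev(data_fd, iov, n, start) < 0)
            return -1;
    }
    return 0;
}
```

Unlike the AIO batch, this only saves system calls for records that happen to be adjacent in the data file; scattered records still go out one call at a time, so it complements rather than replaces issuing the writes in parallel.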