ZFS on Xeon D-1537 with Proxmox

I installed Proxmox 4.1-22 on my Xeon D-1537 system to benchmark ZFS on this hardware. I then installed Windows Server 2016 TP4 in a KVM virtual machine on this build and benchmarked the storage from inside the VM using ATTO and diskspd.
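
The pool layout isn't recorded here. For reference, the vdev topology and the ZFS properties most relevant to these benchmarks can be checked on the Proxmox host with the standard tools (a sketch; the pool name rpool is the Proxmox installer's default and is an assumption):

# Show vdev topology and health
zpool status rpool
# Show capacity and basic pool statistics
zpool list rpool
# Compression and recordsize both influence the write numbers below
zfs get compression,recordsize rpool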

ATTO

[ATTO benchmark screenshots: 2016-04-16-proxmox-01 through 2016-04-16-proxmox-03]

Diskspd

Large area random concurrent writes of 64KB blocks

diskspd -c2G -w -b64K -F8 -r -o32 -d10 -h testfile.dat
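
For reference, the flags in this run break down as follows (meanings per the DiskSpd documentation; the bare -w appears in older examples, and builds that expect a write percentage would use -w100):

# -c2G   create a 2 GB test file (testfile.dat)
# -w     issue writes (equivalent to -w100 on builds that take a percentage)
# -b64K  64 KB I/O blocks
# -F8    8 threads in total against the target
# -r     random offsets
# -o32   32 outstanding I/Os per thread
# -d10   run for 10 seconds
# -h     disable software caching and hardware write caching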

Total IO

thread            |       bytes |   I/Os |    MB/s | I/O per s | file
total (8 threads) | 35203448832 | 537162 | 3242.57 |  51881.09 | 2GB

Large area sequential concurrent writes of 64KB blocks

diskspd -c2G -w -b64K -F8 -T1b -s8b -o32 -d10 -h testfile.dat
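
This is the same 8-thread run, but with -r replaced by a striding pattern so the threads cover the file sequentially without overlapping each other:

# -T1b  offset each thread's starting position by 1 block (thread n starts at block n)
# -s8b  each thread advances 8 blocks per I/O, interleaving with the other 7 threads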

Total IO

thread            |       bytes |   I/Os |    MB/s | I/O per s | file
total (8 threads) | 54970941440 | 838790 | 3193.67 |  51098.78 | 2GB

Large area sequential serial writes of 64KB blocks

diskspd -c2G -w -b64K -o1 -d10 -h testfile.dat
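
With no -F, -t, or -r, this is a single thread writing sequentially, and the only new flag makes it strictly serial:

# -o1  queue depth of 1: each write must complete before the next is issued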

Total IO

thread |      bytes |   I/Os |   MB/s | I/O per s | file
total  | 6997934080 | 106780 | 667.37 |  10677.96 | 2GB

Sequential large I/O writes

diskspd -w100 -d600 -W300 -b512K -t1 -o4 -h -L -Z1M -c64G testfile.dat
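
This is the long, sustained run the conclusion leans on; the flags not seen above are:

# -w100  100% writes
# -d600  measure for 600 seconds
# -W300  warm up for 300 seconds before measuring
# -b512K 512 KB I/O blocks
# -t1    one thread per target
# -L     collect per-I/O latency statistics (the AvgLat/LatStdDev columns below)
# -Z1M   use a 1 MB write-source buffer filled with random data, which keeps
#        ZFS compression from shrinking the writes
# -c64G  create a 64 GB test file, large enough to spill past RAM-based caching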

Total IO

thread |       bytes |  I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
total  | 41702391808 | 79541 | 66.28 |    132.57 | 30.168 |    46.099 | 64GB

Reads

diskspd -b4K -d60 -h -o128 -t32 -si -c50G testfile.dat
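
With no -w this run is 100% reads; the flags not seen above are:

# -o128 128 outstanding I/Os per thread
# -t32  32 threads against the target
# -si   sequential access through a shared, interlocked offset, so the threads
#       collectively read the file once in order rather than 32 times in parallel
# -c50G create a 50 GB test file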

Total IO

thread             |       bytes |    I/Os |   MB/s | I/O per s | file
total (32 threads) | 13471064064 | 3288834 | 214.11 |  54813.08 | 50GB

SQL OLTP-type workloads

diskspd -b8K -d2 -h -L -o4 -t4 -r -w20 -Z1G -c50G testfile.dat
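
The mix here approximates an OLTP database; the flags not seen above are:

# -b8K  8 KB blocks, matching the SQL Server page size
# -d2   run for only 2 seconds
# -w20  20% writes / 80% reads
# -Z1G  1 GB randomized write-source buffer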

Total IO

thread            |     bytes |  I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
total (4 threads) | 126582784 | 15452 | 59.91 |   7667.86 |  2.082 |     5.453 | 50GB

Read IO

thread            |     bytes |  I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
total (4 threads) | 101990400 | 12450 | 48.27 |   6178.16 |  0.133 |     0.222 | 50GB

Write IO

thread            |    bytes | I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
total (4 threads) | 24592384 | 3002 | 11.64 |   1489.70 | 10.166 |     8.469 | 50GB

Response time

%-ile   | Read (ms) | Write (ms) | Total (ms)
min     |     0.043 |      1.919 |      0.043
25th    |     0.093 |      3.876 |      0.097
50th    |     0.115 |      6.803 |      0.131
75th    |     0.153 |     15.196 |      0.205
90th    |     0.192 |     22.814 |      6.321
95th    |     0.216 |     28.896 |     15.087
99th    |     0.280 |     34.100 |     28.732
3-nines |     0.882 |     35.121 |     34.550
4-nines |    12.319 |     35.213 |     35.176
5-nines |    19.003 |     35.213 |     35.213
6-nines |    19.003 |     35.213 |     35.213
7-nines |    19.003 |     35.213 |     35.213
8-nines |    19.003 |     35.213 |     35.213
max     |    19.003 |     35.213 |     35.213

Conclusion

I’m disappointed with the write performance. Since I plan to use this storage primarily for large files, the inconsistent write performance in ATTO (peaking at around 200 MB/s) and the poor sustained sequential write throughput (66.28 MB/s) mean that I either need to add more drives or find another solution.

I didn’t have more drives to test with, unfortunately.