MS iSCSI vs. RDM vs. VMFS


Have you ever wondered if there is a real performance difference when a LUN is connected using the Microsoft (MS) iSCSI initiator, mapped as a raw device mapping (RDM), or presented as a virtual disk on VMFS? You might also have wondered if multipathing (MP) really makes a difference. While investigating other iSCSI performance issues I ended up running some Iometer tests with different disk configurations. In this post I will share some of the test results.

Test environment [list edited 9/7/2011]

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two Dell PowerConnect switches
  • Two 10Gb iSCSI NICs per ESXi host, for a total of four iSCSI paths
  • Jumbo frames enabled
  • ESXi 4.1U1
  • Virtual Windows Server 2008 (1 vCPU and 4 GB memory)

Disk Configurations

  • 4x300GB 15k FC Disk RAID10
    • 100GB LUN for VMFS partition (8MB block size)
      • 20GB virtual disk (vmdk) for VMFS and VMFS MP tests
    • 20GB LUN for MS iSCSI, MS iSCSI MP, RDM physical and RDM virtual tests

The MS iSCSI initiator used the virtual machine’s VMXNET 3 adapters (one or two, depending on the configuration), which were connected to the dedicated iSCSI network through the ESXi host’s 10Gb NICs. MS iSCSI initiator multipathing was configured by following the Microsoft TechNet “Installing and Configuring MPIO” guide. Multipathing for the RDM and VMFS disks was configured by enabling the Round Robin (VMW_PSP_RR) path selection policy, roughly as sketched below. When multipathing was enabled there were two active paths to the storage.
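
As a rough illustration only (my own scripting, not the exact commands used for these tests), the sketch below shows how a LUN could be switched to Round Robin with esxcli. It assumes the ESXi 4.x esxcli syntax, that esxcli is on the PATH (for example in the Tech Support Mode shell), and the device identifier is a placeholder.

```python
# Sketch: set the Round Robin path selection policy on a LUN with esxcli.
# Assumes ESXi 4.x esxcli syntax; the device ID below is a placeholder.
import subprocess

DEVICE = "naa.600601601234567890"  # placeholder LUN identifier, not a real one


def set_round_robin(device):
    """Set the VMW_PSP_RR (Round Robin) policy on the given device."""
    subprocess.check_call(
        ["esxcli", "nmp", "device", "setpolicy",
         "--device", device, "--psp", "VMW_PSP_RR"])


def list_devices():
    """Print the NMP device list so the policy change can be verified."""
    subprocess.check_call(["esxcli", "nmp", "device", "list"])


if __name__ == "__main__":
    set_round_robin(DEVICE)
    list_devices()
```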

Iometer configuration

While trying to figure out the best way to test the different disk configurations, I found the post “Open unofficial storage performance thread” on the VMware Communities. The thread includes an Iometer configuration that tests maximum throughput and also simulates a real-life scenario, and other community users have posted their results there. I decided to use the same Iometer configuration so that I could compare my results with theirs. The four access specifications are listed below, and they are also collected into a small summary sketch after the lists.

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 65% read / 35% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 70% read / 30% write distribution
  • 5 minute run time
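
For reference, here are the four access specifications collected into a small Python structure. This is only a convenience summary of the lists above for labeling and post-processing results, not Iometer’s own .icf configuration format.

```python
# The four Iometer access specifications above, as plain Python data.
COMMON = {
    "workers": 1,
    "max_disk_sectors": 8000000,
    "outstanding_ios_per_target": 64,
    "transactions_per_connection": 500,
    "run_time_minutes": 5,
}

ACCESS_SPECS = [
    {"name": "Max Throughput-100%Read",  "transfer_kb": 32, "read_pct": 100, "random_pct": 0},
    {"name": "Max Throughput-50%Read",   "transfer_kb": 32, "read_pct": 50,  "random_pct": 0},
    {"name": "RealLife-60%Rand-65%Read", "transfer_kb": 8,  "read_pct": 65,  "random_pct": 60},
    {"name": "Random-8k-70%Read",        "transfer_kb": 8,  "read_pct": 70,  "random_pct": 100},
]

if __name__ == "__main__":
    for spec in ACCESS_SPECS:
        print("{name}: {transfer_kb} KB, {read_pct}% read, "
              "{random_pct}% random".format(**spec))
```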

Test results

Each Iometer test was run twice, and the results are the average of those two runs. If the runs differed too much (i.e. by several hundred IOPS), a third test was run and the results are the average of all three runs.
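
Below is a minimal sketch of how that averaging could be scripted. It assumes the per-run numbers have already been pulled out of Iometer’s result files into a simple CSV with test, run and iops columns; the file name and column layout are my own assumptions, not Iometer’s raw export format.

```python
# Minimal sketch: average the IOPS of repeated runs per test and flag tests
# whose two runs differ by more than a few hundred IOPS (i.e. a third run
# would be needed). Input file name and columns (test,run,iops) are assumed.
import csv
from collections import defaultdict

THRESHOLD_IOPS = 300  # roughly "several hundred IOPS" difference


def summarize(path="iometer_runs.csv"):
    runs = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            runs[row["test"]].append(float(row["iops"]))
    for test, iops in sorted(runs.items()):
        average = sum(iops) / len(iops)
        spread = max(iops) - min(iops)
        flag = "  <- run a third test" if len(iops) == 2 and spread > THRESHOLD_IOPS else ""
        print("%-45s %8.0f IOPS (avg of %d runs)%s" % (test, average, len(iops), flag))


if __name__ == "__main__":
    summarize()
```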

Conclusions

Looking at these results, VMFS performed very well with both a single path and multipathing. Both RDM disk types with multipathing came really close to VMFS performance. The MS iSCSI initiator, on the other hand, gave somewhat conflicting results: you would expect multipathing to beat a single path, but that was the case only in the max throughput test. Keep in mind that these tests were run on a virtual machine on ESXi and that the MS iSCSI initiator was configured to use virtual NICs. I would guess that Windows Server 2008 running the MS iSCSI initiator on a physical server would give much better results.

Overall, VMFS would be the best place to put the virtual disk, but it’s not always that simple. Some clustering software doesn’t support virtual disks on VMFS, leaving RDM or the MS iSCSI initiator as the options. There can also be limitations on using physical or virtual RDM disks.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.
