
Hands-on with VNXe 3300 Part 6: Performance


Now that the VNXe is installed and configured, and some storage has been provisioned to the ESXi hosts, it is time to look at performance. As I mentioned in the first post, I had already gathered test results from a CX4-240 using Iometer, and I wanted to run similar tests on the VNXe so that the results would be comparable.

Hands-on with VNXe 3300 series:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Test environment CX4-240

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of four paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Test environment VNXe 3300

  • EMC VNXe 3300
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of two paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths (see Trunk restrictions and Load balancing below, and the CLI sketch after this list)
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)
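
Round robin was enabled per LUN on the ESXi side. Purely as an illustration, here is a minimal sketch of how the path selection policy can be checked and set from the ESXi 4.1 command line; the naa identifier is a placeholder rather than an actual device from this environment, and the same change can of course also be made through vCenter.

    # Sketch only (ESXi 4.1 esxcli nmp namespace); the naa ID is a placeholder
    # List devices and the path selection policy (PSP) currently assigned to them
    esxcli nmp device list

    # Enable round robin (VMW_PSP_RR) for a specific LUN
    esxcli nmp device setpolicy --device naa.60060160xxxxxxxx --psp VMW_PSP_RR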

Iometer Configuration

I used the Iometer setup described in VMware’s Recommendations for Aligning VMFS Partitions document (page 7).

Disk configuration

I had to shorten the explanations on the charts so here are the definitions:

  • CX4 FC 15D
    • 15 15k FC Disk RAID5 Pool on CX4-240 connected with iSCSI
  • CX4 SATA 25D
    • 25 7.2k SATA Disk RAID5 Pool on CX4-240 connected with iSCSI
  • VNXe 21D 2.0.3
    • 21 15k SAS Disk RAID 5 (3×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.0.3
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.1.0
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.1.0
  • VNXe RG 2.1.0
    • 7 15k SAS RAID5 (6+1) RG on VNXe connected with iSCSI. VNXe Software version 2.1.0
  • VNXe 28D NFS
    • 28 15k SAS RAID 5 (4×6+1) Pool on VNXe 3300 connected with NFS. VNXe Software version 2.1.0

A 100GB thick LUN was created on each pool and RG, and a 20GB virtual disk was placed on it. This 20GB virtual disk was presented to the virtual Windows server that was used to run the tests. The partition on the disk was created with the diskpart command 'create partition primary align=1024' and formatted with a 32K allocation unit size.
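
For reference, a sketch of those guest-side steps on Windows Server 2003; the disk number and drive letter are placeholders:

    C:\> diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=1024
    DISKPART> assign letter=E
    DISKPART> exit
    C:\> format E: /FS:NTFS /A:32K /Q

On Windows 2003 diskpart cannot format a partition, so the 32K allocation unit size is set with the separate format command.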

Trunk restrictions

Before I go through the results I want to address a limitation of the trunk between the VNXe and the 10Gb switch it is connected to. Even though there is a 4Gb (4x1Gb) trunk between the storage and the switch, the maximum throughput is still only the throughput of one physical port.

While I was running the tests I had an SSH connection open to the VNXe and ran the netstat -i -c command to see what was going on with the trunk and the individual ports. The first screen capture was taken while the 8k sequential read test was running. You can see that all the traffic is going through one port:

The second screen capture was taken while the VNXe was in production and several virtual machines were accessing their disks. In this case the load is spread randomly across the physical ports:
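
For anyone wanting to repeat this check, a rough sketch of the monitoring itself; the service account and management address are assumptions on my part, not taken from this setup:

    # Open an SSH session to the VNXe (account and address are placeholders)
    ssh service@vnxe-management-address

    # Print the interface table repeatedly and watch which trunk member's
    # RX-OK/TX-OK packet counters are increasing during the test
    netstat -i -c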

Load balancing

The VNXe 3300 is an active/active array but it doesn't support ALUA. This means that a LUN can only be accessed through one SP. An iSCSI/NFS server can only have one IP address, and this IP can only be tied to one port or trunk. A LUN can also only be served by one iSCSI/NFS server, so there is only one path from the switch to the VNXe. The round robin path selection policy can be enabled on the ESX side, but this only helps to balance the load between the ESX NICs. Even without the trunk, round robin can't be used to balance the load across the four VNXe ports.
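
As a sanity check of the above, the paths can be inspected from the ESXi 4.1 command line. This is only a sketch; the naa identifier is a placeholder, and with this setup you would expect to see two paths per LUN, one per host NIC, both ending at the same VNXe iSCSI target.

    # List all paths known to NMP; each path shows its runtime name
    # (vmhbaX:C0:Ty:Lz) and the device it belongs to
    esxcli nmp path list

    # Check the round robin settings for a single device (placeholder naa ID)
    esxcli nmp roundrobin getconfig --device naa.60060160xxxxxxxx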

Test results

Each Iometer test was run twice and the results are the average of those two test runs. If the results were not similar enough (i.e. a difference of several hundred IOPS), a third test was run and the results are the average of those three runs.

Same as previous but without NFS results:

Average wait time

Conclusions

The first thing that caught my eye was the difference between the 28-disk VNXe pool and the 7-disk RG in the random write test. A quote from my last post about the pool structure:

When a LUN is created it will be placed on the RAID group (RG) that is the least utilized from a capacity point of view. If the LUN created is larger than the free space on an individual RG, the LUN will then be extended across multiple RGs, but there is no striping involved.

The tests were run on a 100GB LUN, so it should have fit in one RG if the information I had been given was correct. Comparing the pool results with the single-RG random write results, it seems that even smaller LUNs are spread across multiple RGs.

Another interesting detail is the difference between software versions 2.0.3 and 2.1.0. Looking at these results, it is obvious that the software version has a big effect on performance.

NFS storage performance with random writes was really poor, but with 1k sequential reads it was a surprise, delivering 30,000 IOPS. Based on these tests I would stick with iSCSI and maybe look at NFS again after the next software version.

Overall the VNXe performs very well compared to the CX. With this configuration the VNXe is hitting the limit of a single physical port, which could be fixed by adding 10Gb I/O modules. It would be nice to run the same tests with the 10Gb I/O modules.

We are coming to the end of my hands-on series, and I'll be wrapping it up in the next post.

[Update 2/17/2012] Updated NFS performance results with VNXe 3100 and OE 2.1.3.16008: VNXe 3100 performance

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Hands-on with VNXe 3300 Part 4: Storage pools


When EMC announced that the VNXe would also utilize storage pools, my first thought was that they would be similar to the pools on the CX/VNX: a storage pool would consist of five-disk RAID 5 groups and LUNs would be striped across all of these RAID groups to utilize all the spindles. After some discussions with EMC experts I found out that this is not how the pool works on the VNXe. In this part I will go a bit deeper into the pool structure and also explain how a storage pool is created.

Hands-on with VNXe 3300 series [list edited 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Pool Structure

The VNXe 3300 can be furnished with SAS, NL-SAS or Flash drives. The one that I was configuring had 30 SAS disks, so there were two options when creating storage pools: 6+1 drive RAID 5 groups or 3+3 RAID 1/0 groups. I chose to create one big pool with 28 disks (four 6+1 RAID 5 groups) and one hot spare disk (EMC recommends one hot spare disk for every 30 SAS disks). EMC also recommends not putting any I/O intensive load on the first four disks because the PSL (Persistent Storage Layout) is located on those disks. I wanted to test storage pool performance with all the available disks, so I ignored this recommendation and used the first four disks in the pool as well.

When a LUN is created it will be placed on the RAID group (RG) that is the least utilized from a capacity point of view. If the LUN created is larger than the free space on an individual RG, the LUN will then be extended across multiple RGs, but there is no striping involved. So depending on the LUN size and pool utilization, a new LUN could reside either in one RG or in several RGs. This means that only one RG is used for sequential workloads, while random workloads could be spread over several RGs. If disks are added to the storage pool, the newly added RGs are the least utilized and will be used first when new LUNs are created. So a storage pool on the VNXe can be considered more of a capacity pool than a performance pool.

Before I wrote this post I was in contact with an EMC Technology Consultant (TC) and an EMC vSpecialist to get my facts right. Both of them confirmed that LUNs in a VNXe pool are not striped across RGs, and the pool structure described above was explained to me by the EMC TC. However, looking at the test results that I posted in part 6 and at the feedback I got, the description above is not accurate. Here is a quote from a comment by Brian Castelli (an EMC employee):

 “When provisioning storage elements, such as an iSCSI LUN, the VNXe will always stripe across as many RAID Groups as it can–up to a maximum of four.”

Based on Brian’s comment, LUNs in a VNXe pool are striped across multiple RGs. [Edited 9/15/2011]

Creating Storage Pools

Storage pools are configured and managed from System – Storage Pools. If no pools have been configured, only the Unconfigured Disk Pool is shown.

Selecting Configure Disks starts the disk configuration wizard, which offers three options: Automatically configure pools, Manually create a new pool, and Manually add disks to an existing pool. It is quite easy to understand what each option stands for. I chose the Automatically configure pools option; with automatic configuration, 6+1 disk RAID 5 groups are used to create the pool.

The next step is to select how many disks are added to the new pool, and you can see that the options are multiples of seven (6+1 RAID 5).

A hot spare pool will also be created when using the automatic pool configuration option.

When selecting Manually create a new pool there is a list of alternatives (see picture below) based on the desired purpose of the pool. This makes creating a storage pool easy because the VNXe suggests the RAID level based on the selection the user made. There is also an option further down in the wizard where the user can select the number of disks used and the RAID level (Balanced Perf/Cap R5 or High Performance R1/0).

Conclusions

It feels a little disappointing to find out that the pool structure wasn’t what I was expecting it to be. But maybe my expectations were also too high in the first place.

Creating a storage pool is in line with one of EMC’s design goals for the VNXe: simplicity. When the automatic configuration option is selected, Unisphere takes care of deciding which disks are used in the pool and how many hot spares are needed, based on EMC’s best practices.

The next part will cover storage provisioning from VNXe and also using EMC’s VSI plug-in for vCenter.

