
Hands-on with VNXe 3300 Part 6: Performance


Now that the VNXe is installed and configured and some storage has been provisioned to the ESXi hosts, it is time to look at performance. As I mentioned in the first post, I had already gathered test results from the CX4-240 using Iometer, and I wanted to run similar tests against the VNXe so that the results would be comparable.

Hands-on with VNXe 3300 series:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Test environment CX4-240

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of four paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Test environment VNXe 3300

  • EMC VNXe 3300
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of two paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths (see Trunk restrictions and Load balancing)
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Iometer Configuration

I used the Iometer setup described in VMware's Recommendations for Aligning VMFS Partitions document (page 7).

Disk configuration

I had to shorten the explanations in the charts, so here are the definitions:

  • CX4 FC 15D
    • 15 15k FC Disk RAID5 Pool on CX4-240 connected with iSCSI
  • CX4 SATA 25D
    • 25 7.2k SATA Disk RAID5 Pool on CX4-240 connected with iSCSI
  • VNXe 21D 2.0.3
    • 21 15k SAS Disk RAID 5 (3×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.0.3
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.1.0
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.1.0
  • VNXe RG 2.1.0
    • 7 15k SAS RAID5 (6+1) RG on VNXe connected with iSCSI. VNXe Software version 2.1.0
  • VNXe 28D NFS
    • 28 15k SAS RAID 5 (4×6+1) Pool on VNXe 3300 connected with NFS. VNXe Software version 2.1.0

A 100GB thick LUN was created on each pool and RG, and a 20GB virtual disk was placed on it. This 20GB virtual disk was presented to the virtual Windows server that was used to run the tests. The partition on this disk was created with the diskpart command 'create partition primary align=1024' and formatted with a 32K allocation unit size; a sketch of the steps follows.
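As a minimal sketch (the disk number and drive letter below are placeholders, not details from the actual setup), the preparation inside the Windows guest looked roughly like this:

  rem hedged sketch: prepare and format the 20GB test disk inside the guest
  diskpart
  DISKPART> select disk 1
  DISKPART> create partition primary align=1024
  DISKPART> assign letter=E
  DISKPART> exit
  rem format the new partition with a 32K allocation unit size
  format E: /FS:NTFS /A:32K /Q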

Trunk restrictions

Before I go through the results I want to address a limitation of the trunk between the VNXe and the 10Gb switch it is connected to. Even though there is a 4Gb (4x1Gb) trunk between the storage and the switch, the maximum throughput is still only that of one physical port.

While I was running the tests I had an SSH connection open to the VNXe and ran the netstat -i -c command to see what was going on with the trunk and the individual ports. The first screen capture was taken while the 8k sequential read test was running. You can see that all the traffic is going through one port:

The second screen capture was taken while the VNXe was in production and several virtual machines were accessing their disks. In this case the load is balanced randomly between the physical ports:
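For anyone who wants to repeat the check, this is roughly what I was running over SSH on the VNXe (netstat's -i option prints per-interface counters and -c keeps refreshing them):

  # watch the per-port RX/TX counters on the SP while a test is running;
  # if one trunk member carries all the traffic, only its counters keep climbing
  netstat -i -c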

Load balancing

The VNXe 3300 is an active/active array but it doesn't support ALUA, which means that a LUN can only be accessed through one SP. An iSCSI/NFS server can only have one IP address, and that IP can only be tied to one port or trunk. A LUN, in turn, can only be served by one iSCSI/NFS server, so there is only one path from the switch to the VNXe. The round robin path selection policy can be enabled on the ESX side, but it only helps to balance the load between the ESX NICs. Even without the trunk, round robin can't be used to balance the load across the four VNXe ports.
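For completeness, this is roughly how the round robin policy is enabled per device on ESXi 4.1; the naa identifier below is a placeholder, and on ESXi 5.x the equivalent commands live under the esxcli storage nmp namespace instead:

  # set the path selection policy of a device to round robin (ESXi 4.1)
  esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR
  # verify the policy and the paths of that device
  esxcli nmp device list --device naa.xxxxxxxx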

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (i.e. a difference of several hundred IOPS), a third test was run and the results are the average of those three runs.

The same results as above, but without NFS:

Average wait time

Conclusions

The first thing that caught my eye was the difference between the VNXe 28-disk pool and the 7-disk RG in the random write test. A quote from my last post about the pool structure:

When a LUN is created it will be placed on the RAID group (RG) that is least utilized from a capacity point of view. If the LUN created is larger than the free space on an individual RG, the LUN will be extended across multiple RGs, but there is no striping involved.

The tests were run on a 100GB LUN, so it should have fit in one RG if the information I got was correct. Comparing the random write results of the pool with those of the single RG, it seems that even smaller LUNs are spread across multiple RGs.

Another interesting detail is the difference between software versions 2.0.3 and 2.1.0. Looking at these results it is obvious that the software version has a big effect on performance.

NFS storage performance on the random write test was really bad, but on the 1k sequential read it surprised me by delivering 30,000 IOPS. Based on these tests I would stick with iSCSI and maybe look at NFS again after the next software version.

Overall the VNXe is performing very well compared to the CX. With this configuration the VNXe is hitting the limits of one physical port, which could be fixed by adding 10Gb I/O modules. It would be nice to run the same tests with the 10Gb I/O modules.

We are coming to the end of my hands-on series, and I'll be wrapping it up in the next post.

[Update 2/17/2012] Updated NFS performance results with VNXe 3100 and OE 2.1.3.16008: VNXe 3100 performance

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


VMworld – Long time no see


It is good to see you again this year, my good friend! It has been two years since I last attended VMworld in Las Vegas, and this is going to be my fourth VMworld (2006 LA, 2008 LV, 2009 SF and 2011 LV).

It's been a hectic couple of weeks and I haven't had time to prepare as thoroughly as I would have liked. But I'm happy with the sessions I added to my schedule weeks ago, because all of those are now full. Most if not all sessions will be recorded, so if you attend the conference you can watch the recordings later. Still, if you have questions about the topic or want to talk to the presenter after the session, it is worth attending that specific session in person; if you are only planning to listen, you could just as well watch the recording.

Sessions 

Here is my list of sessions that I’m planning to attend:

  • VSP1628 VMware vSphere Clustering Q&A
  • VSP1926 Getting Started with VMware vSphere Design
  • VSP3205 Technology Preview: VMware vStorage APIs for VM and Application Granular Data
  • VSP1956 The VMware ESXi Quiz Show
  • VSP1425 Ask the Expert vBloggers
  • VSP1956 Protecting SMBs Using Site Recovery Manager 5.0 with VMware vSphere Replication
  • BCA1995 Design, Deploy, Optimize SQL Server on VMware ESXi 5
  • VSP1823 VMware Storage Distributed Resource Scheduler
  • VSP3116 VMware vSphere 5.0 Resource Management Deep Dive
  • VSP2384 Distributed Datacenters with Multiple vCenter Deployments: Best Practices

Networking, parties and meet ups

The whole conference is one big networking opportunity. When you are not attending sessions you should walk around the expo floor and talk to people, ask questions and interact. This is your chance to meet people face to face, get your questions answered and maybe answer someone else's questions too. I'm really looking forward to meeting new people and seeing old friends, and I might bump into old colleagues too. If you see me wandering around, I'm always up for tech talk. I'm also fairly easy to recognize wearing my shirts:

Where there is a conference, there are parties. Vendors usually organize customer appreciation parties, but some individuals and small groups do the same. These are really good networking opportunities. Here is my party/meetup list:

Tips and links

Here are a few links to good blog posts about VMworld happenings and tips. My advice is to wear shoes that you know are comfortable, so no new shoes. A pedometer is also nice to carry around if you want to track how much you walk during the week.

See you at VMworld!


MS iSCSI vs. RDM vs. VMFS


Have you ever wondered if there is a real performance difference when a LUN is connected using the Microsoft (MS) iSCSI initiator, using raw device mapping (RDM), or when the virtual disk is placed on VMFS? You might also have wondered if multipathing (MP) really makes a difference. While investigating other iSCSI performance issues I ended up running some Iometer tests with different disk configurations. In this post I will share some of the results.

Test environment [list edited 9/7/2011]

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two Dell PowerConnect switches
  • Two 10Gb iSCSI NICs per ESXi host, a total of four iSCSI paths
  • Jumbo frames enabled
  • ESXi 4.1U1
  • Virtual Win 2008 (1vCPU and 4GB memory)

Disk Configurations

  • 4x300GB 15k FC Disk RAID10
    • 100GB LUN for VMFS partition (8MB block size)
      • 20GB virtual disk (vmdk) for VMFS and VMFS MP tests
    • 20GB LUN for MS iSCSI, MS iSCSI MP, RDM physical and RDM virtual tests

The MS iSCSI initiator used the virtual machine's VMXNET 3 adapter (one or two depending on the configuration), which was connected to the dedicated iSCSI network through the ESXi host's 10Gb NIC. MS iSCSI initiator multipathing was configured using the Microsoft TechNet Installing and Configuring MPIO guide. Multipathing for the RDM and VMFS disks was configured by enabling the round robin path selection policy. When multipathing was enabled there were two active paths to the storage.
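As a rough sketch of the in-guest MPIO step (the TechNet guide mentioned above is the authoritative procedure), claiming iSCSI-attached devices with the mpclaim utility that ships with the Windows MPIO feature looks something like this:

  rem claim iSCSI-attached devices for MPIO using the documented
  rem Microsoft iSCSI bus hardware id; -r reboots the server afterwards
  mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
  rem after the reboot, list the MPIO disks and their path counts
  mpclaim -s -d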

Iometer configuration

When I was trying to figure out the best way to test the different disk configurations, I found the post "Open unofficial storage performance thread" on VMware Communities. The thread contains an Iometer configuration that tests maximum throughput and also simulates a real-life scenario, and other community users have posted their results there. I decided to use the Iometer configuration posted in the thread so that I could compare my results with theirs.

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 65% read / 35% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 70% read / 30% write distribution
  • 5 minute run time

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (i.e. a difference of several hundred IOPS), a third test was run and the results are the average of those three runs.

Conclusions

Looking at these results, VMFS performed very well with both a single path and multipathing. Both RDM disks with multipathing came really close to the performance of VMFS. And then there is the MS iSCSI initiator, which gave somewhat conflicting results: you would think that multipathing would give better results than a single path, but that was the case only in the max throughput test. Keep in mind that these tests were run in a virtual machine on ESXi and that the MS iSCSI initiator was configured to use virtual NICs. I would guess that Windows Server 2008 running on a physical server with the MS iSCSI initiator would give much better results.

Overall, VMFS would be the best choice for placing the virtual disk, but it's not always that simple. Some clustering software doesn't support virtual disks on VMFS, and then the options are RDM or the MS iSCSI initiator. There can also be limitations on physical or virtual RDM disk usage.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Week after the EMC World


So it has been over a week since EMC World ended. Last week went by fast trying to catch up with all the missed e-mails and work issues that had piled up during EMC World. Sixteen-hour days talking tech kind of drained my batteries, so I had to do some recharging before getting my thoughts together for a blog post. I also edited and uploaded some pictures to Flickr. At first I wasn't going to take the camera with me at all because I wanted to travel light, but I did, and I took some nice shots.

As I already mentioned, those were long days full of good conversations and information sharing. I met lots of great people, some of whom I had been tweeting with but had never seen in person, and some whom I had never interacted with at all. Just what I was looking for. EMC had also organized a place for meeting other bloggers: the Bloggers Lounge. It was nice to sit down on cozy couches after walking around the Venetian, have a great cup of coffee and connect with other bloggers.

Networking wasn't all I was looking for. In my pre-EMC World post I mentioned that I would concentrate more on networking than I have at past conferences. But there were just so many interesting sessions that I couldn't resist attending them. The session itself might not always give you lots of new information, but you can ask questions and go talk to the speakers afterwards. That's how I get the most out of the sessions. I had several good conversations with the speakers and got the information I was looking for.

vLabs

Getting hands-on experience was one of the things I was looking forward to. Many teasers were tweeted and posted on blogs while the vLabs were being put together, and it was all worth the wait. The vLabs were really nicely planned and executed, and most importantly the labs themselves were great. Thanks to all the vSpecialists who worked hard to make the vLabs possible.

EMC Proven Professional

EMC had a great deal on the Proven Professional certifications during EMC World: 50% off all certification exams. At first I thought it was a really good deal but that it wouldn't concern me. Then, a week before EMC World, I ordered the Kindle version of the EMC ISM book and decided that if I had time to read the book and felt confident with the practice test, I would try the exam. Well, I didn't have time to read the whole book but I decided to take the test anyway. I passed, and now I have the EMCISA certification. The next step will be the Cloud Architect certification.

The fun part

There is no conference without parties, and EMC World 2011 was no exception. The most notable ones were Christopher Kusek's party in one of the suites at the Cosmopolitan and the storage beers / vBeers organized by Bas Raayman. I met lots of great people and of course had lots of fun too. Thank you Christopher and Bas for getting all the people together. Good times!

Chad and Wade hosted a great Chad's World live show, and with the help of the vSpecialist team they showed some cool live demos. The room was packed and the crowd was really excited. And of course, as I had guessed, there were plenty of cold Canadian adult beverages served. From there it was nice to head on to the EMC World party where EMC's Barn Dogs and The Fray were performing. Relaxing atmosphere, good food and good music.

Thank you!

Big thanks to EMC and all the people who made EMC World 2011 happen. Looking forward to next year's event.

