
VNXe OE 2.4 and 2.5″ form factor disks


Once again I had a chance to play around with some shiny new hardware. And once again the hardware was a VNXe 3300, but this time it was something I hadn't seen before: the 2.5” form factor, with 46 x 600GB 10k disks. If you have read about the new RAID configurations in OE 2.4.0, you might guess what kind of configuration I have in mind for this HW.

In this post I will go through some of the new features introduced in VNXe OE 2.4.0 and do some configuration comparisons between the 3.5” and 2.5” form factors, and also between VNXe and VNX. Of course I had to do some performance testing with the new RAID configurations as well, so I will present those results later in this post.

VNXe OE 2.4.0.20932 release notes

Customizable Dashboard

Along with the new OE came the ability to customize the UI dashboard. The look of the Unisphere UI on a new or upgraded VNXe is now similar to Unisphere Remote. You can customize the dashboard and also create new tabs and add the desired view blocks to them.

[Screenshot: the customizable VNXe dashboard]

Jobs

Some operations are now run as background jobs, so you don't have to wait for the operation to finish. The steps of each operation are also shown in more detail on the jobs page. The number of active jobs is shown next to the alerts on the status bar, depending on which page you are on.

[Screenshot: the Jobs page]

New RAID configurations

Now this is one of the enhancements that I've been waiting for, because a VNXe pool can only utilize four RAID groups. With the previous OE this meant that a datastore in a 6+1 RAID 5 pool could only utilize 28 disks. With the new 10+1 RAID 5 pool structure, datastores can utilize as many as 44 disks. This also means increased max IOPS per datastore: with 3.5” form factor 15k disks the RAID 5 pool max IOPS increases from ~4900 to ~7700, and with 2.5” form factor 10k disks it increases from ~3500 to ~5500. IOPS are not the only thing to look at, though; the size of the pool matters too, not to forget the rack space that the VNXe will use. While I was sizing the last VNXe that we ordered, I made this comparison chart of pool size, IOPS and rack space with different disk form factors in VNX and VNXe.
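The math behind those figures is simple enough to sanity-check. Here is a minimal sketch that reproduces them, assuming rule-of-thumb per-disk values of ~175 IOPS for a 15k disk and ~125 IOPS for a 10k disk (back-calculated from the numbers above, not official EMC figures):

```python
# Rough pool sizing math: a VNXe pool holds at most four RAID groups,
# so the RAID group width caps both disk count and estimated IOPS.
RAID_GROUPS_PER_POOL = 4

# Assumed per-disk IOPS rules of thumb (not EMC specifications).
PER_DISK_IOPS = {"15k": 175, "10k": 125}

def pool_limits(raid_width, disk_type):
    """Return (max disks, estimated max IOPS) for a pool."""
    disks = RAID_GROUPS_PER_POOL * raid_width
    return disks, disks * PER_DISK_IOPS[disk_type]

for name, width in (("6+1 RAID 5", 7), ("10+1 RAID 5", 11)):
    for disk_type in ("15k", "10k"):
        disks, iops = pool_limits(width, disk_type)
        print(f"{name}, {disk_type}: {disks} disks, ~{iops} IOPS")
# 6+1, 15k: 28 disks, ~4900 IOPS   10+1, 15k: 44 disks, ~7700 IOPS
# 6+1, 10k: 28 disks, ~3500 IOPS   10+1, 10k: 44 disks, ~5500 IOPS
```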

[Comparison chart: pool size, IOPS and rack space for different disk form factors in VNX and VNXe]

An interesting setup is the VNXe 3150 with 2.5” form factor disks: 21TB and 5500 IOPS packed into 4U of rack space. A VNXe 3300 with the same specs would take 5U and a VNX5300 would take 6U. Of course the SP performance is a bit different between these arrays, but so is the price.

Performance

I've already posted some performance test results from the VNXe 3100 and 3300, so I added those results to the charts for comparison. I've also run some tests on a VNX 5300 that I haven't posted yet and added those results to the charts as well.

[Charts: average MBps, average IOPS and average latency for each configuration]

There is a significant difference in max throughput between the 1G and 10G modules on the VNXe. Then again, the real-life test results are quite similar.
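The max throughput gap is easy to explain with back-of-the-envelope wire-speed math. A small sketch, assuming roughly 10% TCP/iSCSI protocol overhead (an illustrative figure, not a measurement):

```python
# Approximate usable iSCSI throughput per path after protocol
# overhead (the 10% overhead figure is an assumption).
def iscsi_ceiling_mbps(link_gbit, efficiency=0.9):
    return link_gbit * 1000 / 8 * efficiency

for link in (1, 10):
    print(f"{link} GbE: ~{iscsi_ceiling_mbps(link):.0f} MB/s per path")
# 1 GbE: ~112 MB/s per path, 10 GbE: ~1125 MB/s per path
```

A sequential 32KB workload saturates a 1G path long before the storage processor or disks do, while small-block random workloads are bound by disk latency, which is why the real-life results land so close together.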

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


MS iSCSI vs. RDM vs. VMFS


Have you ever wondered if there is a real performance difference when a LUN is connected using the Microsoft (MS) iSCSI initiator, using raw device mapping (RDM), or when the virtual disk is on VMFS? You might also have wondered if multipathing (MP) really makes a difference. While investigating other iSCSI performance issues I ended up running some Iometer tests with different disk configurations. In this post I will share some of the test results.

Test environment [list edited 9/7/2011]

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two Dell PowerConnect switches
  • Two 10G iSCSI NICs per ESXi host, four iSCSI paths in total
  • Jumbo frames enabled
  • ESXi 4.1U1
  • Virtual Win 2008 (1vCPU and 4GB memory)

Disk Configurations

  • 4x300GB 15k FC Disk RAID10
    • 100GB LUN for VMFS partition (8MB block size)
      • 20GB virtual disk (vmdk) for VMFS and VMFS MP tests
    • 20GB LUN for MS iSCSI, MS iSCSI MP, RDM physical and RDM virtual tests

The MS iSCSI initiator used the virtual machine's VMXNET 3 adapter (one or two, depending on the configuration), which was connected to the dedicated iSCSI network through the ESXi host's 10G NIC. MS iSCSI initiator multipathing was configured using the Microsoft TechNet Installing and Configuring MPIO guide. Multipathing for RDM and VMFS disks was configured by enabling the round robin path selection policy. When multipathing was enabled there were two active paths to the storage.
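For reference, on ESX/ESXi 4.x the round robin policy is set per device with esxcli. A minimal, illustrative sketch of scripting that step from the host shell (the device ID is a placeholder, and the Python wrapper is purely for illustration):

```python
# Illustrative only: set the round robin path selection policy on a
# device from an ESXi 4.x shell. The naa ID below is a placeholder.
import subprocess

def set_round_robin(device_id):
    # vSphere 4.x syntax; 5.x and later moved this under
    # "esxcli storage nmp device set".
    subprocess.check_call([
        "esxcli", "nmp", "device", "setpolicy",
        "--device", device_id,
        "--psp", "VMW_PSP_RR",
    ])

set_round_robin("naa.60060160...")  # hypothetical device ID
```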

Iometer configuration

When I was trying to figure out the best way to test the different disk configurations, I found the post "Open unofficial storage performance thread" on VMware Communities. The thread includes an Iometer configuration that tests maximum throughput and also simulates a real-life scenario, and other community users have posted their results there. I decided to use the Iometer configuration from the thread so that I could compare my results with the others.

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 65% read / 35% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 70% read / 30% write distribution
  • 5 minute run time
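All four access specifications share the same worker count, queue depth and a ~3.8 GiB working set (8,000,000 sectors x 512 bytes); they differ only in block size, read/write mix and randomness. A compact restatement of the lists above as data, handy for scripting or side-by-side comparison:

```python
# The four Iometer access specs above as data. All use 1 worker,
# 64 outstanding I/Os, 500 transactions per connection, 5 minute runs.
SECTOR_BYTES = 512
MAX_DISK_SECTORS = 8_000_000  # "max disk size" in sectors

#        name,                     KiB, %read, %random
SPECS = [
    ("Max Throughput-100%Read",     32,  100,    0),
    ("Max Throughput-50%Read",      32,   50,    0),
    ("RealLife-60%Rand-65%Read",     8,   65,   60),
    ("Random-8k-70%Read",            8,   70,  100),
]

print(f"Working set: {MAX_DISK_SECTORS * SECTOR_BYTES / 2**30:.1f} GiB")
for name, kib, read, rand in SPECS:
    print(f"{name}: {kib}KB, {read}% read, {rand}% random")
```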

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (i.e. a difference of several hundred IOPS), a third test was run and the results are the average of those three runs.

Conclusions

Looking at these results, VMFS performed very well with both single and multipath configurations. Both RDM disks with multipathing come really close to the performance of VMFS. And then there is the MS iSCSI initiator, which gave somewhat conflicting results. You would think that multipathing would give better results than a single path, but that was actually the case only in the max throughput test. Keep in mind that these tests were run on a virtual machine on ESXi and that the MS iSCSI initiator was configured to use virtual NICs. I would guess that Windows Server 2008 running on a physical server with the MS iSCSI initiator would give much better results.

Overall, VMFS would be the best choice for virtual disks, but it's not always that simple. Some clustering software doesn't support virtual disks on VMFS, and then the options are RDM or MS iSCSI. There can also be limitations on physical or virtual RDM disk usage.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


My first EMC World


This year's EMC World is going to be my first and I'm really looking forward to it. I've been working with EMC's products as a customer for the past 6 years, and our whole storage infrastructure is built on top of their products. I already got the OK to go last year, but the due date of my second child was too close, so I decided not to go.

For the past couple of weeks I've been browsing through the conference schedule and trying to figure out which sessions to attend. There are lots and lots of good breakout sessions, but four days don't seem to be enough. I always book my breakout session schedule too tight and end up not spending enough time at the exhibition and meetups. You can watch the recorded breakout sessions later, but you can't meet all the people later. So I decided to spend more time socializing and networking than I have at past conferences.

Breakout Sessions

There will be over 500 breakout sessions available, and here are some that I'm definitely interested in:

EMC Unisphere Analyzer – Hands-on Workshop

VNXe System Architecture

SQL Server on VMware – Architecting for Performance

vLabs

There will also be a 200-seat virtualized hands-on vLab where you can get familiar with several of EMC's products, including the new VNX and VNXe product lines. A group of EMC employees has been working hard to make this happen; big thanks to them already. I know I'm going to be spending a fair amount of time in one of those 200 seats. I'm also keen to see Nicholas Weaver's vLab stats, which he's been teasing about on Twitter this week. This guy can create some cool stuff.

Bloggers lounge

If you are a blogger attending EMC World, you need to check out the Bloggers Lounge. It is the place to meet other bloggers and maybe also do a quick post on your blog. Some bloggers that have been around for a long time might not consider me a real blogger with my whopping 4 blog posts, but you need to start somewhere, right? As a beginner I have lots of questions for other bloggers. I'm also interested in talking to other newbie bloggers and sharing thoughts.

Spousetivities

Traveling with your family or spouse? Spousetivities is what you need to get your spouse involved with. Spousetivities organizes fun activities and happenings for spouses around the conference city. It was created by Scott Lowe's wife Crystal, who had attended several technology conferences as a spouse and found herself bored while Scott was in sessions. I wish I had known about this when I travelled to VMworld '08 and '09 with my family. There are also some good deals for all EMC World attendees on the Spousetivities site, so check it out.

vStogies

Apparently this meetup has taken place at past VMworlds without my knowledge. I like good cigars and I'm not going to miss this one: good cigars and tech talk sound good. Follow #vStogies on Twitter for the time and place. I've heard there is a good cigar place in Caesars Palace.

vBeers and storagebeers

Another good meetup, organized by Bas Raayman. It is going to be all about good food, drinks, tech talk and socializing. I bet it is going to be a good night!

Chad’s World

Chad's World will be live on Wednesday night before the party, and there will also be a happy hour. I wonder if they'll be serving "frosty cold Canadian adult beverages" there. I've been watching the Chad's World episodes and, to be honest, they are funny but also have really informative content. Getting customers involved in something like that is really awesome. Wade and Chad, you rock!

Hopefully I'll see you at EMC World. It is always nice to meet new people and share thoughts. I'm looking forward to sharing experiences about EMC products, VMware, disaster recovery and anything IT related.

