Category Archives: CX

New Year, New Continent, New Role

Some of you might have noticed that lately I haven’t been as active on social media as before. There are a couple of reasons for that. One busy factor not mentioned in the title has been my involvement in a VNX implementation project that has taken a lot of my time. The goal of that project was to replace CX/MirrorView/SRM with VNX/RecoverPoint/SRM, and it didn’t go that smoothly. The project is now finalized and everything worked out in the end. I learned a lot during the project and have some good ideas for future blog posts, e.g. RecoverPoint journal sizing.


New Continent

In February 2008 my wife and I packed everything we had, sold our condo in Finland and moved to Atlanta because of my internal transfer. We moved to the Atlanta suburbs and really didn’t know that many people around there. The initial plan was to stay for two years and then come back home. Well, those two years became almost five years. During that time we got very close with our neighbors and got to know lots of other great people from the same neighborhood. It was our home and we felt like we belonged to the community. The two most amazing things that happened during that time were the births of our children. It was hard to be so far from “home” and family in the beginning. We saw family once a year when we visited Finland, and almost all of our closest family members visited us at least once. We enjoyed our time in the US, but then came the time to move back to Finland. Once again everything we had was packed into a container and shipped to Finland. I had mixed feelings about the move. I was excited to go back “home” but sad to leave so many good friends behind. Driving to the Atlanta airport one last time wasn’t easy at all. All the good memories rushed through my mind. It was mid-December 2012 and we moved back to Finland, to the snow and coldness.

On my way to the office


New Role

This spring marks 9 years with my current company. Right after I joined I started my virtualization journey with GSX and then with ESX 2.0. From that point on my main focus has been on virtualization and storage. I’ve been working as an architect, been involved in getting ESX from version 2 to 5, and implemented new features as they have been announced, e.g. SRM and View. I’ve also gotten my hands dirty implementing an EMC CX300, upgrading it to a CX3-40C and replacing that with a CX4-120 and a CX4-240. As you might have noticed from my VNXe posts, I’ve done some work with those too. And of course, with the latest project I also had a chance to get some hands-on experience with VNX and RecoverPoint. In my new role I’ll be managing a team responsible for developing and maintaining the company’s whole infrastructure, including virtualization, networking, storage, Windows/Linux servers and so forth. This is the same team that I’ve been a part of in the past years. I’m looking forward to the new challenges that the new role brings to my desk. And don’t worry, I’ll still be involved with the technical stuff and will continue blogging about virtualization and storage. There might be some 2.5” form-factor VNXe and VNX/RecoverPoint posts coming out soon.

My first ESX installation media


Thank you, all my followers, for the year 2012, and I hope this year will be even better. I’m happy to see that my posts over the past year have been helpful.

EMC World 2012: Social Networking

At the past couple of conferences I’ve attended, I’ve spent more time on social networking than in the sessions. Now that most of the sessions are recorded, it is possible to watch them afterwards whenever you have some spare time. So my focus at this year’s EMC World was to get into some interesting HOLs, as I described in my previous post, and also to talk with vendors, other attendees and bloggers. In fact, the contacts I made at last year’s EMC World eventually got me into this year’s EMC World.

EMC Community Network (ECN)

I spent some of my time at the EMC online booth talking to people about the ECN and also answering questions about VNXe. I met many people who had not registered on ECN. I asked whether they had any issues or problems in their environments, and everyone answered yes. I then asked how they usually solve those issues; most answered that they resolve them themselves, and some said that they contact support. This is where social networking comes in handy. Why not ask someone else? Maybe somebody has already solved the same issue that you are having. So ask a question on Twitter or post it to ECN, and you might get an answer sooner than you thought.

EMC Support Forum Legends – Buzz Talk

Who did I meet then?

Here are some of the people that I met:

@mjbrender @andybanta @Kiwi_Si @ionthegeek @MabroIRL @da5is @lynxbat @rolltidega @NerdBlurt @sthulin @sixfootdad @TylerAltrup @LizONeal @sakacc @NixFred @davedfw @huberw @juliamak @50mu @BrandonJRiley @BasRaayman @sysxperts @vTexan @Jon_2vcps @chris_cicotte @VirtualChappy @stu @scott_lowe @crystal_lowe @CloudOfCaroline @the_sboss @traversn @jellers @davemhenry @wadeoharrow @CommsNinja @lalazarescou @eanthony @Backupbuddha @clintonskitson @mcowger @vDirtybird @jgcole01 @jasemccarty @keithnorbie

Interesting conversations

One of the best conversations I had during EMC World was with Dynamox, one of the top ECN contributors. It was the first time we had met face to face, even though we live just across the city from each other. We talked about VNXe, VNX, CLARiiON, our environments and implementations, issues that we were facing with those, social networking and ECN. And the whole conversation took place next to a slot machine – only in Vegas. We met a couple of times during the conference and were also on the same flight back home, so we continued our conversations throughout the week.

So during EMC World I met some great people, some that I already knew and some that I didn’t. I hope we can keep in touch, and maybe we’ll meet at VMworld or the next EMC World.

EMC World 2012: Hands-On Labs

Hands-On Labs (HOLs) are always on my priority list when attending conferences or local EMC/VMware forums. Recorded breakout sessions can be viewed after the conference, but HOLs are not available afterwards, at least not yet. The HOL setup was similar to the last VMworld HOLs: most of the labs were running on virtual appliances and were accessed using zero/thin clients.

VNXe Labs

There were two VNXe Hands-on labs available:

VNXe Unisphere Administrator

Remote Monitoring of Multiple VNXe Systems

I did the first HOL, where the objectives were to create a CIFS server/share and a generic iSCSI server/datastore and then connect those to a Windows VM. For someone who has been working with CIFS and generic iSCSI servers this might already be a familiar topic. But for someone who has only been working with vSphere datastores on the VNXe, it was a good introduction to the CIFS and generic iSCSI side of the VNXe.

While I was at the lab I had a quick chat with Mike Gore from EMC, who is responsible for the VNXe labs at EMC World. I asked him why there weren’t any VNXe labs focusing on the vSphere side, and he mentioned that those could be available at future events and that the current labs are more of an introduction to the VNXe.

Unisphere Analyzer: Evaluating FAST Cache and FAST VP on VNX

I’ve been working with CLARiiONs for the past 8 years, so Navisphere, Unisphere and Analyzer have become very familiar to me. I still wanted to do this HOL and see if there was something that could help me in the future when digging into Analyzer statistics. It was a very good lab for refreshing my memory, and it also gave some new hints on what to look for in Analyzer.

ProSphere Storage Resource Management

This was the most interesting HOL that I took. I’ve been looking into ProSphere since it was released but never had a chance to test it in our environment. Like I mentioned earlier, I’ve been using Unisphere Analyzer to dig into CLARiiON performance statistics, but it is really hard to see the overall performance using Analyzer. ProSphere, on the other hand, gives a great overall view of the environment, including host, storage path and storage performance. I’m definitely going to use it in the near future.


I’ve been using MirrorView for several years now and wanted to see what RecoverPoint would offer compared to MirrorView. The answer is simple: a lot more. Of course, when comparing these two it is good to first evaluate the data protection needs. RecoverPoint might be overkill just to replicate one VMware datastore and would not be the most cost-efficient way to do it. But it was a very useful lab and gave me a good overview of RecoverPoint and what it could be used for.

One can spend several hours viewing demos and reading documents, but in my opinion hands-on experience is the best way to learn new things. So once again EMC succeeded in delivering a good number of very well executed hands-on labs. Big thanks to the vSpecialists and other crew members who made the HOLs possible. I hope I can attend more HOLs at future events.

Also check out Chad’s post about the HOLs.

EMC World 2012: Wrap-up

A week after EMC World, my brain is still digesting all the information from the conference. I started going through my notes from last week and wondered how I could squeeze everything into one post. So I decided to do separate posts about Hands-On Labs and Social Networking. In this post I will go through some of the new product announcements and also other interesting things that I witnessed during EMC World.

A couple of interesting facts that Joe Tucci mentioned in his keynote:

  • The E and M in EMC come from the founders’ names: Richard Egan and Roger Marino.
  • The first “EMC World” was held in 2001; it was called “EMC Wizards” and had about 1,300 customers attending.


Pat Gelsinger’s keynote was all about new product announcements and demos. Chad Sakac did a great job with the demos and also managed to scare everyone with a big explosion during his second demo. There were actually 42 new products and technologies announced, and one of them was very interesting: the VNXe 3150. I already did a quick post about the VNXe 3150 highlights during EMC World.

Another really interesting announcement was the VNX software upgrade that goes by the name “Inyo” at this point and will be available in the second half of 2012. It brings several new enhancements and features to the VNX, including two that I’ve been waiting for since FAST pools were initially introduced: mixed pools and automatic pool rebalancing. Also a very welcome addition is the Storage Analytics package. Chad Sakac and Sean Thulin have written great posts covering “Inyo” and its new features.

Session highlights

One interesting session that I attended was titled “VNX & VNXe: Unisphere future visions and directions”. The main topic was the future single Unisphere for VNX and VNXe, combining simplicity and flexibility. This will bring the VNXe’s simplicity and application-centric storage management to the VNX but will not take away the VNX’s flexibility and the ability to manually create datastores and LUNs. There will also be some improvements to serviceability, simplifying self-service and problem notifications. Downloading updates and scheduling them using Unisphere is one of the major serviceability improvements that were mentioned.

In the future both VNX and VNXe can be managed using Unisphere Remote, which will also gain performance monitoring, history and analytics. Last but not least, a mobile app (monitoring first) and a unified CLI are also on the way.

Chad’s World

Once again Chad and Wade filled the room with their entertaining “Chad’s World Live II – The Comeback Tour” show. And of course they had something face-melting to announce: Project Razor.

If you’re wondering who the gorilla hugging Chad is, check out the Cloud Freaky 2012 video:

Hands-on with VNXe 3300 Part 6: Performance

Now that the VNXe is installed and configured, and some storage has been provisioned to the ESXi hosts, it is time to look at performance. As I mentioned in the first post, I had already gathered some test results from a CX4-240 using Iometer, and I wanted to run similar tests on the VNXe so that the results would be comparable.

Hands-on with VNXe 3300 series:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Test environment CX4-240

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of four paths between storage and ESXi. Round robin path selection policy enabled for each LUN, with two active I/O paths
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Test environment VNXe 3300

  • EMC VNXe 3300
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of two paths between storage and ESXi. Round robin path selection policy enabled for each LUN, with two active I/O paths (see Trunk restrictions and Load balancing)
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Iometer Configuration

I used the Iometer setup described in VMware’s Recommendations for Aligning VMFS Partitions (page 7) document.

Disk configuration

I had to shorten the explanations on the charts so here are the definitions:

  • CX4 FC 15D
    • 15 15k FC Disk RAID5 Pool on CX4-240 connected with iSCSI
  • CX4 SATA 25D
    • 25 7.2k SATA Disk RAID5 Pool on CX4-240 connected with iSCSI
  • VNXe 21D 2.0.3
    • 21 15k SAS Disk RAID 5 (3×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.0.3
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.1.0
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.1.0
  • VNXe RG 2.1.0
    • 7 15k SAS RAID5 (6+1) RG on VNXe connected with iSCSI. VNXe Software version 2.1.0
  • VNXe 28D NFS
    • 28 15k SAS RAID 5 (4×6+1) Pool on VNXe 3300 connected with NFS. VNXe Software version 2.1.0

A 100GB thick LUN was created in each pool and RG, and a 20GB virtual disk was stored on it. This 20GB virtual disk was presented to the virtual Windows server that was used to conduct the tests. The partition on this disk was created using the diskpart command ‘create partition primary align=1024’ and was formatted with a 32K allocation unit size.
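For reference, the partitioning and formatting steps above can be scripted roughly like this on a Windows Server 2003 era guest; the disk number (1) and drive letter (E:) are placeholders for your environment:

```shell
rem Build a diskpart script that creates a partition aligned to 1024 KB.
rem Disk 1 and letter E: are hypothetical; adjust to your setup.
(
echo select disk 1
echo create partition primary align=1024
echo assign letter=E
echo exit
) > align.txt
diskpart /s align.txt

rem Format with a 32K allocation unit size, quick format:
format E: /FS:NTFS /A:32K /Q /Y
```

On Windows Server 2008 and later the alignment step is unnecessary, since new partitions are aligned automatically.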

Trunk restrictions

Before I go through the results I want to address a limitation of the trunk between the VNXe and the 10Gb switch it is connected to. Even though there is a 4Gb (4x1Gb) trunk between the storage and the switch, the maximum throughput of a single flow is only the throughput of one physical port.
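The reason a single iSCSI session is capped at one member port is that link aggregation typically hashes each flow (for example by source/destination address) onto one physical port, so every packet of that flow uses the same link. The sketch below illustrates the idea; the hash function and port count are simplifications, not the switch’s actual algorithm:

```python
# Illustrative sketch: a trunk picks one member port per flow, so a single
# host-to-storage session can never use more than one port's bandwidth.
def trunk_port(src_ip: str, dst_ip: str, n_ports: int = 4) -> int:
    """Pick a member port for a flow; all packets of the flow use this port."""
    return hash((src_ip, dst_ip)) % n_ports

# A single host talking to a single iSCSI server IP is always one flow:
ports = {trunk_port("10.0.0.10", "10.0.0.50") for _ in range(1000)}
print(len(ports))  # 1 -- the flow always lands on the same port

# Many hosts (different source IPs) do get spread across the member ports:
many = {trunk_port(f"10.0.0.{i}", "10.0.0.50") for i in range(100)}
print(len(many) > 1)
```

This matches what the screen captures below show: a single benchmark stream saturates one port, while mixed production traffic spreads across the trunk.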

While I was running the tests I had an SSH connection open to the VNXe and ran the netstat -i -c command to see what was going on with the trunk and its individual ports. The first screen capture was taken while the 8k sequential read test was running. You can see that all the traffic is going through one port:

The second screen capture was taken while the VNXe was in production and several virtual machines were accessing the disk. In this case the load is balanced randomly between the physical ports:

Load balancing

The VNXe 3300 is an active/active array but doesn’t support ALUA, which means a LUN can only be accessed through one SP. One iSCSI/NFS server can only have one IP address, and that IP can only be tied to one port or trunk. Also, a LUN can only be served by one iSCSI/NFS server, so there will be only one path from the switch to the VNXe. The round robin path selection policy can be enabled on the ESXi side, but this only helps balance the load between the ESXi NICs. Even without the trunk, round robin can’t be used to balance the load across the four VNXe ports.
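For reference, this is roughly how round robin was enabled per device on the ESXi 4.1 side; the naa identifier is a placeholder, and note that on ESXi 5.x the namespace changes to esxcli storage nmp:

```shell
# List devices and their current path selection policies (ESX/ESXi 4.x syntax):
esxcli nmp device list

# Enable the round robin PSP for one device; the naa ID below is a placeholder:
esxcli nmp device setpolicy --device naa.60060160xxxxxxxx --psp VMW_PSP_RR
```

Again, this only balances traffic across the ESXi host’s NICs; it cannot spread a LUN’s traffic across the VNXe’s ports.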

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (e.g. a difference of several hundred IOPS), a third test was run and the results are the average of all three runs.
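The run-averaging rule above can be sketched as follows; the numeric threshold for “similar enough” is my own assumption for illustration:

```python
def average_iops(run_test, threshold: float = 300.0) -> float:
    """Run a test twice; if the two runs differ by more than `threshold`
    IOPS, run a third time. Return the average of all runs."""
    results = [run_test(), run_test()]
    if abs(results[0] - results[1]) > threshold:
        results.append(run_test())
    return sum(results) / len(results)

# Hypothetical similar runs -> average of two:
runs = iter([5100.0, 5150.0])
v1 = average_iops(lambda: next(runs))
print(v1)  # 5125.0

# Hypothetical dissimilar runs -> a third run is taken, average of three:
runs = iter([5100.0, 5600.0, 5400.0])
v2 = average_iops(lambda: next(runs))
print(v2)  # about 5366.67
```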

Same as previous but without NFS results:

Average wait time


The first thing that caught my eye was the difference between the VNXe 28-disk pool and the 7-disk RG on the random write test. A quote from my last post about the pool structure:

When a LUN is created it will be placed on the RAID group (RG) that is the least utilized from a capacity point of view. If the LUN created is larger than the free space on an individual RG, the LUN will be extended across multiple RGs, but there is no striping involved.
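That placement rule can be sketched as follows; the RG names and capacities are hypothetical:

```python
def place_lun(lun_gb: float, rgs: list) -> list:
    """Place a LUN per the quoted rule: start on the RG with the most free
    capacity; if the LUN doesn't fit, spill the remainder onto the next
    least-utilized RGs (concatenation, not striping)."""
    placement = []
    remaining = lun_gb
    for rg in sorted(rgs, key=lambda r: r["free_gb"], reverse=True):
        if remaining <= 0:
            break
        used = min(remaining, rg["free_gb"])
        rg["free_gb"] -= used
        placement.append((rg["name"], used))
        remaining -= used
    return placement

rgs = [{"name": "RG0", "free_gb": 400.0},
       {"name": "RG1", "free_gb": 250.0},
       {"name": "RG2", "free_gb": 150.0}]
p1 = place_lun(100.0, rgs)
print(p1)  # fits entirely on RG0
p2 = place_lun(500.0, rgs)
print(p2)  # spills across multiple RGs, no striping
```

Under this rule a 100GB LUN should land on a single RG whenever any RG has that much free space, which is exactly what the test results below call into question.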

The tests were run on a 100GB LUN, so it should have fit on one RG if the information I got was correct. But comparing the pool results with the single-RG random write results, it seems that even smaller LUNs are divided across multiple RGs.

Another interesting detail is the difference between software versions 2.0.3 and 2.1.0. Looking at these results it is obvious that the software version has a big effect on performance.

NFS storage performance with random writes was really bad. But with the 1k sequential read it surprised me by delivering 30,000 IOPS. Based on these tests I would stick with iSCSI and maybe look at NFS again after the next software version.

Overall the VNXe is performing very well compared to the CX. With this configuration the VNXe is hitting the limits of one physical port. This could be fixed by adding 10Gb I/O modules; it would be nice to run the same tests with those.

We are coming to an end of my hands-on series and I’ll be wrapping up the series in the next post.

[Update 2/17/2012] Updated NFS performance results with VNXe 3100 and OE VNXe 3100 performance


These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.

Hands-on with VNXe 3300 Part 1: Initial setup

A couple of months ago I had a chance to play around with a VNXe 3300 loaded with 30x300GB SAS drives before it was put into production. I was curious to see how the VNXe 3300 would perform compared to a CX4-240. I had already done some performance testing on the CX4-240, so I had half of the data collected. Before I could start testing I had to do the initial setup and configuration, and during the configuration I encountered some issues.

I will be posting the whole experience in seven parts [edited 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Initial setup

EMC has a really good video on YouTube about the initial setup and configuration. I also recommend reading the VNXe 3300 installation guide. I’m not going to go through the installation steps, but basically there are two ways to set up the VNXe: auto discovery or manual configuration. I wasn’t physically at the site where the VNXe was and couldn’t get auto discovery working remotely, so I used the manual configuration method: one of my colleagues inserted the USB drive containing the configuration into the VNXe and powered it on. After the VNXe had completed the network configuration I was able to connect to Unisphere using a web browser.

After the first login the ‘Unisphere Configuration Wizard’ shows up and helps you go through the configuration steps:

  • License Agreement
  • Unisphere Passwords
  • DNS
  • Time Server
  • Disk Configuration
  • EMC Product Support Option
  • EMC Support Credentials
  • Storage Server Options
  • Shared Folder Server
  • iSCSI Server
  • Unisphere Licenses

It takes about 10 minutes to complete the initial setup if all the necessary DNS and time server names and IP addresses are ready before starting, and then the VNXe is basically ready to be used. The only things left to do are to connect the hosts and provision some storage for them. I wanted to play around with the configuration, so I skipped the disk configuration and storage server options. When you exit the configuration wizard a “Unisphere Licenses” prompt appears. There are two options for obtaining the licenses: from file or from Powerlink. I chose the Powerlink option, and this is when I encountered the first problem:

So I wasn’t able to obtain the license file from Powerlink. I didn’t think it was a big deal, maybe a configuration mishap during the setup, so I used EMC’s Support portal to get the license file and uploaded it to the VNXe. Now I was ready to start playing around with the VNXe.


After using Navisphere for several years, Unisphere is a huge UI improvement. I was already familiar with Unisphere from using it on the CLARiiON CX4 and Celerra NX3e.

The Unisphere dashboard gives an overall view of used and free space and also system alerts. Unisphere is really easy to navigate, simple to use and fast (even over a slower link). EMC has posted really good videos on YouTube about Unisphere, so I’m not going to go into details.

EMC Online Support

This is one of the big new features that EMC introduced with the VNXe. From Unisphere the user is able to watch how-to videos, access online training and the community, open support requests, start a live chat with technical support, download software updates and more. Of course I wanted to try out these nice new features. When I clicked anything on the support page I got the same “Unable to connect to EMC Online Support” error that I got when trying to obtain the licenses. So maybe there was some kind of mishap during the initial setup. I checked the EMC Support Credentials and those were OK. I also checked the network configuration on the VNXe and was able to connect to the internet from the PC that I was using Unisphere from. Everything seemed to be fine, but I still got the same error. I decided to open a chat session with technical support from the EMC Support portal. I explained my issue to the technician and got an answer to my question. He told me that it’s a known issue reported by most customers and that they are looking into it and expecting to fix it in a future release.

OK, so I can’t use the support functions on the VNXe for now. Not a big showstopper: I can still open my browser and access all the documentation, support contracts, documents and software upgrades via the EMC Support portal.

Software Update

Browsing the EMC Support portal I found out that there was a new software update (2.0.3 at the time) available for the VNXe. So I downloaded it to my local PC, uploaded it to the VNXe (Settings – More configurations – Update Software), performed a health check and installed the new software update. Again, very easy, and the whole process took about an hour.


In this post I only covered the steps to get the VNXe up and running. At this point the VNXe 3300 would get an excellent grade judging by the steps that I’ve gone through so far. I can’t disagree with EMC using “simple” as one of the adjectives to describe the VNXe, and I would add “easy” to that list. The VNXe is easy and fast to implement, simple to navigate and also very easy to get familiar with.

Even though the built-in VNXe support functions didn’t work, it’s not like it’s missing major functionality; all the same support functions can be found on the support site. Also, the experience that I had using the online chat was very pleasant. The technician knew the product and promptly responded with an answer. I was really hoping that the issue would be fixed in the next software version. Well, now I know it was, and I will cover that in part four.

In the next post (part 2) I will go through the iSCSI and NFS server configurations and how to connect hosts to VNXe.


Have you ever wondered if there is a real performance difference when a LUN is connected using the Microsoft (MS) iSCSI initiator, using raw device mapping (RDM), or when the virtual disk is on VMFS? You might also have wondered if multipathing (MP) really makes a difference. While investigating other iSCSI performance issues I ended up doing some Iometer tests with different disk configurations. In this post I will share some of the test results.

Test environment [list edited 9/7/2011]

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two Dell PowerConnect switches
  • Two 10G iSCSI NICs per ESXi, total of four iSCSI paths.
  • Jumbo frames enabled
  • ESXi 4.1U1
  • Virtual Win 2008 (1vCPU and 4GB memory)

Disk Configurations

  • 4x300GB 15k FC Disk RAID10
    • 100GB LUN for VMFS partition (8MB block size)
      • 20GB virtual disk (vmdk) for VMFS and VMFS MP tests
    • 20GB LUN for MS iSCSI, MS iSCSI MP, RDM physical and RDM virtual tests

The MS iSCSI initiator used the virtual machine’s VMXNET 3 adapter (one or two, depending on the configuration), which was connected to the dedicated iSCSI network through the ESXi host’s 10Gb NIC. MS iSCSI initiator multipathing was configured using the Microsoft TechNet Installing and Configuring MPIO guide. Multipathing for RDM and VMFS disks was configured by enabling the round robin path selection policy. When multipathing was enabled there were two active paths to storage.

Iometer configuration

When I was trying to figure out the best way to test the different disk configurations, I found the “Open unofficial storage performance thread” on VMware Communities. The thread contains an Iometer configuration that tests maximum throughput and also simulates a real-life scenario, and other Community users have posted their results there. I decided to use that Iometer configuration so that I could compare my results with the others.

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time


  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 35 % read /65% write distribution
  • 5 minute run time


  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 30 % read /70% write distribution
  • 5 minute run time
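A quick way to sanity-check the results of these profiles is to convert between IOPS and throughput: MB/s = IOPS × transfer request size / 1024. A small sketch with hypothetical IOPS figures:

```python
def mbps(iops: float, block_kb: float) -> float:
    """Throughput in MB/s given IOPS and the transfer request size in KB."""
    return iops * block_kb / 1024.0

# A hypothetical 10,000 IOPS at the 32KB request size used by the
# Max Throughput profiles:
print(mbps(10_000, 32))  # 312.5 MB/s

# A 1Gb/s link carries roughly 117 MB/s of payload, so at 32KB that is
# about this many IOPS before the link saturates:
print(117 * 1024 / 32)  # 3744.0
```

This is why the sequential 32KB profiles are throughput-bound while the 8KB random profiles are bound by disk IOPS instead.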

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (e.g. a difference of several hundred IOPS), a third test was run and the results are the average of all three runs.


Looking at these results, VMFS performed very well with both single and multipath. Both RDM disks with multipathing are really close to the performance of VMFS. And then there is the MS iSCSI initiator, which gave somewhat conflicting results. You would think that multipathing would give better results than a single path, but that was the case only in the max throughput test. Keep in mind that these tests were run on a virtual machine running on ESXi and that the MS iSCSI initiator was configured to use virtual NICs. I would guess that Windows Server 2008 running on a physical server with the MS iSCSI initiator would give much better results.

Overall, VMFS would be the best choice to put the virtual disk on, but it’s not always that simple. Some clustering software doesn’t support virtual disks on VMFS, and then the options are RDM or MS iSCSI. There could also be limitations on physical or virtual RDM disk usage.


These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.

EFD vs. FC Pools

Our CX4 with FLARE 30 has been in production for about six months now, and we decided to add some more FAST Cache to it. It currently has two mirrored 100GB EFDs configured as FAST Cache, and we just got two new 100GB disks to add to the cache. We’ve also been pondering whether we should add EFDs to the current database pools. Before adding the two new disks to the cache I wanted to run some performance tests on the EFDs. I also wanted to compare the EFD performance with the performance of the current pools that we have in production.

The focus of these tests was to see if the EFDs would have the desired performance advantage over the current pools that we already have in use. Like I mentioned, we already have 100GB of FAST Cache in use, and it is also enabled on the pools that I used to run these tests.

I used Iometer to generate the load and gather the results. In the past I’ve done Iometer tests on storage arrays that were not in any other use. In those cases I’ve used the Iometer setup described in VMware’s Recommendations for Aligning VMFS Partitions document. Using those settings here would have been time consuming and would have generated a huge load on the production CX. Since I was only focusing on comparing a simulated database load on different disk configurations, I decided to run the tests with only one transfer request size.

While I was creating the disks for the test I decided to add a couple more disks and run some additional tests. I was curious to see how a properly aligned disk would really perform compared to an unaligned one, and also what kind of performance difference there was between VMFS and raw disks. Yes, I know that the VMware document mentioned above already shows that an aligned disk performs better than an unaligned one; I just wanted to know what the case was in our environment.

Test environment [list edited 9/7/2011]

  • CX4-240 with 91GB FAST Cache
  • Dell PE M710HD Blade server
  • 2x Dell PowerConnect switches
  • Two 10G iSCSI NICs with a total of four paths between storage and ESXi. Round robin path selection policy enabled for each LUN, with two active I/O paths.
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Disk Configurations

  • 15 FC Disk RAID5 Pool with FAST Cache enabled
    • 50GB LUN for VMFS partition
      • 20GB unaligned virtual disk (POOL_1_vmfs_u)
      • 20GB aligned virtual disk (POOL_1_vmfs_a)
    • 20GB LUN for unaligned RAW disk (POOL_1_raw_u)
    • 20GB LUN for aligned RAW disk (POOL_1_raw_a)
  • 25 FC Disk RAID5 Pool with FAST Cache enabled
    • 50GB LUN for VMFS partition
      • 20GB unaligned virtual disk (POOL_2_vmfs_u)
      • 20GB aligned virtual disk (POOL_2_vmfs_a)
    • 20GB LUN for unaligned RAW disk (POOL_2_raw_u)
    • 20GB LUN for aligned RAW disk (POOL_2_raw_a)
  • 2 EFD Disk RAID1 RAID Group
    • 50GB LUN for VMFS partition
      • 20GB unaligned virtual disk (EFD_vmfs_u)
      • 20GB aligned virtual disk (EFD_vmfs_a)
    • 20GB LUN for unaligned RAW disk (EFD_raw_u)
    • 20GB LUN for aligned RAW disk (EFD_raw_a)

Raw disks were configured to use physical compatibility mode on ESXi.

Unaligned disks were configured using Windows Disk Management and formatted using default values.

Partitions on the aligned disks were created using the diskpart command ‘create partition primary align=1024’, and the partitions were formatted with a 32K allocation size.

Iometer configuration

  • 1 Worker
  • 8KB transfer request size
  • Read/write ratio of 66/34 and 100% random distribution
  • 8 outstanding I/Os per target
  • 4 minute run time
  • 60 sec ramp-up time


Each Iometer test on a specific disk was repeated three times, and the results are the average of those three runs. Keep in mind that the array was running over 100 production VMs during the tests, so these results are not absolute.


When comparing the results on unaligned and aligned disks there are no huge differences, although the POOL_1_raw_u and POOL_2_vmfs_u results jump out from those charts. I did three more test runs for those disks and still got the same results. This might have something to do with the production load that we have on the CX.

Also, the performance differences between raw disks and disks on VMFS were not major, but still noticeable: e.g. the difference in IOPS between POOL_2_vmfs_a and POOL_2_raw_a is over 200, and EFD raw gives about 200 more IOPS than EFD vmfs.

Let’s get to the point. The whole purpose of these tests was to compare FC pool and EFD performance. If you haven’t noticed from the graphs, the difference is HUGE! Do I even have to say more? I think the graphs have spoken. Those 5,000+ IOPS were achieved with only two EFDs. Think about having a whole array full of them.

After these tests my suggestion is to use VMFS datastores instead of raw disks. There are still some cases where you might need to use raw disks with virtual machines, e.g. when building a physical/virtual cluster. Aligning Windows Server disks is not a big deal anymore because Windows Server 2008 does it automatically. If you have some old Windows Server 2003 installations I would suggest checking whether the disks are aligned; there is a Microsoft KB article that describes how to check disk alignment. If the server disks are not aligned you might want to start planning a move of your data to aligned disks. As for the EFDs, the performance gained using them is self-evident. EFDs are still a bit expensive, but think about the price of the arrays and disks needed to deliver the same IOPS that the EFDs can provide. In some cases you need to think more about the price per IO than the price per GB.
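To make the price-per-IO point concrete, here is a back-of-the-envelope comparison; all prices and IOPS figures below are made up for illustration, not vendor quotes:

```python
def cost_metrics(price: float, capacity_gb: float, iops: float):
    """Return (price per GB, price per IOPS) for a disk."""
    return price / capacity_gb, price / iops

# Hypothetical figures: one 100GB EFD vs one 300GB 15k FC disk.
efd_gb, efd_io = cost_metrics(price=2500.0, capacity_gb=100.0, iops=2500.0)
fc_gb, fc_io = cost_metrics(price=500.0, capacity_gb=300.0, iops=180.0)

print(f"EFD: ${efd_gb:.2f}/GB, ${efd_io:.2f}/IOPS")  # EFD: $25.00/GB, $1.00/IOPS
print(f"FC:  ${fc_gb:.2f}/GB, ${fc_io:.2f}/IOPS")    # FC:  $1.67/GB, $2.78/IOPS
```

With numbers like these the FC disk wins on price per GB while the EFD wins on price per IOPS, which is exactly the trade-off to weigh for an IOPS-bound database workload.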


These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.

Unisphere showing LUN in two pools after migration

I just wanted to do a quick post about Unisphere showing a LUN in two pools after migrating the LUN from one pool to another. There is a quick fix for it: close your browser window and log back in to Unisphere. This small glitch raised my blood pressure for a while, and I had already started thinking through all kinds of data corruption issues it might cause.

Here is what happened: I have two pools, Pool 0 and Pool 1. Using the Unisphere GUI I migrated a LUN from Pool 0 to Pool 1, and the migration went through just fine. After the migration I noticed that the migrated LUN was showing up in both pools. I tried refreshing both the ‘Pools’ and ‘Detailed’ views; I also went to different tabs, returned to the ‘Pools/RAID Groups’ view and refreshed again. The LUN was still in both pools, and its properties were identical in both. Luckily, closing the browser window and logging back in to Unisphere solved the issue.

I’ve been using Unisphere since FLARE 30 was released and haven’t seen this kind of behaviour before. I’ve seen Unisphere hang (just showing a white screen) and crash several times, but nothing like this has ever happened.
