Category Archives: ESX

EMC World HOL sneak preview


Once again it’s time for EMC World: new product releases, breakout sessions, labs, networking, wandering around the show floor and of course some fun too. Breakout sessions will be recorded and can be accessed after the conference, but most of the hands-on labs are created just for EMC World. So take advantage of the chance to test and evaluate EMC products in an isolated environment without worrying about messing anything up. It’s a really good opportunity to get hands-on experience and see how things really work.


Available labs

Here is the list of available labs. The bolded ones are the ones I’ll try to take; that adds up to about 11 hours of lab time.

  • LAB01 SRM Suite – Visualize, Analyze, Optimize
  • LAB02 VNX with AppSync Lab: Simple Management, Advanced Protection
  • LAB03 EMC Software Defined Storage (SDS)
  • LAB04 Atmos Cloud Storage: Mature, Robust and Ready to Rock
  • LAB05 EMC NetWorker Backup and Recovery for Next Generation Microsoft Environments
  • LAB06 Flexible and Efficient Backup and Recovery for Microsoft SQL Always-On Availability Groups using EMC NetWorker
  • LAB07 Easier and Faster VMware Backup and Recovery with EMC Avamar For the Storage Administrator
  • LAB08 Automated Backup and Recovery for Your Software Defined Data Center with EMC Avamar
  • LAB09 Taking Backup and Archiving To New Heights with EMC SourceOne and EMC Data Domain
  • LAB10 Optimizing Backups for Oracle DBAs with EMC Data Domain and EMC Data Protection Advisor
  • LAB11 Operational and Disaster Recovery using RecoverPoint
  • LAB12 Achieving High Availability in SAP environments using VMware ESXi clusters and VPLEX
  • LAB13 VPLEX Metro with RecoverPoint: 3-site Solution for High Availability and Disaster Recovery
  • LAB14 Introduction to VMAX Cloud Edition
  • LAB15 Replication for the VMAX Family
  • LAB16 Performance Analyzer for the VMAX Family
  • LAB17 Introduction to the VMAX Family
  • LAB18 Storage Provisioning and Monitoring with EMC Storage Integrator (ESI 2.1) and Microsoft System Center Operations Manager
  • LAB19 EMC|Isilon Compliance Mode Cluster Setup, Configuration, and Management Simplicity
  • LAB20 EMC|Isilon Enterprise Ready with OneFS 7.0 Enhancements
  • LAB21 RSA Cloud Security and Compliance
  • LAB22 VMware vSphere Integration with VNX
  • LAB23 VNX Unisphere Analyzer
  • LAB24 VNX/VNXe Storage Monitoring & Analytics For Your Business Needs
  • LAB25 VNX Data Efficiency
  • LAB26 VNXe Unisphere Administration & Snapshots
  • LAB27 EMC VSPEX Virtualized Infrastructure for End User Computing
  • LAB28 Collaborative Big Data Analysis with the Greenplum Unified Analytics Platform
  • LAB29 Manage Your vCloud Suite Applications with VMware vFabric Application
  • LAB30 Discover VMware Horizon Workspace
  • LAB31 Deploy and Operate Your Cloud with the VMware vCloud Suite

The setup

The labs run on VCE Vblock-based infrastructure in EMC’s North Carolina data center. The storage serving the content is VNX and XtremIO, and to tie it all together vSphere 5.1 and vCloud Director are also utilized.

There will be two screens at the front of the HOL where the live performance of the environment can be monitored. There will also be product specialists demoing and answering questions about the HOL cloud infrastructure.


There will be 200 seats for HOL attendees.


Location and opening hours

The labs are located on the right hand side of the EMC Village (2nd floor of the Sands Expo Hall).


Lab opening hours:

  • Monday: 11:00 AM – 9:00 PM
  • Tuesday: 7:00 AM – 6:30 PM
  • Wednesday: 7:00 AM – 5:00 PM
  • Thursday: 7:00 AM – 2:00 PM

Doors close 30 minutes before the end of each day.


New Year, New Continent, New Role


Some of you might have noticed that lately I haven’t been as active on social media as before. There are a couple of reasons for that. One busy factor not mentioned in the title has been my involvement in a VNX implementation project that has taken a lot of my time. The goal of that project was to replace CX/MirrorView/SRM with VNX/RecoverPoint/SRM, and it didn’t go that smoothly. The project is now finalized and everything worked out in the end. I learned a lot during the project and have some good ideas for future blog posts, e.g. RecoverPoint journal sizing.


New Continent

In February 2008 my wife and I packed everything we had, sold our condo in Finland and moved to Atlanta because of my internal transfer. We moved to the Atlanta suburbs and really didn’t know many people there. The initial plan was to stay for two years and then come back home. Well, those two years became almost five. During that time we got very close with our neighbors and got to know lots of other great people from the same neighborhood. It was our home and we felt like we belonged to the community. The two most amazing things that happened during that time were the births of our children. It was hard to be so far from “home” and family in the beginning. We saw family once a year when we visited Finland, and almost all of our closest family members visited us at least once. We enjoyed our time in the US but then came the time to move back to Finland. Once again everything we had was packed into a container and shipped to Finland. I had mixed feelings about the move. I was excited to go back “home” but then again I was sad to leave so many good friends behind. Driving to the Atlanta airport one last time wasn’t easy at all. All the good memories rushed through my mind. It was mid-December 2012 and we moved back to Finland, to the snow and the cold.

On my way to the office

New Role

This spring marks nine years with my current company. Right after I joined I started my virtualization journey with GSX and then with ESX 2.0. From that point on my main focus has been on virtualization and storage. I’ve been working as an architect and have been involved in taking ESX from version 2 to 5, as well as implementing new features as they have been released, e.g. SRM and View. I’ve also gotten my hands dirty implementing an EMC CX300, upgrading it to a CX3-40C and replacing it with a CX4-120 and CX4-240. As you might have noticed from my VNXe posts, I’ve done some work with those too. And of course with the latest project I also had a chance to get some hands-on experience with VNX and RecoverPoint. In my new role I’ll be managing a team that is responsible for developing and maintaining the company’s whole infrastructure, including virtualization, networking, storage, Windows/Linux servers and so forth. This is the same team that I’ve been a part of in past years. I’m looking forward to the new challenges the new role brings to my desk, and don’t worry, I’ll still be involved with the technical stuff and will continue blogging about virtualization and storage. There might be some 2.5” form-factor VNXe and VNX/RecoverPoint posts coming out soon.

My first ESX installation media

Thank you, all my followers, for the year 2012 and I hope this year is going to be even better. I’m happy to see that my posts in the past year have been helpful.


Changing round robin IO operation limit on ESXi 5


After I published the post VNXe 3300 performance follow up (EFDs and RR settings) I started seeing visitors landing on my blog through search engines searching for “IO operation limit ESXi 5”. In the previous post I only described how the IO operation limit can be changed on ESX 4 using PowerCLI. The commands on ESXi 5 are a bit different, so this post describes how it can be done on ESXi 5 using both the ESXi Shell and PowerCLI.

Round Robin settings

The first thing to do is to change the datastore path selection policy to RR (from the vSphere client: select the host – Configuration – Storage Adapters – iSCSI software adapter – right-click the device and select Manage Paths – for Path Selection choose Round Robin (VMware) and click Change).
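
If you prefer to stay in PowerCLI instead of clicking through the GUI, the path selection policy can also be changed with Set-ScsiLun. This is just a minimal sketch; the host name and the naa identifier are placeholders for your own environment:

# Connect to the server first (as in step 1 below)
Connect-VIServer -Server [servername]

# Set the path selection policy of a single device to Round Robin
Get-VMHost [hostname] | Get-ScsiLun -CanonicalName "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" | Set-ScsiLun -MultipathPolicy "RoundRobin"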

Changing IO operation limit using PowerCLI

1. Open PowerCLI and connect to the server

Connect-VIServer -Server [servername]

2. Retrieve esxcli instance

$esxcli = Get-EsxCli

3. Change the device IO Operation Limit to 1 and set the Limit Type to iops. The [deviceidentifier] can be found in the vSphere client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

$esxcli.storage.nmp.psp.roundrobin.deviceconfig.set($null,"[deviceidentifier]",1,"iops",$null)

4. Check that the changes were completed.

$esxcli.storage.nmp.psp.roundrobin.deviceconfig.get("[deviceidentifier]")
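
If there are many devices to change, setting the limit one by one gets tedious. Here is a rough sketch of how the same change could be applied to every Round Robin device on the host in one loop. It reuses the positional set() call shown above and assumes the device list exposes Device and PathSelectionPolicy properties, so treat it as a starting point and test it on a non-production host first.

# Sketch: apply the IOPS=1 Round Robin limit to every device already using Round Robin
$esxcli = Get-EsxCli
foreach ($device in $esxcli.storage.nmp.device.list()) {
    if ($device.PathSelectionPolicy -eq "VMW_PSP_RR") {
        # Same positional arguments as above: bytes, device, iops, type, useANO
        $esxcli.storage.nmp.psp.roundrobin.deviceconfig.set($null, $device.Device, 1, "iops", $null)
    }
}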

Changing IO operation limit using ESXi Shell

1. Login to ESXi using SSH

2. Change the device IO Operation Limit to 1 and set the Limit Type to iops. The [deviceidentifier] can be found in the vSphere client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=[deviceidentifier]

3. Check that the changes were completed.

esxcli storage nmp psp roundrobin deviceconfig get --device=[deviceidentifier]


Nested ESXi with swap to host cache on VMware Player


Just after vSphere 5 was released I wrote a post about running ESXi 5 on VMware Player 3. It was an easy way to get to know ESXi 5 and create a small home lab on your laptop. The issue with running multiple ESXi instances on my laptop is the lack of memory: I have 8GB, so that sets some limitations.

After VMware Player 4 was released on January 24 I upgraded my Player and started to play around with it. I found out that it was really easy to run nested ESXi hosts with the new Player version. That alone wouldn’t help much because I still had only 8GB of memory on my laptop. But I also had an SSD in the laptop, and I knew that ESXi 5 has a feature called “swap to host cache” which allows an SSD to be used as swap for the ESXi host. So I started testing whether it would be possible to run ESXi on the Player, configure swap to host cache to make use of the SSD, and then run nested ESXi hosts on top of the first ESXi. And yes, it is possible. Here is how to do it.

Installing the first ESXi

The ESXi installation follows the steps described in my previous post. The only addition is that the “Virtualize Intel VT-x/EPT or AMD-V/RVI” option should be selected for the processor so that nested ESXi hosts can be run. I also added a 25GB disk for the host cache and a 100GB disk for nested VMs.

Configuring the swap to host cache on the first ESXi

The first step before installing any nested VMs is to configure swap to host cache on the ESXi that is running on VMware Player. Duncan Epping has a really thorough post (Swap to host cache aka swap to SSD?) that describes how the cache works and how it can be enabled. Duncan’s post links to William Lam’s post (How to Trick ESXi in seeing an SSD Datastore), which I followed to get the ESXi to actually show the virtual disk as an SSD datastore. I then followed Duncan’s instructions to enable the cache. So I now have ESXi 5 running on VMware Player on my laptop with 23GB of SSD host cache.

Installing nested VMs

When creating a nested VM to run ESXi, the default guest operating system selection can be used.

After the VM is created, the guest operating system type needs to be changed to Other/VMware ESXi 5.x.

Host cache at work

To test it I created three 8GB VMs for the ESXi hosts and also deployed the vCenter appliance, which likewise has 8GB of memory configured. I then started installing the ESXi hosts and could see that the host cache was being utilized.


Ask The Expert wrap up


It has now been almost two weeks since the EMC Ask the Expert: VNXe front-end Networks with VMware event ended. We had a couple of meetings beforehand where we discussed and planned the event, but we really didn’t know what to expect from it. Matt and I were committed to answering the questions during the two weeks, so it was a bit different from a normal community thread. Looking at the number of views the discussion got, we now know that it was a success. During the two weeks the event was active, the page got more than 2,300 views. Several people asked us questions and opinions. As a summary, Matt and I wrote a document that covers the main concerns discussed during the event. In it we look into VNXe HA configurations and link aggregation, and also do a quick overview of the ESX-side configuration:

Ask the Expert Wrap ups – for the community, by the community

I was really excited when I was asked to participate in a great event like this. Thank you Mark, Matt and Sean, it was great working with you guys!


EMC Ask The Expert


You may have already visited the EMC Support Community Ask The Expert Forum page or read posts about it by Matthew Brender, Mark Browne or Sean Thulin. The EMC Ask The Expert series is basically an engagement between customers, partners, EMC employees and whoever else wants to participate. The series consists of several topics and there are several ways to take part (e.g. online webinars, forum conversations).

As Matt, Mark and Sean have already mentioned in their posts, the first Ask The Expert event started on January 16 and runs until January 26. The first event is about VNXe network configuration and troubleshooting. Matthew and I have been answering questions for a bit over a week and will continue until the end of this week. Just as I was writing this post we passed 1,500 views on the topic.

How is this different from any other EMC support forum topic?

Both Matt and I are committed to monitoring and answering this Ask The Expert topic for the period of two weeks. We will both get email alerts whenever someone posts on the topic and we will try to answer questions the same day. Matt will be answering as an EMC employee and I will be answering as a customer.

The topic is about VNXe networking, but that doesn’t mean you can’t ask questions about other VNXe subjects. The topic is scoped this way to keep the thread fairly short: if questions other than networking ones are raised, we will start a new topic on the forum and continue the conversation in that thread.

There are still four full days to take advantage of my and Matt’s knowledge of VNXe. The event ends on Friday, but that doesn’t mean we won’t answer VNXe-related questions on the forums anymore. It just means that after Friday you might not get your questions answered as quickly as you would during the event, while both of us are committed to interacting with the topic.

I would encourage anyone to ask questions or raise concerns about any VNXe topic on the EMC support forums. If you don’t have an ECN (EMC Community Network) account I would recommend creating one and getting involved if you are working with EMC products. If you are an EMC customer and have a Powerlink account, you can log in to ECN using that account.

If you have a question about VNXe and for some reason don’t want to post it on the ECN forum, just leave a comment on this post and I will raise the question in the Ask The Expert thread. We are also monitoring the #EMCAskTheExpert tag on Twitter and will pick up questions from there too.


VADP backup fails to remove snapshot


I have noticed that sometimes after a vStorage APIs for Data Protection (VADP) backup, the virtual machine (VM) snapshot is not deleted even when the backup completes successfully. This can cause a chain reaction that leaves several snapshot vmdk files on the datastore, and eventually the datastore can run out of space. After the first failed snapshot removal, VADP backups continue working normally except that the number of snapshot vmdk files keeps growing. In some cases the failed snapshot removal leaves an error message in the vCenter events, but not always.

How to identify the problem

As I already mentioned, the issue can be spotted from the growing number of snapshot vmdk files on the datastore. If you are monitoring VM snapshots, you should be able to notice the situation before the datastore runs out of space.

Another thing to check is whether the VM has any snapshots. While a VADP backup is running there should be a “Consolidate Helper” snapshot active, and after the backup is done it should be deleted. If no backup is running and this snapshot exists, that confirms there is an issue with the snapshots.
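
If you want to check for leftover snapshots across the whole environment instead of one VM at a time, a quick PowerCLI report can help. This is only a sketch and assumes the helper snapshots show up with “Consolidate” in their name; adjust the filter to whatever your backup software actually leaves behind:

# Sketch: list VMs that still have a Consolidate Helper (or any other leftover) snapshot
Connect-VIServer -Server [vcentername]
Get-VM | Get-Snapshot | Where-Object { $_.Name -like "*Consolidate*" } | Select-Object VM, Name, Created, SizeMB

Keep in mind that this only catches snapshots vCenter knows about; the orphaned snapshot vmdk files described below will not show up here, so datastore-level monitoring is still needed.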

There could also be an “Unable to access file <unspecified filename> since it is locked” error shown in the VM’s task details.

I’ve also seen cases where, even when the VADP-initiated snapshot removal is successful, the “Consolidate Helper” snapshot and the snapshot vmdk files still exist.

At this point I would suggest reading Ivo Beerens’ blog post about a similar issue with snapshots. He describes a solution for the “Unable to access file <unspecified filename> since it is locked” error. It didn’t work in my case, so I had to find another way to solve the issue.

After the orphaned “Consolidate Helper” snapshot is manually removed, vCenter does not show any snapshots for the VM, and checking from the ESX console also confirms that there are no snapshots; however, all the snapshot vmdk files are still present.

How to fix the problem

The first thing is to schedule downtime for the VM, because it needs to be shut down to complete these steps. Because the snapshot files keep growing, there should be enough free space on the datastore to accommodate the snapshots until this fix can be performed.

The next thing is to make sure that the VADP backup is disabled while the following operations are performed. Running a VADP backup while working on the virtual disks can really mess up the snapshots.

After the previous steps are covered and the VM is shut down, make a copy of the VM folder. This is the first thing I do whenever I have to work with vmdk files, just in case something goes wrong.

The fix is to clone the vmdk file with its snapshots to a new vmdk using the vmkfstools command (the VM I was working on was on ESX 4.1, so vmkfstools was still available) to consolidate the snapshots, and then remove the current virtual disk(s) from the VM and add the new cloned disk(s) to it. There are some considerations before cloning the vmdks:

Don’t rely on the vmdk file with the highest number (e.g. [servername]-000010.vmdk) being the latest snapshot. Always check from the VM properties, or from the vmx file if using the command line.

VM properties:

[servername].vmx from command line:

If you plan to work with the copied vmdk files, keep in mind that the “parentFileNameHint=” row in the vmdk descriptor points to the original location of the parent. So before you clone the copied vmdk file, you should change that path to point to the location of the copy.

Now that the latest snapshot vmdk file has been identified, the clone can be done with the vmkfstools -i command from the command line:

vmkfstools -i [servername]-0000[nn].vmdk [newname].vmdk

After the clone is done, the old virtual disk can be removed from the VM (I used the “remove from virtual machine” option, not the delete option) and the new one can be added. If the VM has more than one virtual disk, this procedure has to be done for all of them. After confirming that the VM starts normally and that all the data is intact, the unused vmdks can be removed. In my case the VM had two virtual disks, both with several snapshot vmdks, so I used Storage vMotion to move the VM to another datastore and then deleted the folder that was left on the old datastore.
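
The final Storage vMotion step can also be driven from PowerCLI. A minimal sketch, with the VM and target datastore names as placeholders:

# Sketch: Storage vMotion the repaired VM to a clean datastore, then remove the old folder manually
Get-VM "[servername]" | Move-VM -Datastore (Get-Datastore "[newdatastore]")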


VNXe 3300 performance follow up (EFDs and RR settings)


In my previous post about VNXe 3300 performance I presented results from the performance tests I had run with the VNXe 3300. I will use those results as a comparison for the new tests that I ran recently. In this post I compare the performance with different Round Robin policy settings. I also had a chance to test the performance of EFD disks on the VNXe.

Round Robin settings

In the previous post all the tests were run with the default RR settings, which means that ESX sends 1000 commands down one path before changing paths. I observed that with the default RR settings I was only getting the bandwidth of one link on the four-port LACP trunk. I got feedback from Ken advising me to change the default RR IO operation limit from 1000 to 1 to get two links’ worth of bandwidth from the VNXe. So I wanted to test what kind of effect this change would have on performance.

Arnim van Lieshout has a really good post about configuring RR using PowerCLI, and I used his examples for changing the IO operation limit from 1000 to 1. If you are not confident running the full PowerCLI scripts Arnim introduced in his post, here is how the RR settings for an individual device can be changed using the GUI and a couple of simple PowerCLI commands:

1. Change the datastore path selection policy to RR (from the vSphere client: select the host – Configuration – Storage Adapters – iSCSI software adapter – right-click the device and select Manage Paths – for Path Selection choose Round Robin (VMware) and click Change)

2. Open PowerCLI and connect to the server

Connect-VIServer -Server [servername]

3. Retrieve esxcli instance

$esxcli = Get-EsxCli

4. Change the device IO Operation Limit to 1 and set the Limit Type to iops. The [deviceidentifier] can be found in the vSphere client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

$esxcli.nmp.roundrobin.setconfig($null,"[deviceidentifier]",1,"iops",$null)

5. Check that the changes were completed.

$esxcli.nmp.roundrobin.getconfig("[deviceidentifier]")
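
To verify the setting across all devices on an ESX 4 host instead of checking them one by one, something like the following should work. It is only a sketch and assumes the 4.x esxcli namespace exposes a device list with Device and PathSelectionPolicy properties the same way the 5.x one does:

# Sketch: report the Round Robin IO operation limit for every Round Robin device (ESX 4.x)
$esxcli = Get-EsxCli
$esxcli.nmp.device.list() | Where-Object { $_.PathSelectionPolicy -eq "VMW_PSP_RR" } | ForEach-Object { $esxcli.nmp.roundrobin.getconfig($_.Device) }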

Results 1

For these tests I used same environment and Iometer settings that I described on my Hands-on with VNXe 3300 Part 6: Performance post.

Results 2

For these tests I used the same environment except instead of virtual Win 2003 I used virtual Win 2008 (1vCPU and 4GB memory) and the following Iometer settings (I picked up these settings from VMware Community post Open unofficial storage performance thread):

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 35 % read /65% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 30 % read /70% write distribution
  • 5 minute run time

[Updated 11/29/11] After I had published this post, Andy Banta gave me a hint on Twitter:

You might squeeze more by using a number like 5 to 10, skipping some of the path change cost.

So I ran a couple more tests, changing the IO operation limit to values between 5 and 10. With the 28-disk pool there was no big difference between using 1 and 5-10. With the EFDs the magic number seemed to be 6, and with that I managed to get 16 MBps and 1100 IOps more out of the disks with specific workloads. I added the new EFD results to the graphs.


Conclusions

Changing the RR IO operation limit from the default of 1000 IOs to 1 IO really makes a difference on the VNXe. On random workloads there is not much difference between the two settings, but with sequential workloads the difference is significant: sequential write IOps and throughput more than double with certain block sizes when using the 1 IO setting. If you have ESX hosts connected to a VNXe with an LACP trunk, I would recommend changing the RR IO operation limit from 1000 to a small value (1, or 5-10 per the update above). As I already mentioned, Arnim has a really good post about configuring RR settings using PowerCLI. Another good post about multipathing is A “Multivendor Post” on using iSCSI with VMware vSphere by Chad Sakac.

Looking at the results it is obvious that the EFD disks perform much better than the SAS disks. On sequential workloads the 28-disk SAS pool’s performance is about the same as the 5-disk EFD RG’s, but on random workloads the EFDs perform about two times better than the SAS pool. There was no other load on the disks while these tests were run, so under additional load I would expect the EFDs to perform much better on sequential workloads as well. Better performance doesn’t come without a bigger price tag: EFD disks are still over 20 times more expensive per TB than SAS disks, but then again SAS disks are about 3 times more expensive per IO than EFDs.

Now if only EFDs could be used as cache on VNXe.

Disclaimer

These results reflect the performance of the environment the tests were run in. Results may vary depending on the hardware and how the environment is configured.


VNXe Operating Environment 2.1.1


[Edited 10/12/2011] Apparently EMC has pulled back VNXe OE MR1 SP1 (2.1.1.14913) and it is not available for download anymore. According to EMC, a new image and release notes will be available soon.

EMC has released VNXe operating environment version 2.1.1.14913 (2.1 SP1) for the VNXe 3100 and 3300. And how did I find out about the new update? Well, my VNXe told me about it:

Release notes and software are available on supportbeta.emc.com. The first thing I noticed in the short description of the upgrade package was that the VNXe OE has to be 2.1.0.14097 or higher before upgrading to 2.1.1. I couldn’t find any mention of this in the release notes. The only mention of a mandatory upgrade is that the VNXe should be upgraded to version 2.0.2 or later within 42 days of the initial installation, or otherwise the system has to be powered off before the upgrade (KB article emc265195). I also mentioned this issue in my previous post about the VNXe software update. So I contacted support via chat and quickly got confirmation that the VNXe has to be on 2.1.0.14097 before upgrading to 2.1.1.

Here is a quick peek at the new features, enhancements and fixes. Full descriptions can be found in the release notes.

New features and enhancements

6+1 RAID5 is now supported on VNXe 3100 with SAS drives and user-defined pools. Automatic configuration will still use 4+1 RAID5 for SAS drives.

EFD drives and 1TB NL SAS drives are now supported on VNXe 3300 DPE and DAE.

There have also been improvements to Unisphere performance.

Fixed

  • Looping problem that might cause an SP to reboot when a network or power cable is disconnected
  • Issues with email alerts
  • Issues with password reset button causing SP to reboot
  • Error with hidden shared folders
  • VMFS datastore creation issues

There is also a long list of known problems and limitations. A couple of those concern VMware integration and are good to keep in mind:

  • A VMFS datastore created from the VNXe will be VMFS-3 and use an 8MB block size.
  • A manual rescan for storage is required after deleting a datastore from a standalone ESX server (see the PowerCLI sketch below).
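
For that manual rescan, a PowerCLI one-liner can save a trip to the vSphere client. A sketch, with the host name as a placeholder:

# Sketch: rescan HBAs and VMFS volumes on a host after deleting a datastore
Get-VMHost "[hostname]" | Get-VMHostStorage -RescanAllHba -RescanVmfs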

Hands-on with VNXe 3300 Part 7: Wrap up


When I started writing this series my first plan was to make just four quick posts about my experience with the EMC VNXe 3300. After I started going into more detail on the configuration I realized that I couldn’t fit everything I wanted to cover into four posts, so I ended up adding three more posts to the series. Still, this series only touches the surface of the VNXe 3300; there are many functionalities that I didn’t go through or even mention. That was also my intention with the series: I wanted to write about my own experience and look into the features that I would actually implement.

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up


Simple, Efficient, Affordable

Those are the words EMC uses to market the VNXe, and I can mostly agree that they are accurate adjectives for it. In the first part I also wanted to add the adjective “easy” to that list. A user can do the initial setup and have the VNXe operational in less than an hour, depending on the software version. The Unisphere UI layout is also very user friendly and illustrative, and configuration and software updates are easy and simple.


Customers buying a VNXe based only on the marketing material might face a big surprise when looking into the actual memory configuration. Yes, the VNXe has 12GB of memory per SP, but only 256MB is dedicated to read cache and 1GB to write cache.

Configuration considerations

Even though it is easy and simple to get the VNXe up and running and to start provisioning storage, this doesn’t mean the planning phase can be skipped. A user can easily end up in a really bad situation where the only way out is to delete all datastores and reconfigure properly. Creating only one iSCSI server and putting all datastores on it creates a situation where all the I/O goes through one SP while the other SP sits idle. Depending on the ESX iSCSI settings, only one network port on the VNXe might be utilized even if a four-port trunk is configured. Fixing this problem is not as easy as creating it: the VNXe doesn’t allow changing a datastore’s iSCSI server after the datastore is created, so to assign a different iSCSI server to a datastore it has to be deleted and recreated. This is, again, one issue that I’m hoping will be fixed.

When using four 1Gb ports my suggestion would be to configure NIC aggregation on the VNXe as I described in part 2. For the ESX configuration I would suggest reading the detailed comment Ken posted on part 6 about the ESX iSCSI configuration. As for the VNXe iSCSI and datastore configuration, I ended up creating an equal number of datastores on each SP and also dedicating one iSCSI server per datastore to get the most out of the four-port trunk.

Issues

The issues I faced during the configuration were mostly minor usability flaws, and some of them were already fixed in the latest software version. The biggest issue I found was that the VNXe had to be powered off before a software update if it had been running for more than 42 days. I’ve discussed these issues with EMC and hopefully they will be fixed in future releases.

Conclusions

Despite all the criticism, I think the VNXe 3300 is a great product, and it will be even better once the few small flaws are fixed. I’m really looking forward to seeing what kind of new features will be introduced in future software releases. Chad Sakac hinted on his blog that FAST VP support is coming to the VNXe at some point. He also mentioned that VAAI (file) and SRM support will be coming out later this year.

I can see some new VNXe 3300 related blog posts in my near future, but I think it is time to close this series and keep the new posts separate. If you have any questions about my experience with the VNXe, or other general questions about it, please leave a comment.

