Tag Archives: Unified storage

Hands-on with VNXe 3200: Initial setup

Here I go again. About two years ago I started writing a series of blog posts about my hands-on experience with the newly released VNXe 3300. At that time there wasn’t much documentation out there, so I did lots of testing and had to make my own “best practices”. That blog series is one of the reasons why I now have a chance to test and write about the VNXe 3200.

On June 10 Chad published this blog post: “Summer Gift Part 2 – 10 VNXe arrays free to play with for volunteers!”. I was surprised to find myself mentioned in the post, and even more surprised to find out that one of the devices was reserved for me. So while I was on vacation last week, the test unit arrived:

  • VNXe 3200 – 2U Form Factor/12 Drive DPE
    • 3.5” Drives
    • 6 x 600GB 15K Pack
    • 3 x 100GB eMLC Flash Drives (for FAST auto-tiering)
    • 2 x 100GB eMLC Flash Drives (for FAST Cache)
    • 9TB Raw Capacity

VNXe 3200 is basically a combination of the new VNX MCx multicore technology and the old VNXe OE/Unisphere. I won’t go through all the new features but there are a few worth mentioning:

  • Multi-core RAID (MCR), Multi-core Cache (MCC), Multi-core Flash (MCF)
  • “Active/Active” file
  • Single container for block and file
  • Linux-based platform

More details about the new features can be found on the EMC VNXe Series website.


Well, there’s not much to write about this: install the rack rails, lay the DPE on them, connect the cables, and the VNXe was ready for configuration. However, this was something I hadn’t seen before with the previous VNXe:


A quick look at the installation documentation revealed it to be a power adapter for the front LED lights:

Initial setup 

There have been several new versions of the VNXe OE since my first VNXe blog post, but the “Unisphere Configuration Wizard” still looks similar. Going through the wizard takes about 10 minutes, but I skipped most of the configurations as usual: I prefer to upgrade a new device to the latest software version before doing any configuration. After the configuration wizard is completed you will get a popup directing you to the EMC support website, where the latest version can be downloaded.



Even though there is totally new hardware running under the hood, Unisphere still looks and feels the same as the latest software version on the VNXe 3100/3150/3300. I still agree that VNXe is simple to install and configure. Of course, I haven’t configured any storage pools or iSCSI servers yet; I’ll cover those in the next posts. Performance and some of the new features will also be reviewed later.

Software version 2.0.3:



Software version 2.4.2:


Software version 3.0.1:





Hands-on with VNXe 3100

On the Friday before Christmas we ordered a VNXe 3100 with 21 600GB SAS drives, and it was delivered in less than two weeks. Exactly two weeks after the order was placed we had the first virtual machine running on it.

Last year I wrote a seven-post series about my hands-on experience with the VNXe 3300. This is the first VNXe 3100 that I’m working with, and also my first VNXe unboxing. With the 3300s I relied on my colleagues to do all the physical installation because I was 5000 miles away from the datacenter. With this one I did everything myself, from unboxing to installing the first VM.

My previous posts are still valid, so in this post I’ll concentrate on the differences between the 3300 and the 3100. Will Huber has really good posts on unboxing and configuring the VNXe 3100. During the installation I also found a couple of problems with the latest OE; I will describe one of the issues in this post, and the bigger one will get a separate post.

I will also do a follow-up post about the performance differences between the 3300 and the 3100 once I get all the tests done. I’m also planning to do some tests with the Fusion-io ioTurbine, and I will post those results when I have the card and the tests are done.

Initial setup

The VNXe and the additional DAE came in two pretty heavy boxes. Which box to open first? Well, the box that you need to open first tells you so:

So, like a kid on Christmas day, I opened the boxes, and the first thing I saw was a big poster explaining the installation procedure. The rack rails are easy and quick to install. The arrays are quite heavy, but I managed to lift them onto the rack by myself.

After doing all the cabling it was time to power on the VNXe. Before doing this you need to decide how you are going to do the initial configuration (assigning the IP). In my previous post I mentioned that there are two options for doing it with the VNXe ConnectionUtility: auto discovery or manual configuration. With manual configuration the ConnectionUtility basically creates a text file on a USB stick that is inserted into the VNXe before the first boot. A faster way is to skip the download and installation of the 57 MB package and create the file manually. So get a USB stick, create an IW_CONF.txt file on it, and add the following content, replacing the [abcd] variables with your own:
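For illustration only, the file is plain key=value text with one setting per line, roughly along these lines. The key names here are reconstructed from memory and may not match your OE version exactly; the installation guide, or a file generated by the ConnectionUtility itself, has the authoritative format:

```text
FriendlyName=[abcd-system-name]
ManagementAddress=[abcd-management-ip]
SubnetMask=[abcd-subnet-mask]
GatewayAddress=[abcd-gateway-ip]
```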







After that, just insert the USB stick into the VNXe and power it on. The whole process of unboxing, cabling and powering on took me about an hour and a half.

While the VNXe was starting up I downloaded the latest Operating Environment so that I was ready to run the upgrade once the system was up and running. After the first login the ‘Unisphere Configuration Wizard’ shows up and you need to go through several steps. I skipped some of those (creating an iSCSI server, creating a storage pool, licensing) and started the upgrade process (see my previous post).

After the upgrade was done I logged back in and saw the prompt about the license. I clicked the “obtain license” button, a new browser window opened, and by following the instructions I got the license file. I’ve heard complaints about IE and the licensing page not working; the issue might be the browser’s popup blocker. The license page also states that the popup blocker should be disabled.

After a quick LACP trunk configuration on the switch and the iSCSI configuration on the ESXi side, I was ready to provision some storage and do some testing.

Issues that I found

During the testing I found an issue with the MTU settings when using iSCSI: a problem that causes datastores to be disconnected from ESXi. Even after reverting to the original MTU settings, the datastores can’t be reconnected to ESXi and new datastores can’t be created. I will describe this in a separate post.

The other issue that I found was more cosmetic. When there are two iSCSI servers on the same SP and the first VMware storage is provisioned on the second iSCSI server, Unisphere gives the following error:

For some reason the VNXe can’t initiate the VMFS datastore creation process on ESXi. But the LUN is still visible on ESXi, and the VMFS datastore can be created manually. So it’s not a big issue, but still annoying.


There seem to be small improvements in the latest Operating Environment. Provisioning storage to an ESX server feels a bit faster, and the VMFS datastore is now created every time when there is only one iSCSI server on the SP. In the previous OE the VMFS datastore was created only about 50% of the time when storage was provisioned to ESXi.

In the previous posts I have mentioned how easy and simple the VNXe is to configure, and the 3100 is no different from the 3300 in that respect. Overall the VNXe 3100 seems to be a really good product considering its fairly low price. A quick look at the performance tests shows quite similar results to the ones that I got from the 3300. I will do a separate post comparing the performance of these two VNXes.

It is good, though, to keep in mind the difference between marketing material and reality. The VNXe 3300 is advertised as having 12GB of memory per SP, but in reality it has only 256MB of read cache and 1GB of write cache. The 3100 is advertised as having 8GB of memory, but it has only 128MB of read cache and 768MB of write cache.
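To put those numbers in perspective, here is a quick back-of-the-envelope calculation (a Python sketch using only the figures quoted above):

```python
# Advertised memory per SP vs. configurable read/write cache, in MB
models = {
    "VNXe 3300": {"advertised_mb": 12 * 1024, "read_mb": 256, "write_mb": 1024},
    "VNXe 3100": {"advertised_mb": 8 * 1024, "read_mb": 128, "write_mb": 768},
}

for name, m in models.items():
    cache_mb = m["read_mb"] + m["write_mb"]
    share = 100 * cache_mb / m["advertised_mb"]
    print(f"{name}: {cache_mb} MB cache out of {m['advertised_mb']} MB advertised ({share:.1f}%)")
```

So on both models roughly a tenth of the advertised memory is actually available as read/write cache; the rest is used by the system itself.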

Hands-on with VNXe 3300 Part 4: Storage pools

When EMC announced that the VNXe would also utilize storage pools, my first thought was that it would be similar to what the CX/VNX has: a storage pool would consist of five-disk RAID 5 groups, and LUNs would be striped across all of these RAID groups to utilize all spindles. After some discussions with EMC experts I found out that this is not how the pool works in the VNXe. In this part I will go a bit deeper into the pool structure and also explain how a storage pool is created.

Hands-on with VNXe 3300 series [list edited 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Pool Structure

The VNXe 3300 can be furnished with SAS, NL-SAS or Flash drives. The one that I was configuring had 30 SAS disks, so there were two options when creating storage pools: 6+1-drive RAID 5 groups or 3+3 RAID 1/0 groups. I chose to create one big pool with 28 disks (four 6+1 RAID 5 groups) and one hot spare disk (EMC recommends one hot spare for every 30 SAS disks). EMC also recommends not putting any I/O-intensive load on the first four disks, because the PSL (Persistent Storage Layout) is located on them. I wanted to test the storage pool performance with all the available disks, so I ignored this recommendation and used the first four disks in the pool too.

When a LUN is created it is placed on the RAID group (RG) that is least utilized from a capacity point of view. If the LUN is larger than the free space on an individual RG, the LUN is extended across multiple RGs, but there is no striping involved. So depending on the LUN size and pool utilization, a new LUN could reside on either one RG or several RGs. This means that only one RG is used for sequential workloads, while a random workload could be spread over several RGs. If disks are added to the storage pool, those newly added RGs are the least utilized and will be used first when new LUNs are created. So a storage pool on the VNXe can be considered more of a capacity pool than a performance pool.
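To make that description concrete, here is a small Python sketch of the capacity-based placement as I understood it. This is my own model, not EMC code, and note that the striping correction later in this post supersedes it:

```python
def place_lun(rgs, lun_gb):
    """Model of the placement described above: a new LUN goes to the RAID
    group with the most free capacity and, if it does not fit, spills over
    (concatenates, no striping) onto further RAID groups."""
    placement = []
    remaining = lun_gb
    while remaining > 0:
        rg = max(rgs, key=lambda r: r["free_gb"])  # least-utilized RG
        if rg["free_gb"] <= 0:
            raise RuntimeError("pool is full")
        slice_gb = min(rg["free_gb"], remaining)
        rg["free_gb"] -= slice_gb
        remaining -= slice_gb
        placement.append((rg["name"], slice_gb))
    return placement

# Four RAID groups at different fill levels (hypothetical numbers)
pool = [{"name": f"RG{i}", "free_gb": f} for i, f in enumerate([500, 800, 800, 300])]
print(place_lun(pool, 600))   # fits entirely in one RG
print(place_lun(pool, 1200))  # spills across several RGs
```

The practical consequence of this model is visible in the second call: a large LUN is concatenated, not striped, so only the RG holding the currently accessed region serves the I/O.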

Before I wrote this post I was in contact with an EMC Technology Consultant (TC) and an EMC vSpecialist to get my facts right. Both of them confirmed that the LUNs in a VNXe pool are not striped across RGs, and the pool structure was explained to me by the EMC TC. However, looking at the test results that I posted in part 6, and at the feedback that I got, the description above is not accurate. Here is a quote from a comment by Brian Castelli (EMC employee):

 “When provisioning storage elements, such as an iSCSI LUN, the VNXe will always stripe across as many RAID Groups as it can–up to a maximum of four.”

Based on Brian’s comment, LUNs in a VNXe pool are striped across multiple RGs. [Edited 9/15/2011]

Creating Storage Pools

Storage pools are configured and managed from System – Storage Pools. If no pools have been configured, only the Unconfigured Disk Pool is shown.

Selecting Configure Disks starts the disk configuration wizard, and there are three options to choose from: Automatically configure pools, Manually create a new pool, and Manually add disks to an existing pool. It’s quite easy to understand what each option stands for. I chose the Automatically configure pools option; when it is used, 6+1-disk RAID 5 groups are created for the pool.

The next step is to select how many disks are added to the new pool, and you can see that the options are multiples of seven (6+1 RAID 5).

A hot spare pool will also be created when using the automatic pool configuration option.

When selecting Manually create a new pool, there is a list of alternatives (see picture below) based on the desired purpose of the pool. This makes creating a storage pool easy, because the VNXe suggests the RAID level based on the user’s selection. Further down in the wizard there is also an option for the user to select the number of disks used and the RAID level (Balanced Perf/Cap R5 or High Performance R1/0).


It feels a little disappointing to find out that the pool structure wasn’t what I was expecting it to be. But maybe my expectations were also too high in the first place.

Creating a storage pool is in line with one of EMC’s adjectives for the VNXe: simple. When the automatic configuration option is selected, Unisphere takes care of deciding which disks are used in the pool and how many hot spares are needed, based on EMC’s best practices.

The next part will cover storage provisioning from VNXe and also using EMC’s VSI plug-in for vCenter.

Hands-on with VNXe 3300 Part 3: Software update

Software update, Operating Environment update, firmware update: those are the most commonly used names for the same thing. On supportbeta.emc.com it is called a VNXe Operating Environment update; in Unisphere you can find “Update software”, and clicking that shows the current system software version. I’ll be talking about a software update.

I was planning to post this as the last part of my hands-on series. But while I was writing the series I tested the latest software update (2.1.0) on another VNXe and found out that it contained fixes that would change some of the posts a lot. So I decided to go through the software update at this point and write the remaining parts based on the 2.1.0 software version.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up


The first thing is to download the latest software update from support.emc.com. While downloading the file I opened a chat with EMC technical support just to make sure I was good to go with the update in a production environment. The answer was no. Because our systems had been up and running for more than 42 days, we would have to power down the whole VNXe before we could proceed with the update. Yes, power down a dual-SP storage system. Seriously? For a while I didn’t know what to think. I wondered whether it was because we were running 2.0.3 or because of the 42+ days of uptime, and the technician confirmed that it was the uptime. This is the information that I got last week; it could be different this week, or with another software version, so check with support when planning an update.

So we had two VNXe 3300s in production with over a hundred VMs running on them, and we would have to take all of those VMs down. We also had one VNXe that was waiting to be configured. I decided to do the update on the new VNXe 3300 first and see whether it was worth taking the VMs down to update the software. Because the new VNXe didn’t have any hosts connected to it yet, the update was pretty straightforward. Software update 2.1.0 contained big enough fixes and enhancements to Unisphere that I decided to proceed with updating the software; I’ll go through the reasons that led to this decision later.

Powering down

Before moving to the software update procedure itself, the VNXe had to be powered off. The technician gave me a 30-step instruction on how to do this, so here is the short version:

  • Stop all I/O to VNXe
  • Place both SPs in Service Mode
  • Disconnect power from DPE
  • Disconnect power from DAE
  • Reconnect power on DAE
  • Reconnect power on DPE
  • Reboot each SP to return to Normal Mode.
Placing the SPs in Service Mode is done from Settings – Service System; the service account password is needed to access this area. The non-primary SP has to be set to Service Mode first. After that SP has rebooted and its state changes from unknown to Service Mode, the primary SP can also be set to Service Mode.

After the VNXe is powered back on, Unisphere only allows login with the service account. Now SP A has to be rebooted first, which returns it to Normal Mode, and after this SP B can be rebooted. This whole process takes about an hour, and then the VNXe is ready for the software update.

Updating software

Now that the update package has been downloaded from supportbeta.emc.com, it has to be uploaded to the VNXe. All the update steps can be done from Settings – More configurations – Update Software.

When the upload is ready, the Candidate Version changes from Not Available to 2.1.0. Before it can be installed, a Health Check needs to be run. If the Health Check doesn’t report any errors, the software update can be installed by clicking Install Candidate Version.

Issues with the update

I did the update on four VNXes, and two of them had issues with it. The VNXe that we had to power down before the update wasn’t that willing to boot its SPs into Service Mode. I followed the instructions that I got from technical support, and after stopping the I/O I placed the non-primary SP (A) in Service Mode. SP A didn’t come back up after 20 minutes, nor even after an hour. At that point I started suspecting that something was wrong. From Unisphere everything seemed normal: in Service System, SP B seemed to be primary and in Normal Mode, and SP A showed an unknown state, which is normal during a reboot. I then found out that the management IP was not responding at all, so Unisphere must have been showing pages from the browser cache. There was a small indicator in the lower left corner of the window about Unisphere not working properly, and I also got the error “Storage System Unavailable” when trying to execute any commands on the VNXe:

The way the lights were blinking on SP A indicated that it was in Service Mode. After unplugging the SP B management port, the management IP started responding and I was able to log in to Unisphere. Now Unisphere showed that SP A was primary and in Service Mode, while SP B was in Normal Mode. I then placed SP B in Service Mode and proceeded with powering off the VNXe. It seemed that the SPs were in some kind of conflicting state. The software update itself went through without problems.

Another issue that I faced was during the software upgrade of a VNXe loaded with NL-SAS disks. The update was halfway done when Unisphere gave an error that the upgrade had failed:

After restarting the upgrade it went through without any problems.

However, I noticed that the PSL (the VNXe OS runs from an internal SSD but uses an area called PSL – Persistent Storage Layout – located on the first four disks of the DPE) was now located on the 1st, 2nd, 3rd and 5th disks of the DPE. My guess is that the failed software update caused this. I hope it is not going to cause problems later.

Before the update:

After the update:

Fixes and new features

The first noticeable difference is that the Unisphere version was updated from 1.5.3 to 1.6.0.

One issue that I criticised in the first part was that none of the support functions were working on version 2.0.3. Those are now fixed in version 2.1.0. For example, clicking How To Videos under Support opens the browser at supportbeta.emc.com and asks for a Powerlink username and password. My impression was that all the support functions were supposed to be integrated into Unisphere; now it is just a link to the supportbeta site. But hey, at least it works now. If only it could use the stored EMC Support Credentials.

A bigger fix was made to the block size that is used when creating a VMware datastore. In 2.0.3, when a VMware datastore was created and automatically provisioned to an ESX server, the VMFS partition was created with a 1MB block size. This meant that the maximum file size on that VMFS was 256GB. This has been changed in version 2.1.0, and an 8MB block size is now used when a new VMware datastore is created.
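The impact of this fix is easy to quantify, because on VMFS-3 the maximum file size scales linearly with the block size chosen at datastore creation:

```python
def vmfs3_max_file_gb(block_size_mb):
    """VMFS-3 maximum file size grows linearly with the block size:
    1MB -> 256GB, 2MB -> 512GB, 4MB -> 1TB, 8MB -> 2TB."""
    return 256 * block_size_mb

for bs in (1, 2, 4, 8):
    print(f"{bs}MB block size -> max file size {vmfs3_max_file_gb(bs)}GB")
```

So with the old 1MB default, a single VMDK could not grow past 256GB; with the 8MB block size the limit is 2TB, the VMFS-3 maximum.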

A nice new feature in system performance monitoring is that Network Activity and Volume Activity can now be monitored as well; it used to show only CPU Activity. The timeframe can be viewed on a 48h/24h/1h scale, and in addition CPU Activity can be viewed over the last 5 minutes.


Overall, the update procedure is easy and takes about an hour if it’s done on a newly installed VNXe. If the system has been up and running for over 42 days, then scheduled downtime is necessary to complete the update. I would schedule at least four hours of downtime; powering down and updating can be done in two hours if everything goes without problems, but it took me about 3.5 hours to complete the update on one of the VNXes. This is something that should be fixed soon. Isn’t the point of a dual-SP storage system the ease of serviceability, where one SP can be updated or replaced while the other takes care of the I/O? Currently that is not the case.

The VNXe is still a fairly new product, and I would think that it’s still heavily developed and software updates will be released quite often, both to fix issues and to add new features. So it’s good to keep an eye on the supportbeta.emc.com downloads section for new releases. It is also important to contact EMC technical support before installing new updates on the VNXe, just to check that it’s ok to update or whether there are some extra steps that need to be done. Of course, I also recommend reading the release notes first.

In the fourth part I will look into storage pools and storage provisioning.

Hands-on with VNXe 3300 Part 2: iSCSI and NFS servers

This is the second part in the series about my hands-on experience with the EMC VNXe 3300. In the first part I described the initial setup of the VNXe and also the challenges that I had during the setup. Before ESXi servers and virtual machines can access the storage, there are still a couple of things that need to be done. In this post I will go through setting up network port aggregation, an iSCSI server and an NFS server, and also how to connect ESXi hosts to the VNXe.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

NIC Aggregation

This VNXe is furnished with eight (four per SP) 1GbE NICs and is connected to ESXi hosts that are using 10GbE NICs for iSCSI. So all four NICs in each SP will be aggregated for maximum throughput. From each SP these four aggregated ports are connected to separate switches where a trunk is configured, and these switches are connected to the ESXi hosts with 10Gb uplinks.

NIC aggregation can be configured from Settings – More configurations – Advanced Configuration. The default MTU size is 1500, so if jumbo frames are enabled it needs to be changed to 9000. This needs to be done only for the first port, because the other ports are aggregated to it.

Port aggregation is enabled by selecting “Aggregate with eth2” under eth3 and hitting “Apply Changes”. After the same settings are applied to eth4 and eth5, the aggregation is ready.

These settings need to be done only once and Unisphere will automatically configure both SPs using the same settings. This also means that SPs can’t have different aggregation settings.

iSCSI Server configuration

The iSCSI server can be configured from Settings – iSCSI Server Settings. When creating the first iSCSI server the default storage processor is SP A; when SP A already has an iSCSI server, SP B is automatically selected as the storage processor for the second server. The storage processor, Ethernet port and VLAN ID can also be changed under the advanced settings.

NFS Server configuration

The NFS server can be configured from Settings – Shared Folder Server Settings.
This is very similar to configuring an iSCSI server; there is only one additional step.

On the “Shared Folder Types” page, NFS and/or CIFS is selected. I was planning to do some testing only with NFS, so I chose that.

Connecting ESXi hosts to VNXe

This VNXe will be connected to an existing vCenter/ESXi environment, so all the iSCSI settings are already in place on the hosts. The VNXe automatically discovers all the ESX/ESXi hosts from vCenter; only the vCenter name or IP address and the appropriate credentials are needed. This makes things a lot easier: no need to manually register tens or hundreds of hosts. VMware hosts can be added to the VNXe from Hosts – VMware.

When the discovery is done, all ESX hosts are shown under Virtualization Hosts. The total number of datastores is also shown, even if those datastores are not on this particular VNXe.

Hosts can be expanded in the view to show all virtual machines on that ESX host, along with some interesting details: OS type, state and associated datastore. The associated datastore is the name of the datastore as it is shown on the ESX server. The VNXe pulls all this data from vCenter using the credentials provided earlier.

Even more details of an individual virtual machine can be viewed by selecting the VM and clicking Details.


Unisphere is really made to be easy and simple to use. Everything can be easily found from the menus and sub-menus. The icons are big, but they are not just icons; there is also a subject line and an explanation. If “Shared Folder Server Settings” were the only info given, its real meaning might not be clear to everyone, but with the explanation it is very understandable:

I have one small criticism about the first page of the iSCSI and NFS server settings: I’m wondering why the advanced settings are hidden under the “Show advanced” link. The window is already big enough to have those settings shown by default. Ok, the page looks cleaner without them, but it would really be more user-friendly if they were not hidden. The first time I went through the settings, I noticed the VLAN setting on the summary page but couldn’t remember seeing where to actually set it; I had to go back to the first page before I discovered the “Show advanced” link.

In the next part I will go through the software update procedure and look into the issues that have been fixed.

Hands-on with VNXe 3300 Part 1: Initial setup

A couple of months ago I had a chance to play around with a VNXe 3300 loaded with 30 x 300GB SAS drives before it was put into production. I was curious to see how the VNXe 3300 would perform compared to a CX4-240. I had already done some performance testing on the CX4-240, so I had half of the data collected. Before I could start testing I had to do the initial setup and configuration, and during the configuration I encountered some issues.

I will be posting the whole experience in seven parts [edited 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Initial setup

EMC has a really good video on YouTube about the initial setup and configuration, and I also recommend reading the VNXe 3300 installation guide. I’m not going to go through the installation steps, but basically there are two ways to set up the VNXe: auto discovery or manual configuration. I wasn’t physically at the site where the VNXe was and couldn’t get the auto discovery working remotely, so I used the manual configuration method, and one of my colleagues inserted the USB drive containing the configuration into the VNXe and powered it on. After the VNXe had completed the network configuration, I was able to connect to Unisphere using a web browser.

After the first login, the ‘Unisphere Configuration Wizard’ will show up and help you go through the configuration steps:

  • License Agreement
  • Unisphere Passwords
  • DNS
  • Time Server
  • Disk Configuration
  • EMC Product Support Option
  • EMC Support Credentials
  • Storage Server Options
  • Shared Folder Server
  • iSCSI Server
  • Unisphere Licenses

It takes about 10 minutes to complete the initial setup if all the necessary DNS and time server names and IP addresses are ready before starting, and then the VNXe is basically ready to be used. The only things left to do are to connect the hosts and provision some storage for them. I wanted to play around with the configuration, so I skipped the disk configurations and storage server options. When you exit the configuration wizard a “Unisphere Licenses” prompt appears. There are two options to obtain the licenses: from a file or from Powerlink. I chose the Powerlink option, and this is when I encountered the first problem:

So I wasn’t able to obtain the license file from Powerlink. I didn’t think it was a big deal, maybe a configuration mishap during the setup, so I used EMC’s Support portal (http://supportbeta.emc.com/) to get the license file and uploaded it to the VNXe. Now I was ready to start playing around with the VNXe.


After using Navisphere for several years, Unisphere is a huge improvement in UI. I was already familiar with Unisphere after using it on the CLARiiON CX4 and Celerra NX3e.

The Unisphere dashboard view gives an overall view of used and free space and also system alerts. Unisphere is really easy to navigate, simple to use and fast (even over a slower link). EMC has posted really good videos on YouTube about Unisphere, so I’m not going to go into the details.

EMC Online Support

This is one of the big new features that EMC introduced with the VNXe. From Unisphere the user is able to watch how-to videos, access online training and the community, open support requests, start a live chat with technical support, download software updates and more. Of course I wanted to try out these nice new features, but when I clicked anything on the support page I got the same “Unable to connect to EMC Online Support” error that I got when trying to obtain the licenses. So maybe there was some kind of mishap during the initial setup? I checked the EMC Support Credentials and those were ok. I also checked the network configuration on the VNXe, and I was able to connect to the internet from the PC that I was using Unisphere from. Everything seemed to be ok, but I still got the same error. I decided to open a chat session with technical support from the EMC Support portal (http://supportbeta.emc.com/). I explained my issue to the technician and got an answer: he told me that it’s a known issue reported by most customers, and that they are looking into it and expecting to fix it in a future release.

Ok, so I can’t use the support functions on the VNXe for now. Not a big showstopper: I can still open my browser and access all the documentation, support contracts, documents and software upgrades via the EMC Support portal (http://supportbeta.emc.com/).

Software Update

Browsing the EMC Support portal I found out that there was a new software update (2.0.3 at the time) available for the VNXe. So I downloaded it to my local PC, uploaded it to the VNXe (Settings – More configurations – Update Software), performed a health check and installed the new software update. Again, very easy, and the whole process took about an hour.


In this post I only covered the steps to get the VNXe up and running. At this point the VNXe 3300 would get an excellent grade judging by the steps that I’ve gone through so far. I can’t disagree with EMC using “simple” as one of the adjectives to describe the VNXe, and I would add “easy” to that list. The VNXe is easy and fast to implement, simple to navigate and also very easy to get familiar with.

Even though the built-in VNXe support functions didn’t work, it’s not like it’s missing some major functionality; all the same support functions can be found on the supportbeta.emc.com site. Also, the experience that I had using the online chat was very pleasant: the technician knew the product and promptly responded with an answer. I was really hoping that the issue would be fixed in the next software version. Well, now I know it was, and I will cover that in part four.

In the next post (part 2) I will go through the iSCSI and NFS server configurations and how to connect hosts to VNXe.
