Tag Archives: Unisphere

Hands-on with VNXe 3200: Initial setup


Here I go again. About two years ago I started writing a series of blog posts about my hands-on experience with the newly released VNXe 3300. At that time there wasn’t much documentation out there, so I did lots of testing and had to come up with my own “best practices”. That blog series is one of the reasons why I now have a chance to test and write about the VNXe 3200.

On June 10 Chad published this blog post: “Summer Gift Part 2 – 10 VNXe arrays free to play with for volunteers!”. I was surprised to find myself mentioned in the post and even more surprised to find out that one of the devices was reserved for me. So while I was on vacation last week the test unit arrived:

  • VNXe 3200 – 2U Form Factor/12 Drive DPE
    • 3.5” Drives
    • 6 x 600GB 15K Pack
    • 3 x 100GB eMLC Flash Drives (for FAST auto-tiering)
    • 2 x 100GB eMLC Flash Drives (for FAST Cache)
    • 9TB Raw Capacity

VNXe 3200 is basically a combination of the new VNX MCx multicore technology and the old VNXe OE/Unisphere. I won’t go through all the new features but there are a few worth mentioning:

  • Multi-core RAID (MCR), Multi-core Cache (MCC), Multi-core Flash (MCF)
  • “Active/Active” file
  • Single container for block and file
  • Linux-based platform

More details about the new features can be found on the EMC VNXe Series website.

Unboxing

Well, not much to write about here: install the rack rails, lay the DPE on them, connect the cables, and the VNXe was ready for configuration. However, this was something I hadn’t seen before with the previous VNXe:

power

A quick look at the installation documentation revealed it to be a power adapter for the front LED lights:

Initial setup 

There have been several new versions of the VNXe OE since my first VNXe blog post, but the “Unisphere Configuration Wizard” still looks similar. Going through the wizard takes about 10 minutes, but I skipped most of the configuration as usual. I prefer to upgrade the VNXe to the latest software version before doing any configuration on a new device. After the configuration wizard is completed you get a popup directing you to the EMC support website, where the latest version can be downloaded.

softwareupgrade

Conclusions

Even though there is totally new hardware running under the hood, Unisphere still looks and feels the same as the latest software version on the VNXe 3100/3150/3300. I still agree that VNXe is simple to install and configure. Of course I haven’t configured any storage pools or iSCSI servers yet; I’ll cover those in the next posts. Performance and some of the new features will also be reviewed later.

Software version 2.0.3:

unisphere

Software version 2.4.2:

242_1

Software version 3.0.1:

301_1


EMC World 2012: Wrap-up


A week after EMC World my brain is still digesting all the information from the conference. I started going through my notes from last week and wondered how I could squeeze everything into one post, so I decided to do separate posts about the Hands-On Labs and Social Networking. In this post I will go through some of the new product announcements and also other interesting things that I witnessed during EMC World.

A couple of interesting facts that Joe Tucci mentioned in his keynote:

  • The E and M in EMC come from the founders’ names: Richard Egan and Roger Marino.
  • The first “EMC World” was held in 2001 and was called “EMC wizards”, with about 1,300 customers attending.

Announcements

Pat Gelsinger’s keynote was all about new product announcements and demos. Chad Sakac did a great job with the demos and also managed to scare everyone with a big explosion in his second demo. There were actually 42 new products and technologies announced, and one of those was very interesting: the VNXe 3150. I already did a quick post about the VNXe 3150 highlights during EMC World.

Another really interesting announcement was the VNX software upgrade that goes by the name “Inyo” at this point and will be available in the second half of 2012. It brings several new enhancements and features to VNX, two of which I’ve been waiting for since FAST pools were initially introduced: mixed pools and automatic pool rebalancing. A very welcome addition is also the Storage Analytics package. Chad Sakac and Sean Thulin have written great posts covering “Inyo” and its new features.

Session highlights

One interesting session that I attended was titled “VNX & VNXe: Unisphere future visions and directions”. The main topic was the future single Unisphere for VNX and VNXe, combining simplicity and flexibility. This will bring the VNXe’s simplicity and application-centric storage management to the VNX, but will not take away the flexibility of VNX and the ability to manually create datastores and LUNs. There will also be some improvements to serviceability: simplified self-service and problem notifications. Downloading updates and scheduling them using Unisphere was one of the major improvements mentioned regarding serviceability.

In the future both VNX and VNXe will be manageable using Unisphere Remote, which will also offer performance monitoring, history and analytics. Last but not least, a mobile app (monitoring first) and a unified CLI are also on the way.

Chad’s World

Once again Chad and Wade filled the room with their entertaining “Chad’s World Live II – The Comeback Tour” show. And of course they had something face-melting to announce: Project Razor.

If you are wondering who the gorilla hugging Chad is, check out the Cloud Freaky 2012 video:


Hands-on with VNXe 3100


On the Friday before Christmas we ordered a VNXe 3100 with 21 x 600GB SAS drives, and it was delivered in less than two weeks. Exactly two weeks after the order was placed we had the first virtual machine running on it.

Last year I wrote a seven-post series about my hands-on experience with the VNXe 3300. This is the first VNXe 3100 that I’ve worked with and also my first VNXe unboxing. With the 3300s I relied on my colleagues to do all the physical installation because I was 5,000 miles away from the datacenter. With this one I did everything by myself, from unboxing to installing the first VM.

My previous posts are still valid, so in this post I’ll concentrate on the differences between the 3300 and the 3100. Will Huber has really good posts on unboxing and configuring the VNXe 3100. During the installation I also found a couple of problems with the latest OE (2.1.3.16008). I will describe one of the issues in this post and cover the bigger one in a separate post.

I will also do a follow-up post about the performance differences between the 3300 and the 3100 when I get all the tests done. I’m also planning to do some tests with the Fusion-io ioTurbine card and will post those results once I have the card and the tests are done.

Initial setup

The VNXe and the additional DAE came in two pretty heavy boxes. Which box to open first? Well, the box that you need to open first tells you that:

So like a kid on Christmas day I opened the boxes, and the first thing I saw was a big poster explaining the installation procedure. The rack rails are quick and easy to install. The arrays are quite heavy, but I managed to lift them onto the rack by myself.

After doing all the cabling it was time to power on the VNXe. Before doing this you need to decide how you are going to do the initial configuration (assigning the management IP). In my previous post I mentioned that there are two options for doing it using the VNXe Connection Utility: auto discovery or manual configuration. With the manual configuration the VNXe Connection Utility basically creates a text file on a USB stick that is inserted into the VNXe before the first boot. A faster way is to skip the download and installation of the 57MB package and create the file manually on a USB stick. So get a USB stick, create an IW_CONF.txt file on it and add the following content, replacing the bracketed variables with your own values:

TYPE=CONFIGURE
PROTO_VERSION=1
FRIENDLYNAME=[VNXENAME]
MGMTADDRESSA1=[IP ADDRESS]
MGMTMASK1=[NETMASK]
GATEWAY=[GATEWAY]
After that just insert the USB stick into the VNXe and power it on. The whole process of unboxing, cabling and powering on took me about an hour and a half.
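If you want to script that file creation, here is a minimal Python sketch of the idea (my own helper, not an EMC tool; the mount path and values are placeholders):

    # write_iw_conf.py - write the VNXe first-boot config file to a mounted USB stick.
    # This is just a convenience sketch; the drive letter and values below are placeholders.
    from pathlib import Path

    def write_iw_conf(usb_mount, name, mgmt_ip, netmask, gateway):
        lines = [
            "TYPE=CONFIGURE",
            "PROTO_VERSION=1",
            f"FRIENDLYNAME={name}",
            f"MGMTADDRESSA1={mgmt_ip}",
            f"MGMTMASK1={netmask}",
            f"GATEWAY={gateway}",
        ]
        # The VNXe looks for IW_CONF.txt in the root of the USB stick on first boot.
        Path(usb_mount, "IW_CONF.txt").write_text("\n".join(lines) + "\n")

    write_iw_conf("E:/", "vnxe-lab01", "192.168.1.50", "255.255.255.0", "192.168.1.1")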

While the VNXe was starting up I downloaded the latest Operating Environment (2.1.3.16008) so that I was ready to run the upgrade once the system was up and running. After the first login the “Unisphere Configuration Wizard” shows up and you need to go through several steps. I skipped some of those (creating an iSCSI server, creating a storage pool, licensing) and started the upgrade process (see my previous post).

After the upgrade was done I logged back in and saw the prompt about the license. I clicked the “obtain license” button, a new browser window opened, and following the instructions I got the license file. I’ve heard complaints about IE and the licensing page not working; the issue might be the browser’s popup blocker. The license page also states that the popup blocker should be disabled.

After this, a quick LACP trunk configuration on the switch and the iSCSI configuration on the ESXi side, and I was ready to provision some storage and do some testing.

Issues that I found

During testing I found an issue with the MTU settings when using iSCSI: a problem that causes datastores to be disconnected from ESXi. Even after reverting to the original MTU settings the datastores can’t be reconnected to ESXi and new datastores can’t be created. I will describe this in a separate post.

The other issue that I found was more cosmetic. When there are two iSCSI servers on the same SP and the first VMware storage is provisioned on the second iSCSI server, Unisphere gives the following error:

For some reason VNXe can’t initiate the VMFS datastore creation process on ESXi. But the LUN is still visible on ESXi and the VMFS datastore can be created manually. So it’s not a big issue, but it’s still annoying.

Conclusions

There seem to be small improvements in the latest Operating Environment (2.1.3.16008). Provisioning storage to an ESX server feels a bit faster, and the VMFS datastore is now created every time as long as there is only one iSCSI server per SP. In the previous OE the VMFS datastore was created only about 50% of the time when storage was provisioned to ESXi.

In previous posts I have mentioned how easy and simple VNXe is to configure, and the 3100 is no different from the 3300 in that respect. Overall the VNXe 3100 seems to be a really good product considering its fairly low price. A quick look at the performance tests shows quite similar results to the ones that I got from the 3300. I will do a separate post about the performance comparison of these two VNXes.

It is good, though, to keep in mind the difference between marketing material and reality. The VNXe 3300 is advertised as having 12GB of memory per SP, but in reality it has only 256MB of read cache and 1GB of write cache. The 3100 is advertised as having 8GB of memory, but it has only 128MB of read cache and 768MB of write cache.


Hands-on with VNXe 3300 Part 5: Storage provisioning


When provisioning storage on VNXe there are many options depending on the host type: Microsoft Exchange, Shared folders, Generic iSCSI, VMware or Hyper-V. Because this VNXe is only going to serve VMware ESXi hosts, I’m going to concentrate on that part. I will go through provisioning storage from VNXe and also using the VSI Unified Storage Management plug-in.

Hands-on with VNXe 3300 series [list updated 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

iSCSI datastore

In the last part a storage pool was created. The next step is to create datastores on it and provision those to the hosts. There are two options when provisioning datastores for VMware from VNXe: VMware datastore or generic iSCSI. When using VMware storage, VNXe will automatically configure the iSCSI targets on the selected ESX hosts and also create the VMFS datastore. If generic iSCSI is used, all those steps have to be done manually on each ESX host. Of these two options I really recommend using VMware storage. VMware storage can be created from Storage – VMware storage.

At this point only an iSCSI server was configured, so the only option was to create a VMFS datastore.

The next step is to select the iSCSI server and the size for the datastore. When creating the first datastore it really doesn’t matter which iSCSI server is selected. If the iSCSI_A server (on SP A) is selected for the first datastore, then iSCSI_B (on SP B) should be selected for the second datastore to balance the load on the SPs. Selecting the iSCSI server on SP A means that all of that datastore’s I/O will go through SP A. If all the datastores are placed on one SP, the whole VNXe’s performance could be impacted because all I/O goes through that SP while the other SP sits idle. So it is important to balance the datastores between the SPs. VNXe does not do this automatically, so the user has to do it manually when creating datastores. If datastores are distributed between the SPs and one SP fails, all the datastores on the failed SP’s iSCSI server are moved to the other SP. When the failed SP comes back online, all the datastores originally located on it are moved back.
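Since the VNXe doesn’t balance this automatically, I simply alternate the iSCSI server for every new datastore. A trivial Python sketch of that round-robin idea (the server and datastore names are just examples):

    # Alternate new datastores between the two iSCSI servers (one per SP)
    # so that I/O load is spread over both storage processors.
    ISCSI_SERVERS = ["iSCSI_A", "iSCSI_B"]  # iSCSI_A lives on SP A, iSCSI_B on SP B

    def pick_iscsi_server(existing_datastore_count):
        # even count -> SP A, odd count -> SP B
        return ISCSI_SERVERS[existing_datastore_count % 2]

    for i, name in enumerate(["datastore01", "datastore02", "datastore03", "datastore04"]):
        print(name, "->", pick_iscsi_server(i))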

There is an option to configure protection for the storage. Because this datastore is only for testing I chose not to configure the protection.

Step 5 is to select the hosts that the datastore will be attached to. The datastore can be connected to a specific iSCSI initiator on an ESX server by expanding the host and selecting Datastore access for the specific IQN. If Datastore access is selected at the host level, the VNXe targets are added to all iSCSI initiators on that ESX host.

After these steps are completed, VNXe starts creating the datastore, adds the iSCSI targets to the selected ESX hosts and their iSCSI initiators, and finally creates and mounts the VMFS datastore.

iSCSI datastore issues

After creating the first datastore I noticed in vCenter that “Add Internet SCSI send targets” and “Rescan all HBAs” tasks were showing on the hosts that I had selected to add the datastore to. After watching those tasks loop for 15 minutes with the datastore not showing up on the ESXi servers, I figured there was something wrong with the configuration.

It turned out that the ESXi server also had datastores connected from other storage units that use CHAP authentication. On the ESXi iSCSI initiator the CHAP settings were set to “Inherit from parent”, which meant all the new targets would also inherit these CHAP settings. After disabling the inheritance the new datastore was connected and the VMFS datastore was created on it. I haven’t tried using CHAP authentication with VNXe datastores, so I don’t know if those settings are automatically configured on ESX. VNXe already has the ability to manipulate the ESX server configuration, so I would imagine it could also change the iSCSI target option to “Do not use CHAP” when a VMware datastore is created on a VNXe without CHAP authentication. Maybe in the next software version?

Another issue I had was that a VMFS datastore was not always created during the process of creating a VMware datastore. The VMware Storage Wizard on VNXe indicated that creating the VMware datastore was complete, but the “create VMFS datastore” task was never initiated on the ESX host. I’ve created over 20 datastores using this method and I would say that in about 50% of those cases the VMFS datastore was also created. Not a big thing, but still an annoying little glitch.

NFS datastore

Creating an NFS datastore is very similar to creating an iSCSI datastore. The only differences are steps 2 and 5 of the VMware Storage Wizard. On step 2 NFS is selected instead of VMFS (iSCSI). On this page there are two “hidden” advanced options: Deduplication and Caching. These options are hidden under the “Show advanced” link – similar to what I criticized in the iSCSI server configuration. In my opinion these should be shown by default.

On step 3 the same rule applies to selecting the NFS server as with the iSCSI server. The user has to manually balance the datastores between the SPs.

On step 5 datastore access (no access, read-only or read/write) is chosen for the host or for a specific IP address on the host.

When all the steps of the wizard are done, VNXe creates the datastore and mounts it to the selected host or hosts. I only gave one host access to this new NFS datastore, but I could see that VNXe tried to do something NAS-related on all the hosts in the vCenter connected to VNXe and gave some errors:

VSI

In a nutshell, VSI Unified Storage Management (USM) is a plug-in that integrates with VMware vCenter and can be used to provision storage through the vCenter UI. There is lots of good documentation on the EMC Unified Storage Community and Powerlink, so I’m not going to dig any deeper into it. VSI USM can be downloaded from Powerlink – Support – Software Downloads and Licensing – Downloads T-Z – Virtual Storage Integrator (VSI). I recommend reading the VSI USM Read Me First document to see what else needs to be installed to make the VSI plug-in work.

After the VSI USM plug-in and all the other needed packages have been installed the VNXe has to be connected to vCenter. This is done from vCenter Home – Solutions and Applications – EMC – Unified Storage Management – Add. A wizard will walk through the steps needed to connect vCenter and VNXe.

Now storage can be provisioned to a cluster or to an individual ESX host by right-clicking the cluster/host and selecting EMC – Unified Storage – Provision Storage.

The wizard follows the same steps as the VMware Storage Wizard when provisioning storage from VNXe.

Storage type:

Storage Array:

Storage Pool:

iSCSI Server:

Storage Details:

When using VSI to provision storage, the iSCSI initiators and targets are configured automatically and the VMFS datastore is also created in the process.

Conclusions

Again the suitable word to describe storage provisioning would be simple, if it worked every time. After provisioning several datastores I noticed that a VMFS datastore wasn’t always created when iSCSI storage was provisioned from VNXe. There were also issues if CHAP wasn’t used on the VNXe but was used on the ESX host for other datastores. This happens whether VNXe or VSI storage provisioning is used.

Storage provisioning from VNXe is easy, but it is even easier using VSI. When the initial setup is done, the iSCSI/NFS server configured and the storage pool(s) created, there is no need to log in to VNXe anymore to provision storage if VSI is in use. This of course requires vCenter and all the necessary plug-ins to be installed.

Some users might never see the issues that I found, but for others these might be show stoppers. Not all businesses have vCenter in use, so they have to use the Unisphere UI to provision storage, and then the VMFS datastore might or might not be created. I can imagine how frustrated users can be when facing these kinds of issues.

Also, users shouldn’t be responsible for deciding which SP a new datastore is placed on. This should be something that VNXe decides.

Don’t get me wrong: the integration between VNXe and vCenter/ESX is smooth, and it will be even better after these issues have been fixed.

In the next part of my hands-on series I will look into the performance of VNXe 3300 and I will also post some test statistics.


Hands-on with VNXe 3300 Part 4: Storage pools


When EMC announced that VNXe would also utilize storage pools, my first thought was that it would be similar to what the CX/VNX has: a storage pool would consist of five-disk RAID 5 groups and LUNs would be striped across all of these RAID groups to utilize all spindles. After some discussions with EMC experts I found out that this is not how the pool works in VNXe. In this part I will go a bit deeper into the pool structure and also explain how a Storage Pool is created.

Hands-on with VNXe 3300 series [list edited 9/23/2011]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Pool Structure

The VNXe 3300 can be furnished with SAS, NL-SAS or Flash drives. The one that I was configuring had 30 SAS disks, so there were two options when creating Storage Pools: 6+1 drive RAID 5 groups or 3+3 RAID 1/0 groups. I chose to create one big pool with 28 disks (four 6+1 RAID 5 groups) and one hot spare disk (EMC recommends having one hot spare disk for every 30 SAS disks). EMC also recommends not putting any I/O-intensive load on the first four disks, because the PSL (Persistent Storage Layout) is located on those disks. I wanted to test the storage pool performance with all the disks that were available, so I ignored this recommendation and used the first four disks in the pool too.
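As a rough back-of-the-envelope check of what that layout gives in usable capacity (assuming 600GB SAS drives, and ignoring formatted capacity and the PSL overhead), here is a quick sketch:

    # Rough usable capacity for the pool described above: four 6+1 RAID 5 groups.
    # Assumes 600GB SAS drives; formatted capacity and PSL overhead are ignored.
    drive_gb = 600
    raid_groups = 4           # four 6+1 R5 groups = 28 disks
    data_disks_per_group = 6  # one disk's worth of capacity per group goes to parity

    usable_gb = raid_groups * data_disks_per_group * drive_gb
    print(f"Pool of {raid_groups * 7} disks -> ~{usable_gb / 1000:.1f} TB usable")
    # -> Pool of 28 disks -> ~14.4 TB usable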

When a LUN is created it will be placed on the RAID group (RG) that is the least utilized from a capacity point of view. If the LUN being created is larger than the free space on an individual RG, the LUN will be extended across multiple RGs, but there is no striping involved. So depending on the LUN size and pool utilization, a new LUN could reside either in one RG or in several RGs. This means that only one RG is used for sequential workloads, but a random workload could be spread over several RGs. Now if disks are added to the storage pool, those newly added RGs are the least utilized and will be used first when new LUNs are created. So a storage pool on VNXe can be considered more of a capacity pool than a performance pool.

Before I wrote this post I was in contact with an EMC Technology Consultant (TC) and an EMC vSpecialist to get my facts right. Both of them confirmed that the LUNs in a VNXe pool are not striped across RGs, and the pool structure above was explained to me by the EMC TC. However, looking at the test results that I posted in part 6 and at the feedback that I got, the description above is not accurate. Here is a quote from Brian Castelli’s (EMC employee) comment:

 “When provisioning storage elements, such as an iSCSI LUN, the VNXe will always stripe across as many RAID Groups as it can–up to a maximum of four.”

Based on Brian’s comment, LUNs in a VNXe pool are striped across multiple RGs. [Edited 9/15/2011]

Creating Storage Pools

Storage pools are configured and managed from System – Storage Pools. If no pools have been configured, only the Unconfigured Disk Pool is shown.

Selecting Configure Disks starts the disk configuration wizard, and there are three options to select from: Automatically configure pools, Manually create a new pool, and Manually add disks to an existing pool. It’s quite easy to understand what each option stands for. I chose the Automatically configure pools option. When using the automatic configuration option, 6+1 disk RAID 5 groups are used to create the pool.

The next step is to select how many disks are added to the new pool, and you can see that the options are multiples of seven (6+1 RAID 5).

A hot spare pool will also be created when using the automatic pool configuration option.

When selecting Manually create a new pool there is a list of alternatives (see picture below) based on the desired purpose of the pool. This makes creating a storage pool easy because VNXe suggests the RAID level based on the selection that the user makes. There is also an option further along in the wizard where the user can select the number of disks used and the RAID level (Balanced Perf/Cap R5 or High Performance R1/0).

Conclusions

It feels a little disappointing to find out that the pool structure wasn’t what I was expecting it to be. But maybe my expectations were also too high in the first place.

Creating a Storage Pool is in line with one of EMC’s definitions for VNXe: simple. When the Automatic configuration option is selected, Unisphere takes care of deciding which disks are used in the pool and how many hot spares are needed, based on EMC’s best practices.

The next part will cover storage provisioning from VNXe and also using EMC’s VSI plug-in for vCenter.


Hands-on with VNXe 3300 Part 3: Software update


Software update, Operating Environment update or firmware update: those are the most commonly used synonyms for the same thing. On supportbeta.emc.com it is called the VNXe Operating Environment update. In Unisphere you can find “Update software”, and by clicking that you will see the current system software version. I’ll be talking about a software update.

I was planning to post this as the last part of my hands-on series, but while I was writing the series I tested the latest software update (2.1.0) on another VNXe and found out there were fixes that would change some of the posts quite a bit. So I decided to go through the software update at this point and write the remaining parts based on the 2.1.0 software version.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Preparation

The first thing to do is download the latest software update from support.emc.com. While downloading the file I opened a chat with EMC technical support just to make sure I was good to go with the update in a production environment. The answer was no. Because our systems had been up and running for more than 42 days, we would have to power down the whole VNXe before we could proceed with the update. Yes, power down a dual-SP storage system. Seriously? For a while I didn’t know what to think. I was wondering whether it was because we were running 2.0.3 or because of the 42+ days of uptime, and the technician confirmed that it was the 42+ days. This is the information that I got last week; it could be different this week, or with another software version, so check with support when planning to do an update.

So we had two VNXe 3300s in production with over a hundred VMs running on them, and we had to take all of those VMs down. We also had one VNXe that was waiting to be configured. I decided to do the update on the new VNXe 3300 first and see if it was worth taking the VMs down and updating the software. Because the new VNXe didn’t have any hosts connected to it yet, this was pretty straightforward. Software update 2.1.0 contained big enough fixes and enhancements to Unisphere that I decided to proceed with updating the software. I’ll go through the reasons that led to this decision later.

Powering down

Before moving on to the software update procedure itself, the VNXe had to be powered off. The technician gave me 30-step instructions on how to do this, so here is a short version:

  • Stop all I/O to VNXe
  • Place both SPs in Service Mode
  • Disconnect power from DPE
  • Disconnect power from DAE
  • Reconnect power on DAE
  • Reconnect power on DPE
  • Reboot each SP to return to Normal Mode.

Placing the SPs in Service Mode is done from Settings – Service System. The service account password is needed to access this area. The non-primary SP has to be set to Service Mode first. After that SP has rebooted itself and its state changes from unknown to Service Mode, the primary SP can also be set to Service Mode.

After the VNXe is powered back on, Unisphere only allows login with the service account. Now SP A has to be rebooted first, which returns it to Normal Mode, and after this SP B can be rebooted. This whole process takes about an hour, and now the VNXe is ready for the software update.

Updating software

Now that the update package has been downloaded from supportbeta.emc.com, it has to be uploaded to the VNXe. All the update steps can be done from Settings – More configurations – Update Software.

When the upload is ready, Candidate Version changes from Not Available to 2.1.0. Before it can be installed a Health Check needs to be run. If the Health Check doesn’t report any errors, the software update can be installed by clicking Install Candidate Version.

Issues with the update

I did the update on four VNXes and two of them had some issues with the update. The VNXe that we had to power down before the update wasn’t that willing to boot its SPs into Service Mode. I followed the instructions that I got from technical support, and after stopping the I/O I placed the non-primary SP (A) in Service Mode. SP A hadn’t come back up after 20 minutes, not even after an hour. At that point I started suspecting that something was wrong. From Unisphere everything seemed to be normal: in Service System SP B appeared to be primary and in Normal Mode, and SP A showed an unknown state, which is normal during a reboot. I then found out that the management IP was not responding at all, so Unisphere must have been showing pages from the browser cache. There was a small indicator about Unisphere not working properly in the lower left corner of the window, and I also got the error “Storage System Unavailable” when trying to execute any commands on the VNXe:

The way the lights were blinking on SP A indicated that it was in Service Mode. After unplugging the SP B management port, the management IP started answering and I was able to log in to Unisphere. Now Unisphere was showing that SP A was primary and also in Service Mode, while SP B was in Normal Mode. I then placed SP B in Service Mode and proceeded with powering off the VNXe. It seemed that the SPs were in some kind of conflict. The software update itself went through without problems.

Another issue that I faced was during the software upgrade of a VNXe loaded with NL-SAS disks. The update was halfway done when Unisphere gave an error that the upgrade had failed:

After restarting the upgrade it went through without any problems.

However, I noticed that the PSL (the VNXe OS runs from an internal SSD but uses an area called the PSL – Persistent Storage Layout – located on the first four disks of the DPE) was now located on the 1st, 2nd, 3rd and 5th disks of the DPE. My guess is that the failed software update caused this. I hope it is not going to cause problems later.

Before the update:

After the update:

Fixes and new features

The first noticeable difference is that the Unisphere version was updated from 1.5.3 to 1.6.0.

One issue that I criticized in the first part was that none of the support functions were working in version 2.0.3. Those are now fixed in version 2.1.0. For example, clicking How To Videos under Support opens the browser at supportbeta.emc.com and asks for a Powerlink username and password. My impression was that all the support functions were supposed to be integrated into Unisphere; now it is just a link to the supportbeta site. But hey, at least it works now. If only it could use the stored EMC Support Credentials.

A bigger fix was done to the block size that is used when creating a VMware datastore. In 2.0.3, when a VMware datastore was created and automatically provisioned to the ESX server, the VMFS partition was created with a 1MB block size. This meant that the maximum file size on that VMFS was 256GB. This has been changed in version 2.1.0, and an 8MB block size is now used when a new VMware datastore is created.
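For reference, on VMFS-3 the block size chosen at creation time caps the maximum file (and therefore VMDK) size, which is why the jump from 1MB to 8MB blocks matters:

    # VMFS-3 block size determines the maximum file size on the datastore.
    vmfs3_max_file_size = {
        "1MB": "256GB",
        "2MB": "512GB",
        "4MB": "1TB",
        "8MB": "2TB",
    }
    for block, max_file in vmfs3_max_file_size.items():
        print(f"{block:>4} block size -> max file size {max_file}")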

A nice new feature in System performance monitoring is that Network Activity and Volume Activity can now also be monitored; it used to show only CPU Activity. The timeframe can be viewed on a 48h/24h/1h scale, and in addition CPU Activity can also be viewed over the last 5 minutes.

Conclusions

Overall the update procedure is easy and takes about an hour if it’s done on a newly installed VNXe. If the system has been up and running for over 42 days, then scheduled downtime is necessary to complete the update. I would schedule at least four hours of downtime; powering down and updating can be done in two hours if everything goes without problems, but it took me about 3.5 hours to complete the update on one of the VNXes. This is something that should be fixed soon. Isn’t the point of a dual-SP storage system the ease of serviceability: one SP can be updated or replaced while the other SP takes care of the I/O? Currently that is not the case.

VNXe is still a fairly new product and I would think that it’s still being heavily developed, so software updates will be released quite often to fix issues and add new features. It’s good to keep an eye on the supportbeta.emc.com downloads section for new releases. It is also important to contact EMC technical support before installing new updates on the VNXe, just to check that it’s OK to update or whether there are some extra steps that need to be done. Of course I also recommend reading the release notes first.

In the fourth part I will look into the storage pools and storage provisioning.


Hands-on with VNXe 3300 Part 2: iSCSI and NFS servers


This is the second part in the series about my hands-on experience with the EMC VNXe 3300. In the first part I described the initial setup of the VNXe and also the challenges that I had during the setup. Before ESXi servers and virtual machines can access the storage there are still a couple of things that need to be done. In this post I will go through setting up network port aggregation, an iSCSI server and an NFS server, and also how to connect ESXi hosts to the VNXe.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

NIC Aggregation

This VNXe is furnished with eight (four per SP) 1GbE NICs and is connected to ESXi hosts that use 10GbE NICs for iSCSI. So all four NICs in each SP will be aggregated for maximum throughput. From each SP these four aggregated ports are then connected to separate switches where a trunk is configured. These switches are connected to the ESXi hosts with 10Gb uplinks.
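To put the aggregation into perspective, here is the theoretical ceiling per SP. Keep in mind that LACP hashes a single iSCSI session onto one member link, so any single host path still won’t exceed 1Gb/s:

    # Theoretical aggregate bandwidth of four 1GbE ports per SP (protocol overhead ignored).
    links = 4
    gbit_per_link = 1

    total_gbit = links * gbit_per_link
    print(f"{links} x {gbit_per_link}GbE = {total_gbit} Gb/s = ~{total_gbit * 1000 // 8} MB/s per SP")
    # -> 4 x 1GbE = 4 Gb/s = ~500 MB/s per SP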

NIC aggregation can be configured from Settings – More configurations – Advanced Configuration. The default MTU size is 1500, so if jumbo frames are used it needs to be changed to 9000. This needs to be done only for the first port, because the other ports are aggregated to it.

Port aggregation is enabled by selecting “Aggregate with eth2” under eth3 and hitting “Apply Changes”. After the same settings are also applied to eth4 and eth5, the aggregation is ready.

These settings need to be done only once and Unisphere will automatically configure both SPs using the same settings. This also means that SPs can’t have different aggregation settings.

iSCSI Server configuration

An iSCSI server can be configured from Settings – iSCSI Server Settings. When creating the first iSCSI server the default storage processor will be SP A. When SP A already has an iSCSI server, SP B is automatically selected as the storage processor when creating the second server. The storage processor, Ethernet port and VLAN ID can also be changed under the Advanced settings.

NFS Server configuration

An NFS server can be configured from Settings – Shared Folder Server Settings. This is very similar to configuring an iSCSI server; there is only one additional step.

On the “Shared Folder Types” page NFS and/or CIFS is selected. I was only planning to do some testing with NFS, so I chose that.

Connecting ESXi hosts to VNXe

This VNXe will be connected to an existing vCenter/ESXi environment, so all the iSCSI settings are already in place on the hosts. VNXe will automatically discover all the ESX/ESXi hosts from vCenter; only the vCenter name or IP address and the appropriate credentials are needed. This makes things a lot easier: no need to manually register tens or hundreds of hosts. VMware hosts can be added to the VNXe from Hosts – VMware.

When the discovery is done, all ESX hosts are shown under Virtualization Hosts. The total number of datastores is also shown, even if those datastores are not from this particular VNXe.

Hosts can be expanded in the view and all virtual machines on that ESX host will be shown, along with some interesting details: OS type, state and associated datastore. The associated datastore is the name of the datastore as it is shown on the ESX server. VNXe pulls all this data from vCenter using the credentials provided earlier.

Even more details of an individual virtual machine can be viewed by selecting the VM and clicking Details.
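VNXe does this inventory pull via the vSphere API. Purely as an illustration of the same kind of query (this is not the VNXe’s own code, and the vCenter address and credentials below are placeholders), a minimal pyVmomi sketch that lists hosts and their VMs could look like this:

    # List ESX(i) hosts and their VMs from vCenter - roughly the kind of inventory
    # data the VNXe shows under Virtualization Hosts. Credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def list_hosts_and_vms(vcenter, user, password):
        ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
        si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                print("Host:", host.name)
                for vm in host.vm:
                    print("  VM:", vm.name, "-", vm.runtime.powerState)
        finally:
            Disconnect(si)

    list_hosts_and_vms("vcenter.example.local", "administrator", "password")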

Conclusions

Unisphere is really made to be easy and simple to use. Everything can be found easily from the menus and sub-menus. The icons are big, but they are not just icons; there is also a subject line and an explanation. If “Shared Folder Server Settings” were the only info given, its real meaning might not be clear to everyone, but with the explanation it is very understandable:

I have a small criticism about the first page of the iSCSI and NFS server settings. I’m wondering why the advanced settings are hidden under the “Show advanced” link. The window is already big enough to have those settings shown by default. OK, the page looks cleaner without the settings shown, but it would really be more user friendly if they were not hidden. The first time I went through the settings I noticed a VLAN setting appeared on the summary page, but I couldn’t remember seeing where to actually set it. So I went back to the first page and discovered the “Show advanced” link.

In the next part I will go through the software update procedure and look into the issues that have been fixed.

