VNXe OE 2.4 and 2.5″ form factor disks


Once again I had a chance to play around with some shiny new hardware. And once again the hardware was a VNXe 3300, but this time with something that I hadn’t seen before: the 2.5” form factor, with 46 600GB 10k disks. If you have read about the new RAID configurations in OE 2.4.0 you might figure out what kind of configuration I have in mind for this hardware.

In this post I will go through some of the new features introduced in VNXe OE 2.4.0, do some configuration comparisons between the 3.5” and 2.5” form factors and also between VNXe and VNX. Of course I had to do some performance testing with the new RAID configurations as well, so I will present those results later in the post.

VNXe OE 2.4.0.20932 release notes

Customizable Dashboard

Along with the new OE came the ability to customize the Unisphere dashboard. The look of the Unisphere UI on a new or upgraded VNXe is now similar to Unisphere Remote. You can customize the dashboard, create new tabs and add the desired view blocks to those tabs.

[Image: VNXe dashboard]

Jobs

Some operations are now run as background jobs, so you don’t have to wait for an operation to finish. The steps of each operation are also shown in more detail on the Jobs page. The number of active jobs is shown next to the alerts on the status bar, depending on what page you are on.

[Image: Jobs page]

New RAID configurations

Now this is one of the enhancements that I’ve been waiting for, because a VNXe pool can only utilize four RAID groups. With the previous OE this meant that a datastore in a 6+1 RAID 5 pool could only utilize 28 disks. With the new 10+1 RAID 5 pool structure, datastores can utilize as many as 44 disks. This also means increased max IOPS per datastore: with 3.5” 15k disks the RAID 5 pool max IOPS goes from ~4900 to ~7700, and with 2.5” 10k disks from ~3500 to ~5500. IOPS is not the only thing to look at, though. The size of the pool matters too, and so does the rack space that the VNXe will use. While I was sizing the last VNXe that we ordered I made this comparison chart to compare pool size, IOPS and rack space with different disk form factors in VNX and VNXe.

[Image: pool size, IOPS and rack space comparison chart]

An interesting setup is the VNXe 3150 with 2.5” form factor disks: 21TB and 5500 IOPS packed into 4U of rack space. A VNXe 3300 with the same specs would take 5U and a VNX5300 would take 6U. Of course the SP performance is a bit different between these arrays, but so is the price.
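
The IOPS figures above follow from a simple per-disk estimate. Here is a rough sketch of that math in Python; the per-disk values (about 175 IOPS for a 15k drive and 125 IOPS for a 10k drive) are my own ballpark assumptions, not official EMC numbers.

    # Rough pool max IOPS: disks per RAID group x RAID groups per pool x IOPS per disk.
    # Per-disk figures are ballpark assumptions, not vendor-published numbers.
    PER_DISK_IOPS = {"15k": 175, "10k": 125}

    def pool_max_iops(disks_per_rg, disk_type, raid_groups=4):
        """Estimate max IOPS for a pool built from identical RAID 5 groups."""
        return disks_per_rg * raid_groups * PER_DISK_IOPS[disk_type]

    # Old 6+1 layout (7 disks per group) vs. new 10+1 layout (11 disks per group):
    print(pool_max_iops(7, "15k"), pool_max_iops(11, "15k"))   # 4900 7700
    print(pool_max_iops(7, "10k"), pool_max_iops(11, "10k"))   # 3500 5500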

Performance

I’ve already posted performance test results from the VNXe 3100 and 3300, so I added those results to the charts for comparison. I’ve also run some tests on a VNX 5300 that I haven’t posted yet, and added those results to the charts as well.

[Charts: average MB/s, average IOPS and average latency for the tested arrays]

There is a significant difference in the max throughput between the 1G and 10G modules on the VNXe. Then again, the real-life test results are quite similar.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


VNXe 3100 performance


Earlier this year I installed a VNXe 3100 and have now done some testing with it. I have already covered VNXe 3300 performance in a couple of my previous posts: Hands-on with VNXe 3300 Part 6: Performance and VNXe 3300 performance follow up (EFDs and RR settings). The 3100 has fewer disks than the 3300, less memory and only two I/O ports, so I wanted to see how the 3100 would perform compared to the 3300. I ran the same Iometer tests that I ran on the 3300, and in this post I will compare those results to the ones presented in the previous posts. The environment is a bit different, so I will quickly describe it before presenting the results.

Test environment

  • EMC VNXe 3100 (21 600GB SAS Drives)
  • Dell PE 2900 server
  • HP ProCurve 2510G
  • Two 1Gb iSCSI NICs
  • ESXi 4.1U1 / ESXi 5.0
  • Virtual Win 2008 R2 (1vCPU and 4GB memory)

Test results

I ran the tests on both ESXi 4.1 and ESXi 5.0, but the iSCSI results were very similar so I used the average of both. The NFS results had some differences, so I will present the results for 4 and 5 separately. I also ran the tests with and without LAG, and with the default RR settings changed. The VNXe was configured with one 20-disk pool with a 100GB datastore provisioned to the ESXi servers. The tests were run on a 20GB virtual disk on that 100GB datastore.

[update] My main focus in these tests has been on iSCSI, because that is what we are planning to use. I only ran quick tests with the generic NFS and not with the one that is configured under Storage – VMware. After Paul’s comment I ran a couple of tests on the “VMware NFS” and added “ESXi 4 VMware NFS” to the test results.

Conclusions

With default settings the performance of the 3300 and the 3100 is fairly similar. The 3300 gives better throughput when the IO operation limit is changed from the default 1000 to 1. The differences in the physical configurations might also have an effect on this. With a random workload the performance is similar even when the default settings are changed. Of course the real difference would be seen when both were under heavy load; during the tests only the test server was running on the VNXes.
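
For reference, the IO operation limit is changed per device on the ESXi host. Below is a minimal sketch of how this can be done, run on the host itself with a placeholder device ID; the exact esxcli namespace differs between ESXi 4.1 and 5.x, so treat it as a starting point rather than a recipe.

    import subprocess

    # Placeholder device identifier -- use the naa ID of the VNXe LUN on your host.
    DEVICE = "naa.6006048c0000000000000000example"

    # ESXi 5.x: set the Round Robin IO operation limit from the default 1000 to 1.
    subprocess.check_call([
        "esxcli", "storage", "nmp", "psp", "roundrobin", "deviceconfig", "set",
        "--device", DEVICE, "--iops", "1", "--type", "iops",
    ])

    # ESXi 4.1 uses a different namespace, roughly:
    #   esxcli nmp roundrobin setconfig --device <naa.id> --iops 1 --type iops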

For NFS I didn’t have comparable results from the 3300. I ran different tests on the 3300, and those results weren’t good either. The odd thing is that ESXi 4 and ESXi 5 gave quite different results when running the tests on NFS.

Looking at these and the previous results I would still stick with iSCSI on the VNXe. As for the performance of the 3100, it is surprisingly close to its bigger sibling, the 3300.

[update] Looking at the new test results, NFS performs as well as iSCSI. With the modified RR settings iSCSI gets better max throughput, but with random workloads NFS seems to perform better. So the type of NFS storage provisioned to the ESX hosts makes a difference. Now comes the question: NFS or iSCSI? Performance-wise either one is a good choice, but which one suits your environment better?

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Ask The Expert wrap up


It has now been almost two weeks since the EMC Ask the Expert: VNXe front-end Networks with VMware event ended. We had a couple of meetings beforehand where we discussed and planned the event, but we really didn’t know what to expect from it. Matt and I were committed to answering questions during the two weeks, so it was a bit different from a normal community thread. Looking at the number of views the discussion got, we know that it was a success: during the two weeks that the event was active, the page got more than 2300 views, and several people asked us questions and opinions. As a summary, Matt and I wrote a document that covers the main concerns discussed during the event. In this document we look into VNXe HA configurations and link aggregation, and also give a quick overview of the ESX side configurations:

Ask the Expert Wrap ups – for the community, by the community

I was really excited when I was asked to participate in a great event like this. Thank you Mark, Matt and Sean, it was great working with you guys!


Hands-on with VNXe 3300 Part 2: iSCSI and NFS servers


This is the second part in the series about my hands-on experience with the EMC VNXe 3300. In the first part I described the initial setup of the VNXe and also the challenges that I had during the setup. Before ESXi servers and virtual machines can access the storage, there are still a couple of things that need to be done. In this post I will go through setting up network port aggregation, an iSCSI server and an NFS server, and also how to connect ESXi hosts to the VNXe.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

NIC Aggregation

This VNXe is furnished with eight (four per SP) 1Gb NICs and is connected to ESXi hosts that use 10GbE NICs for iSCSI, so all four NICs in each SP will be aggregated for maximum throughput. From each SP these four aggregated ports are connected to separate switches where a trunk is configured, and the switches are connected to the ESXi hosts with 10Gb uplinks.

NIC aggregation can be configured from Settings – More configurations – Advanced Configuration. The default MTU size is 1500, so if jumbo frames are enabled it needs to be changed to 9000. This needs to be done only for the first port, because the other ports are aggregated to it.

Port aggregation is enabled by selecting “Aggregate with eth2” under eth3 and hitting “Apply Changes”. After the same settings are also applied to eth4 and eth5, the aggregation is ready.

These settings need to be done only once; Unisphere will automatically configure both SPs with the same settings. This also means that the SPs can’t have different aggregation settings.
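
If jumbo frames are enabled it’s worth verifying that the 9000 MTU really works end to end before putting any load on the links. Below is a minimal check from an ESXi host, assuming the host’s own Python and vmkping are available and using a placeholder target IP (8972 bytes of payload plus IP/ICMP headers makes a 9000-byte frame):

    import subprocess

    TARGET = "192.168.10.50"  # placeholder: the VNXe iSCSI/NFS interface IP

    # -d sets the don't-fragment bit, -s 8972 fills the frame up to a 9000 MTU.
    subprocess.check_call(["vmkping", "-d", "-s", "8972", TARGET])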

iSCSI Server configuration

An iSCSI server can be configured from Settings – iSCSI Server Settings. When creating the first iSCSI server the default storage processor will be SP A. When SP A already has an iSCSI server, SP B is automatically selected as the storage processor for the second server. The storage processor, Ethernet port and VLAN ID can also be changed under the Advanced settings.

NFS Server configuration

An NFS server can be configured from Settings – Shared Folder Server Settings. This is very similar to configuring an iSCSI server; there is only one additional step.

On the “Shared Folder Types” page, NFS and/or CIFS is selected. I was only planning to do some testing with NFS, so I chose that.

Connecting ESXi hosts to VNXe

This VNXe will be connected to an existing vCenter/ESXi environment, so all the iSCSI settings are already in place on the hosts. The VNXe will automatically discover all the ESX/ESXi hosts from vCenter; only the vCenter name or IP address and the appropriate credentials are needed. This makes things a lot easier, with no need to manually register tens or hundreds of hosts. VMware hosts can be added to the VNXe from Hosts – VMware.

When the discovery is done, all ESX hosts will be shown under Virtualization Hosts. The total number of datastores is also shown, even if those datastores are not from this particular VNXe.

Hosts can be expanded in the view to show all the virtual machines on that ESX host, along with some interesting details: OS type, state and associated datastore. The associated datastore is the name of the datastore as it is shown on the ESX server. The VNXe pulls all this data from vCenter using the credentials provided earlier.
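
As an illustration of the kind of data the VNXe pulls, here is a minimal pyVmomi sketch that lists the same details (VM name, guest OS, power state and datastores) straight from vCenter. The address and credentials are placeholders, and this is only an approximation of what Unisphere does behind the scenes, not EMC’s actual implementation.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter address and credentials.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="readonly", pwd="secret", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Guest OS may be None if VMware Tools is not running in the VM.
        print(vm.name, vm.guest.guestFullName, vm.runtime.powerState,
              [ds.name for ds in vm.datastore])

    view.Destroy()
    Disconnect(si)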

Even more details of an individual virtual machine can be viewed by selecting the VM and clicking Details.

Conclusions

Unisphere is really made to be easy and simple to use. Everything can easily be found in the menus and submenus. The icons are big, but they are not just icons; there is also a subject line and an explanation. If “Shared Folder Server Settings” were the only information given, its real meaning might not be clear to everyone, but with the explanation it is very understandable.

I have one small criticism about the first page of the iSCSI and NFS server settings: I’m wondering why the advanced settings are hidden under the “Show advanced” link. The window is already big enough to show those settings by default. OK, the page looks cleaner without them, but it would really be more user friendly if they were not hidden. The first time I went through the settings I noticed that the VLAN setting appeared on the summary page, but I couldn’t remember seeing where to actually set it, so I went back to the first page and discovered the “Show advanced” link.

In the next part I will go through the software update procedure and look into the issues that have been fixed.

