
VNXe OE 2.4 and 2.5″ form factor disks


Once again I had a chance to play around with some shiny new hardware. And once again the hardware was a VNXe 3300, but this time it was something that I hadn’t seen before: the 2.5” form factor with 46 x 600GB 10k disks. If you have read about the new RAID configurations in OE 2.4.0 you might figure out what kind of configuration I have in mind with this HW.

In this post I will go through some of the new features introduced in VNXe OE 2.4.0, do some configuration comparisons between the 3.5” and 2.5” form factors and also between VNXe and VNX. Of course I had to do some performance testing as well with the new RAID configurations, so I will introduce the results later in this post.

VNXe OE 2.4.0.20932 release notes

Customizable Dashboard

Along with the new OE came the ability to customize the UI dashboard. The look of the Unisphere UI on a new or upgraded VNXe is now similar to Unisphere Remote. You can customize the dashboard, create new tabs and add the desired view blocks to the tabs.

[Image: VNXe dashboard]

Jobs

Some operations are now run as background jobs, so you don’t have to wait for the operation to finish. The steps of each operation are also shown in more detail on the jobs page. The number of active jobs is shown next to the alerts on the status bar, depending on what page you are on.

[Image: Jobs page]

New RAID configurations

Now this is one of the enhancements that I’ve been waiting for, because VNXe can only utilize four RAID groups in a pool. With the previous OE this meant that a datastore in a 6+1 RAID 5 pool could only utilize 28 disks. With the new 10+1 RAID 5 pool structure, datastores can utilize as many as 44 disks. This also means increased max IOPS per datastore: with 3.5” form factor 15k disks the RAID 5 pool max IOPS increases from ~4900 to ~7700, and with 2.5” form factor 10k disks it increases from ~3500 to ~5500. IOPS are not the only thing to look at, though: the size of the pool matters too, and so does the rack space that the VNXe will use. While I was sizing the last VNXe that we ordered I made this comparison chart to compare pool size, IOPS and rack space with different disk form factors in VNX and VNXe.

[Image: pool size, IOPS and rack space comparison chart]

An interesting setup is the VNXe 3150 with 2.5” form factor disks: 21TB and 5500 IOPS packed in 4U of rack space. A VNXe 3300 with the same specs would take 5U and a VNX5300 would take 6U. Of course the SP performance is a bit different between these arrays, but so is the price.
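
The max IOPS figures in this comparison are simple back-of-envelope math: four RAID groups per pool, times disks per RAID group, times a rule-of-thumb per-disk rating. Here is a minimal sketch of the calculation, assuming ~175 IOPS per 15k disk and ~125 IOPS per 10k disk (my assumed ratings, not official numbers):

# Back-of-envelope pool IOPS: RAID groups per pool x disks per RG x per-disk IOPS.
function Get-PoolMaxIops($RaidGroups, $DisksPerRg, $IopsPerDisk) {
    $RaidGroups * $DisksPerRg * $IopsPerDisk
}

Get-PoolMaxIops 4 7 175    # four 6+1 RGs, 15k disks:  ~4900 IOPS
Get-PoolMaxIops 4 11 175   # four 10+1 RGs, 15k disks: ~7700 IOPS
Get-PoolMaxIops 4 7 125    # four 6+1 RGs, 10k disks:  ~3500 IOPS
Get-PoolMaxIops 4 11 125   # four 10+1 RGs, 10k disks: ~5500 IOPS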

Performance

I’ve already posted some performance test results from the VNXe 3100 and 3300, so I added those results to the charts for comparison. I’ve also run some tests on a VNX5300 that I haven’t posted yet; those results are on the charts as well.

[Charts: average MBps, average IOPS and average latency]

There is a significant difference in max throughput between the 1G and 10G modules on the VNXe. Then again, the real-life test results are quite similar.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


VNXe 3100 performance


Earlier this year I installed a VNXe 3100 and have now done some testing with it. I have already covered VNXe 3300 performance in a couple of my previous posts: Hands-on with VNXe 3300 Part 6: Performance and VNXe 3300 performance follow up (EFDs and RR settings). The 3100 has fewer disks than the 3300, less memory and only two I/O ports, so I wanted to see how the 3100 would perform compared to the 3300. I ran the same Iometer tests that I ran on the 3300, and in this post I will compare those results to the ones that I introduced in the previous posts. The environment is a bit different, so I will quickly describe it before presenting the results.

Test environment

  • EMC VNXe 3100 (21 x 600GB SAS drives)
  • Dell PE 2900 server
  • HP ProCurve 2510G
  • Two 1Gb iSCSI NICs
  • ESXi 4.1U1 / ESXi 5.0
  • Virtual Win 2008 R2 (1vCPU and 4GB memory)

Test results

I ran the tests on both ESXi 4.1 and ESXi 5.0, but the iSCSI results were very similar so I used the average of both. The NFS results had some differences, so I will present the results for 4 and 5 separately. I also ran the tests with and without LAG, and with the default RR settings changed. The VNXe was configured with one 20-disk pool with a 100GB datastore provisioned to the ESXi servers. The tests were run on a 20GB virtual disk on the 100GB datastore.

[update] My main focus in these tests has been on iSCSI because that is what we are planning to use. I only ran quick tests with the generic NFS and not with the one that is configured under Storage – VMware. After Paul’s comment I ran a couple of tests on the “VMware NFS” and added “ESXi 4 VMware NFS” to the test results:

Conclusions

With default settings the performance of the 3300 and the 3100 is fairly similar. The 3300 gives better throughput when the IO operation limit is changed from the default 1000 to 1; the differences in the physical configurations might also have an effect on this. With random workload the performance is similar even when the default settings are changed. Of course the real difference would be seen when both were under heavy load: during the tests only the test server was running on the VNXes.

For NFS I didn’t have comparable results from the 3300. I ran different tests on the 3300 and those results weren’t good either. The odd thing is that ESXi 4 and ESXi 5 gave quite different results when running the tests on NFS.

Looking at these and the previous results I would still stick with iSCSI on VNXe. As for the performance of the 3100, it is surprisingly close to that of its bigger sibling, the 3300.

[update] Looking at the new test results, NFS performs as well as iSCSI. With the modified RR settings iSCSI gets better max throughput, but then again with random workloads NFS seems to perform better. So the type of NFS storage provisioned to the ESX hosts makes a difference. Now comes the question: NFS or iSCSI? Performance-wise either one is a good choice. But which one suits your environment better?

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Ask The Expert wrap up


It has now been almost two weeks since the EMC Ask the Expert: VNXe front-end Networks with VMware event ended. We had a couple of meetings beforehand where we discussed and planned the event, but we really didn’t know what to expect from it. Matt and I were committed to answering questions for the two weeks, so it was a bit different from a normal community thread. Looking at the number of views the discussion got, we now know that it was a success: during the two weeks that the event was active the page had more than 2300 views, and several people asked us questions and opinions. As a summary, Matt and I wrote a document that covers the main concerns discussed during the event. In the document we look into VNXe HA configurations and link aggregation, and also give a quick overview of the ESX side configurations:

Ask the Expert Wrap ups – for the community, by the community

I was really excited when I was asked to participate in a great event like this. Thank you Mark, Matt and Sean, it was great working with you guys!


Changing MTU on VNXe disconnects datastores from ESXi


While testing the VNXe 3100 (OE 2.1.3.16008) I found a problem when changing the MTU setting for a link aggregate. With a specific combination of configurations, changing the MTU causes the ESXi host (4.1.0, 502767) to lose all iSCSI datastores, and even after changing the settings back the datastores are still not visible on the host. The VNXe also can’t provision new datastores to ESXi while this problem is occurring. There are a couple of workarounds for this, but no official fix is available yet.

How did I find it?

After the initial configuration I created a link aggregate from two ports, set the MTU to 9000 and created one iSCSI server on SP A with two IP addresses. I then configured ESXi to use MTU 9000 as well. Datastore creation on the VNXe side went through successfully, but on the ESXi side I could see an error that the VMFS volume couldn’t be created.
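
For reference, the ESXi side of the MTU change can be done along these lines. This is a minimal PowerCLI sketch with hypothetical host, vSwitch and VMkernel names; note that on ESXi 4.1 the VMkernel port may need to be deleted and recreated for the new MTU to take effect:

# Minimal sketch, hypothetical names: enable jumbo frames on the iSCSI
# vSwitch and its VMkernel port on a single host.
Connect-VIServer -Server esx01.example.local
Get-VirtualSwitch -Name vSwitch1 | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMKernel -Name vmk1 | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false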

I could see the LUN under the iSCSI adapter, but manually creating a VMFS datastore also failed. I then realized that I hadn’t configured jumbo frames on the switch and decided to change the ESXi and VNXe MTUs back to 1500. After I changed the VNXe MTU the LUN disappeared from the ESXi host. Manually changing the datastore access settings on the VNXe didn’t help either; I just couldn’t get the ESXi host to see the LUN anymore. I then tried to provision a new datastore to the host but got this error:

Ok, so I deleted the datastore and the iSCSI server, recreated the iSCSI server and provisioned a new datastore for the ESXi host without any problems. I had a suspicion that the MTU change had caused the problem, so I tried it again. I changed the link aggregate MTU on the VNXe from 1500 to 9000, and after that was done the datastore disappeared from the ESXi host. Changing the MTU back to 1500 didn’t help; the datastore and LUN were not visible on the host. Creating a new datastore also gave the same error as before: the datastore was created on the VNXe but was not accessible from ESXi. Deleting and recreating the datastores and iSCSI servers resolved the issue again.

What is the cause of this problem?

So it seemed that the MTU change was causing the problem. I started testing different scenarios and found out that the problem was the combination of the MTU change and the iSCSI server having two IP addresses. Here are some of the scenarios that I tested (sorry about the rough grammar, I tried to keep the descriptions short):

Link aggregation MTU 1500 and iSCSI server with two IP addresses. Provisioned storage works on ESXi. Change the VNXe link aggregation MTU to 9000 and ESXi loses the connection to the datastore. Change the VNXe MTU back to 1500 and ESXi still can’t see the datastore. Trying to provision a new datastore to ESXi results in an error. Removing the second IP address doesn’t resolve the problem.

Link aggregation MTU 1500 and iSCSI server with two IP addresses. Provisioned storage works on ESXi. Remove the second IP from the iSCSI server and change the MTU to 9000. The datastore is still visible and accessible from the ESXi side. Change the MTU back to 1500 and the datastore is still visible and accessible. Datastore provisioning to ESXi is successful. After adding a second IP address to the iSCSI server, ESXi loses the connection to the datastore. Provisioning a new datastore to ESXi results in an error. Removing the second IP address again doesn’t resolve the problem.

Link aggregation MTU 1500 and iSCSI server with one IP address. Provisioned storage works on ESXi. Change the MTU to 9000. The datastore is still visible and accessible from the ESXi side. Change the MTU back to 1500 and the datastore is still visible and accessible. Datastore provisioning to ESXi is successful. After adding a second IP address to the iSCSI server, ESXi loses the connection to the datastore. Provisioning a new datastore to ESXi results in an error. Removing the second IP doesn’t resolve the problem.

Link aggregation MTU 1500 and two iSCSI servers on one SP, both configured with one IP. One datastore on each iSCSI server (there is also an issue getting the datastore on the second iSCSI server provisioned, see my previous post). Add a second IP to the first iSCSI server and both datastores are still accessible from ESXi. When changing the MTU to 9000, ESXi loses the connection to both datastores. After changing the MTU back to 1500, both datastores are still not visible on ESXi, and trying to provision new storage gives the same error as previously.

I also tested different combinations with iSCSI servers on different SPs: if the SP A iSCSI server has two IP addresses, the SP B iSCSI server has only one IP and the MTU is changed, the datastores on the SP B iSCSI server are not affected.

How to fix this?

Currently there is no official fix for this. I have reported the problem to EMC support, demonstrated the issue to an EMC support technician and uploaded all the logs, so they are working on finding the root cause.

Apparently when an iSCSI server has two IP addresses and the MTU is changed, the iSCSI server goes into some kind of “lockdown” mode and doesn’t allow any connections to be initiated. Like I already described, the VNXe can be returned to an operational state by removing all datastores and iSCSI servers and recreating them. Of course this is not an option when there is production data on the datastores.

An EMC support technician showed me a quicker and less radical workaround to get the array back to an operational state: restarting the iSCSI service on the VNXe. CAUTION: Restarting the iSCSI service will disconnect all provisioned datastores from the hosts. The connections to the datastores will be re-established after the iSCSI service has restarted, but this will cause all running VMs to crash.

The easiest way to restart the iSCSI service is to enable the iSNS server in the iSCSI server settings, give it an IP address and apply the changes. After the changes are applied the iSNS server can be disabled again. This triggers the iSCSI service to restart, and all datastores that were disconnected are again visible and usable on ESXi.

Conclusions

After this finding I would suggest not configuring iSCSI servers with two IP addresses. If an MTU change can do this much damage, what about other changes?

If you have iSCSI servers with two IP addresses I would advise against changing the MTU, even during a planned service break. If for some reason the change is mandatory, contact EMC support before doing it. And if you have arrays affected by this issue, I would encourage you to contact EMC support before trying to restart the iSCSI service.

Once again I have to give credit to EMC support. They have some great people working there.


VNXe 3300 performance follow up (EFDs and RR settings)


In my previous post about VNXe 3300 performance I introduced results from the performance tests I had done with the VNXe 3300. I will use those results as a comparison for the new tests that I ran recently. In this post I will compare the performance with different Round Robin policy settings. I also had a chance to test the performance of EFD disks on the VNXe.

Round Robin settings

In the previous post all the tests were run with the default RR settings, which means that ESX sends 1000 commands through one path before changing the path. I observed that with the default RR settings I was only getting the bandwidth of one link on the four-port LACP trunk. I got feedback from Ken advising me to change the RR IO operation limit from the default 1000 to 1 to get two links’ worth of bandwidth from the VNXe. So I wanted to test what kind of an effect this change would have on performance.

Arnim van Lieshout has a really good post about configuring RR using PowerCLI, and I used his examples for changing the IO operation limit from 1000 to 1. If you are not confident running the full PowerCLI scripts Arnim introduced in his post, here is how the RR settings for an individual device can be changed using the GUI and a couple of simple PowerCLI commands:

1. Change the datastore path selection policy to RR (from vSphere client – select host – Configuration – Storage Adapters – iSCSI sw adapter – right click the device and select Manage Paths – for Path Selection select Round Robin (VMware) and click Change)

2. Open PowerCLI and connect to the server

Connect-VIServer -Server [servername]

3. Retrieve esxcli instance

$esxcli = Get-EsxCli

4. Change the device IO Operation Limit to 1 and set the Limit Type to Iops. [deviceidentifier] can be found in the vSphere client’s iSCSI sw adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

$esxcli.nmp.roundrobin.setconfig($null,"[deviceidentifier]",1,"iops",$null)

5. Check that the changes were completed.

$esxcli.nmp.roundrobin.getconfig("[deviceidentifier]")
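
If you want to apply the setting to every Round Robin device on a host instead of one device at a time, something along these lines should work. This is a minimal sketch built on the same esxcli namespace as above; the host name is a hypothetical example, so test it on a non-production host first:

# Minimal sketch: set the IO operation limit to 1 for every naa.* device
# that already uses the Round Robin path selection policy.
Connect-VIServer -Server esx01.example.local
$esxcli = Get-EsxCli
foreach ($dev in $esxcli.nmp.device.list()) {
    if ($dev.Device -like "naa.*" -and $dev.PathSelectionPolicy -eq "VMW_PSP_RR") {
        $esxcli.nmp.roundrobin.setconfig($null, $dev.Device, 1, "iops", $null)
    }
}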

Results 1

For these tests I used the same environment and Iometer settings that I described in my Hands-on with VNXe 3300 Part 6: Performance post.

Results 2

For these tests I used the same environment, except that instead of a virtual Win 2003 I used a virtual Win 2008 (1vCPU and 4GB memory), and the following Iometer settings (I picked these settings up from the VMware Community post Open unofficial storage performance thread):

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 65% read / 35% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 70% read / 30% write distribution
  • 5 minute run time
[Updated 11/29/11] After I had published this post, Andy Banta gave me a hint on Twitter: “You might squeeze more by using a number like 5 to 10, skipping some of the path change cost.” So I ran a couple more tests, changing the IO operation limit between 5 and 10. With the 28-disk pool there was no big difference between the values 1 and 5-10. With the EFDs the magic number seemed to be 6, and with that I managed to get 16 MBps and 1100 IOPS more out of the disks with specific workloads. I added the new EFD results to the graphs.
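
The change itself is the same setconfig call as in step 4 above, just with a different limit value:

# Same call as in step 4, with the IO operation limit set to 6
# (the value that worked best for the EFD RG in these tests).
$esxcli.nmp.roundrobin.setconfig($null, "[deviceidentifier]", 6, "iops", $null)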


Conclusions

Changing the RR IO operation limit from the default 1000 to 1 really makes a difference on the VNXe. With random workload there is not much difference between the two settings, but with sequential workload the difference is significant: sequential write IOPS and throughput are more than double with certain block sizes when using the 1 IO setting. If you have ESX hosts connected to a VNXe with an LACP trunk, I would recommend changing the RR IO operation limit to 5-10 (see the update above). Like I already mentioned, Arnim has a really good post about configuring RR settings using PowerCLI. Another good post about multipathing is A “Multivendor Post” on using iSCSI with VMware vSphere by Chad Sakac.

Looking at the results it is obvious that the EFD disks perform much better than the SAS disks. With sequential workload the 28-disk SAS pool performs about the same as the 5-disk EFD RG, but with random workload the EFDs perform about two times better than the SAS pool. There was no other load on the disks while these tests were run, so under additional load I would expect the EFDs to perform much better with sequential load as well. Better performance doesn’t come without a bigger price tag: EFD disks are still over 20 times more expensive per TB than SAS disks, but then again SAS disks are about 3 times more expensive per IO than EFD disks.

Now if only EFDs could be used as cache on VNXe.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Hands-on with VNXe 3300 Part 6: Performance


Now that the VNXe is installed and configured and some storage has been provisioned to the ESXi hosts, it is time to look at the performance. Like I mentioned in the first post, I had already gathered some test results from a CX4-240 using Iometer, and I wanted to run similar tests on the VNXe so that the results would be comparable.

Hands-on with VNXe 3300 series:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Software update
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

Test environment CX4-240

  • EMC CX4-240
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with a total of four paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Test environment VNXe 3300

  • EMC VNXe 3300
  • Dell PE M710HD Blade server
  • Two 10Gb iSCSI NICs with total of two paths between storage and ESXi. Round robin path selection policy enabled for each LUN with two active I/O paths (see Trunk restrictions and Load balancing)
  • Jumbo Frames enabled
  • ESXi 4.1U1
  • Virtual Win 2003 SE SP2 (1vCPU and 2GB memory)

Iometer Configuration

I used the Iometer setup described in VMware’s Recommendations for Aligning VMFS Partitions document (page 7).

Disk configuration

I had to shorten the explanations on the charts so here are the definitions:

  • CX4 FC 15D
    • 15 15k FC Disk RAID5 Pool on CX4-240 connected with iSCSI
  • CX4 SATA 25D
    • 25 7.2k SATA Disk RAID5 Pool on CX4-240 connected with iSCSI
  • VNXe 21D 2.0.3
    • 21 15k SAS Disk RAID 5 (3×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.0.3
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.0.3
  • VNXe 28D 2.1.0
    • 28 15k SAS Disk RAID 5 (4×6+1) Pool on VNXe 3300 connected with iSCSI. VNXe Software version 2.1.0
  • VNXe RG 2.1.0
    • 7 15k SAS RAID5 (6+1) RG on VNXe connected with iSCSI. VNXe Software version 2.1.0
  • VNXe 28D NFS
    • 28 15k SAS RAID 5 (4×6+1) Pool on VNXe 3300 connected with NFS. VNXe Software version 2.1.0

A 100GB thick LUN was created on each pool and RG, and a 20GB virtual disk was stored on it. This 20GB virtual disk was presented to the virtual Windows server that was used to run the tests. The partition on the disk was created using the diskpart command 'create partition primary align=1024' and was formatted with a 32K allocation unit size.
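
For reference, the in-guest preparation looks roughly like this (a minimal sketch; the disk number and drive letter are hypothetical examples):

# diskpart's align= is in KB, so align=1024 gives a 1MB-aligned partition.
@"
select disk 1
create partition primary align=1024
assign letter=T
"@ | diskpart

# Format with a 32K allocation unit size.
format T: /FS:NTFS /A:32K /Q /Y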

Trunk restrictions

Before I go through the results I want to address a limitation of the trunk between the VNXe and the 10Gb switch that it is connected to: even though there is a 4Gb (4x1Gb) trunk between the storage and the switch, the maximum throughput is only the throughput of one physical port.

While I was running the tests I had an SSH connection open to the VNXe and ran the netstat -i -c command to see what was going on with the trunk and the individual ports. The first screen capture was taken while the 8k sequential read test was running. You can see that all the traffic is going through one port:

The second screen capture was taken while the VNXe was in production and several virtual machines were accessing the disk. In this case the load is balanced randomly between the physical ports:

Load balancing

VNXe 3300 is an active/active array but doesn’t support ALUA. This means that a LUN can only be accessed through one SP. One iSCSI/NFS server can only have one IP address, and this IP can only be tied to one port or trunk. Also, a LUN can only be served by one iSCSI/NFS server. So there will be only one path from the switch to the VNXe. The round robin path selection policy can be enabled on the ESX side, but this will only help to balance the load between the ESX NICs. Even without the trunk, round robin can’t be used to balance the load between the four VNXe ports.
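
This is easy to verify from the ESX side. A small PowerCLI sketch along these lines (the host name is a hypothetical example) lists how many paths each LUN actually has:

# Minimal sketch: list path counts per LUN as seen by one ESX host.
Connect-VIServer -Server esx01.example.local
foreach ($lun in Get-ScsiLun -VmHost (Get-VMHost) -LunType disk) {
    $paths = @(Get-ScsiLunPath -ScsiLun $lun)
    $active = @($paths | Where-Object { $_.State -eq "Active" }).Count
    "{0}: {1} paths, {2} active" -f $lun.CanonicalName, $paths.Count, $active
}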

Test results

Each Iometer test was run twice and the results are the average of those two runs. If the results were not similar enough (i.e. several hundred IOPS difference), a third test was run and the results are the average of all three runs.

Same as previous but without NFS results:

Average wait time

Conclusions

The first thing that caught my eye was the difference between the VNXe 28-disk pool and the 7-disk RG in the random write test. A quote from my last post about the pool structure:

When LUN is created it will be placed on the RAID group (RG) that is the least utilized from a capacity point of view. If the LUN created is larger than the free space on an individual RG the LUN will be then extended across multiple RGs but there is no striping involved.

The tests were run on a 100GB LUN, so it should fit in one RG if the information that I got was correct. Comparing the random write results from the pool and from the single RG, it seems that even smaller LUNs are divided across multiple RGs.

Another interesting detail is the difference between software versions 2.0.3 and 2.1.0. Looking at these results it is obvious that the software version has a big effect on performance.

NFS storage performance with random write was really bad, but with the 1k sequential read it surprised me by giving out 30000 IOPS. Based on these tests I would stick with iSCSI and maybe look at NFS again after the next software version.

Overall the VNXe is performing very well compared to the CX. With this configuration the VNXe is hitting the limits of one physical port. This could be fixed by adding 10Gb I/O modules; it would be nice to run the same tests with the 10Gb I/O modules.

We are coming to the end of my hands-on series, and I’ll be wrapping it up in the next post.

[Update 2/17/2012] Updated NFS performance results with VNXe 3100 and OE 2.1.3.16008: VNXe 3100 performance

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Hands-on with VNXe 3300 Part 2: iSCSI and NFS servers


This is the second part of the series about my hands-on experience with the EMC VNXe 3300. In the first part I described the initial setup of the VNXe and also the challenges that I had during the setup. Before ESXi servers and virtual machines can access the storage there are still a couple of things that need to be done. In this post I will go through setting up network port aggregation, an iSCSI server and an NFS server, and also how to connect ESXi hosts to the VNXe.

Hands-on with VNXe 3300 series [list edited 9/23/11]:

  1. Initial setup and configuration
  2. iSCSI and NFS servers
  3. Upgrade to latest software version: new features and fixed “known issues”
  4. Storage pools
  5. Storage provisioning
  6. VNXe performance
  7. Wrap up

NIC Aggregation

This VNXe is furnished with eight (four per SP) 1Gb NICs and is connected to ESXi hosts that are using 10GbE NICs for iSCSI, so all four NICs in each SP will be aggregated for maximum throughput. From each SP these four aggregated ports are connected to separate switches where a trunk is configured. These switches are connected to the ESXi hosts with 10Gb uplinks.

NIC aggregation can be configured from Settings – More configurations – Advanced Configuration. The default MTU size is 1500, so if jumbo frames are enabled it needs to be changed to 9000. This needs to be done only for the first port, because the other ports are aggregated to it.

Port aggregation is enabled by selecting “Aggregate with eth2” under eth3 and hitting “Apply Changes”. After the same settings are applied to eth4 and eth5, the aggregation is ready.

These settings need to be done only once; Unisphere will automatically configure both SPs with the same settings. This also means that the SPs can’t have different aggregation settings.

iSCSI Server configuration

An iSCSI server can be configured from Settings – iSCSI Server Settings. When creating the first iSCSI server the default storage processor will be SP A. When SP A already has an iSCSI server, SP B is automatically selected as the storage processor when creating the second server. The storage processor, Ethernet port and VLAN ID can also be changed under the Advanced settings.

NFS Server configuration

An NFS server can be configured from Settings – Shared Folder Server Settings. This is very similar to configuring an iSCSI server; there is only one additional step.

On “Shared Folder Types” page NFS and/or CIFS is selected. I was planning to do some testing only with NFS so I chose that.

Connecting ESXi hosts to VNXe

This VNXe will be connected to an existing vCenter/ESXi environment, so all the iSCSI settings are already in place on the hosts. The VNXe will automatically discover all the ESX/ESXi hosts from vCenter; only the vCenter name or IP address and the appropriate credentials are needed. This makes things a lot easier: no need to manually register tens or hundreds of hosts. VMware hosts can be added to the VNXe from Hosts – VMware.
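
In my case the host-side iSCSI configuration already existed, but if you were starting from scratch, pointing every host at the VNXe iSCSI server could look something like this. A minimal PowerCLI sketch; the vCenter name and target IP are hypothetical examples:

# Add the VNXe iSCSI server as a send target on every host and rescan.
Connect-VIServer -Server vcenter.example.local
foreach ($vmhost in Get-VMHost) {
    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi | Where-Object { $_.Model -match "Software" }
    New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.50.10" -Type Send
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
}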

When the discovery is done, all the ESX hosts are shown under Virtualization Hosts. The total number of datastores is also shown, even if those datastores are not on this particular VNXe.

Hosts can be expanded in the view to show all the virtual machines on that ESX host, along with some interesting details: OS type, state and associated datastore. Associated datastore is the name of the datastore as it is shown on the ESX server. The VNXe pulls all this data from vCenter using the credentials provided earlier.

Even more details of an individual virtual machine can be viewed by selecting the VM and clicking Details.

Conclusions

Unisphere is really made to be easy and simple to use. Everything can be found easily in the menus and sub-menus. The icons are big, but they are not just icons: there is also a subject line and an explanation. If “Shared Folder Server Settings” were the only info given, its real meaning might not be clear to everyone, but with the explanation it is very understandable:

I have one small criticism about the first page of the iSCSI and NFS server settings: I’m wondering why the advanced settings are hidden under the “Show advanced” link. The window is already big enough to have those settings shown by default. Ok, the page looks cleaner without the settings shown, but it would really be more user friendly if they were not hidden. The first time I went through the settings I noticed that a VLAN setting appeared on the summary page, but I couldn’t remember seeing where to actually set it. So I went back to the first page and discovered the “Show advanced” link.

In the next part I will go through the software update procedure and look into the issues that have been fixed.

