Category Archives: ESXi

EMC World HOL sneak preview


Once again it’s time for EMC World: new product releases, breakout sessions, labs, networking, wandering around the show floor and of course some fun too. Breakout sessions will be recorded and can be accessed after the conference, but most of the hands-on labs are created just for EMC World. So take advantage of the chance to test and evaluate EMC products in an isolated environment without having to worry about messing anything up. It’s a really good opportunity to get hands-on experience and see how things really work.


Available labs

Here is the list of available labs. The bolded ones are the ones I’ll try to take; that adds up to about 11 hours of lab time.

  • LAB01 SRM Suite – Visualize, Analyze, Optimize
  • LAB02 VNX with AppSync Lab: Simple Management, Advanced Protection
  • LAB03 EMC Software Defined Storage (SDS)
  • LAB04 Atmos Cloud Storage: Mature, Robust and Ready to Rock
  • LAB05 EMC NetWorker Backup and Recovery for Next Generation Microsoft Environments
  • LAB06 Flexible and Efficient Backup and Recovery for Microsoft SQL Always-On Availability Groups using EMC NetWorker
  • LAB07 Easier and Faster VMware Backup and Recovery with EMC Avamar For the Storage Administrator
  • LAB08 Automated Backup and Recovery for Your Software Defined Data Center with EMC Avamar
  • LAB09 Taking Backup and Archiving To New Heights with EMC SourceOne and EMC Data Domain
  • LAB10 Optimizing Backups for Oracle DBAs with EMC Data Domain and EMC Data Protection Advisor
  • LAB11 Operational and Disaster Recovery using RecoverPoint
  • LAB12 Achieving High Availability in SAP environments using VMware ESXi clusters and VPLEX
  • LAB13 VPLEX Metro with RecoverPoint: 3-site Solution for High Availability and Disaster Recovery
  • LAB14 Introduction to VMAX Cloud Edition
  • LAB15 Replication for the VMAX Family
  • LAB16 Performance Analyzer for the VMAX Family
  • LAB17 Introduction to the VMAX Family
  • LAB18 Storage Provisioning and Monitoring with EMC Storage Integrator (ESI 2.1) and Microsoft System Center Operations Manager
  • LAB19 EMC|Isilon Compliance Mode Cluster Setup, Configuration, and Management Simplicity
  • LAB20 EMC|Isilon Enterprise Ready with OneFS 7.0 Enhancements
  • LAB21 RSA Cloud Security and Compliance
  • LAB22 VMware vSphere Integration with VNX
  • LAB23 VNX Unisphere Analyzer
  • LAB24 VNX/VNXe Storage Monitoring & Analytics For Your Business Needs
  • LAB25 VNX Data Efficiency
  • LAB26 VNXe Unisphere Administration & Snapshots
  • LAB27 EMC VSPEX Virtualized Infrastructure for End User Computing
  • LAB28 Collaborative Big Data Analysis with the Greenplum Unified Analytics Platform
  • LAB29 Manage Your vCloud Suite Applications with VMware vFabric Application
  • LAB30 Discover VMware Horizon Workspace
  • LAB31 Deploy and Operate Your Cloud with the VMware vCloud Suite

The setup

The labs run on VCE Vblock-based infrastructure in EMC’s North Carolina data center. The storage serving the content is VNX and XtremIO, and to tie it all together vSphere 5.1 and vCD are also utilized.

There will be two screens at the front of the HOL where the live performance of the environment can be monitored. There will also be product specialists demoing and answering questions about the HOL cloud infrastructure.


There will be 200 seats for HOL attendees.


Location and opening hours

The labs are located on the right-hand side of the EMC Village (2nd floor of the Sands Expo Hall).


Lab opening hours:

  • Monday: 11:00 AM – 9:00 PM
  • Tuesday: 7:00 AM – 6:30 PM
  • Wednesday: 7:00 AM – 5:00 PM
  • Thursday: 7:00 AM – 2:00 PM

Doors close 30 minutes before the end of each day.


New Year, New Continent, New Role


Some of you might have noticed that lately I haven’t been as active on social media as before. There are a couple of reasons for that. One busy factor not mentioned in the title has been my involvement in a VNX implementation project that has taken a lot of my time. The goal of that project was to replace CX/MirrorView/SRM with VNX/RecoverPoint/SRM, and it didn’t go all that smoothly. The project is now finalized and everything worked out in the end. I learned a lot during the project and have some good ideas for future blog posts, e.g. RecoverPoint journal sizing.


New Continent

In February 2008 my wife and I packed everything we had, sold our condo in Finland and moved to Atlanta because of my internal transfer. We moved to the Atlanta suburbs and really didn’t know many people there. The initial plan was to stay for two years and then come back home. Well, those two years became almost five. During that time we got very close with our neighbors and got to know lots of other great people from the same neighborhood. It was our home and we felt like we belonged to the community. The two most amazing things that happened during that time were the births of our children. It was hard to be so far from “home” and family in the beginning. We saw family once a year when we visited Finland, and almost all of our closest family members visited us at least once. We enjoyed our time in the US, but then came the time to move back to Finland. Once again everything we had was packed into a container and shipped to Finland. I had mixed feelings about the move. I was excited to go back “home”, but then again I was sad to leave so many good friends behind. Driving to the Atlanta airport one last time wasn’t easy at all. All the good memories rushed through my mind. It was mid-December 2012 and we moved back to Finland, to the snow and the cold.

On my way to the office

New Role

This spring marks nine years with my current company. Right after I joined, I started my virtualization journey with GSX and then with ESX 2.0. From that point on my main focus has been on virtualization and storage. I’ve been working as an architect, been involved in getting ESX from version 2 to 5, and implemented new features as they have been released, e.g. SRM and View. I’ve also gotten my hands dirty implementing an EMC CX300, upgrading it to a CX3-40C and replacing it with a CX4-120 and a CX4-240. As you might have noticed from my VNXe posts, I’ve done some work with those too. And of course with the latest project I also had a chance to get some hands-on experience with VNX and RecoverPoint. In my new role I’ll be managing a team responsible for developing and maintaining the company’s whole infrastructure, including virtualization, networking, storage, Windows/Linux servers and so forth. This is the same team that I’ve been a part of in the past years. I’m looking forward to the new challenges the role brings to my desk, and don’t worry, I’ll still be involved with the technical stuff and will continue blogging about virtualization and storage. There might be some 2.5” form-factor VNXe and VNX/RecoverPoint posts coming out soon.

My first ESX installation media

Thank you, all my followers, for the year 2012; I hope this year will be even better. I’m happy to see that my posts over the past year have been helpful.


VNXe document updates


Along with the operating environment version 2.2 upgrade, several documents were added or updated on the EMC Support page. The documents can be found under Support by Product – VNXe Series – Documentation. Here are links to some of them:

VNXe Unisphere CLI User Guide

Using a VNXe System with VMware

Using a VNXe System with Microsoft Exchange

Using a VNXe System with Generic iSCSI Storage

Using a VNXe System with Microsoft Windows Hyper-V

Using an EMC VNXe System with CIFS Shared Folders

Using an EMC VNXe System with NFS Shared Folders

VNXe Security Configuration Guide

A couple of previously published, still useful documents:

White Paper: EMC VNXe High Availability

VNXe Service Commands

Check out the EMC Support page for other updated documents.


Changing round robin IO operation limit on ESXi 5


After I published the post VNXe 3300 performance follow up (EFDs and RR settings), I started seeing visitors landing on my blog through search engines with queries like “IO operation limit ESXi 5”. In the previous post I only described how the IO operation limit can be changed on ESX 4 using PowerCLI. The commands on ESXi 5 are a bit different, so this post describes how it can be done on ESXi 5 using ESXi Shell and PowerCLI.

Round Robin settings

The first thing to do is to change the datastore path selection policy to RR (from the vSphere Client: select the host – Configuration – Storage Adapters – iSCSI software adapter – right-click the device and select Manage Paths – set Path Selection to Round Robin (VMware) and click Change). The same change can also be made from the ESXi Shell; see the sketch below.
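
A minimal sketch of the shell alternative (the naa identifier below is just a placeholder for your own device):

# list devices and their current path selection policy
esxcli storage nmp device list

# set the path selection policy to Round Robin for one device
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR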

Changing IO operation limit using PowerCLI

1. Open PowerCLI and connect to the server

Connect-VIServer -Server [servername]

2. Retrieve esxcli instance

$esxcli = Get-EsxCli

3. Change the device IO Operation Limit to 1 and set the Limit Type to IOPS. [deviceidentifier] can be found in the vSphere Client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

$esxcli.storage.nmp.psp.roundrobin.deviceconfig.set($null,"[deviceidentifier]",1,"iops",$null)

4. Check that the changes were completed.

$esxcli.storage.nmp.psp.roundrobin.deviceconfig.get("[deviceidentifier]")

Changing IO operation limit using ESXi Shell

1. Log in to ESXi using SSH

2. Change the device IO Operation Limit to 1 and set the Limit Type to IOPS. [deviceidentifier] can be found in the vSphere Client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=[deviceidentifier]

3.   Check that the changes were completed.

esxcli storage nmp psp roundrobin deviceconfig get --device=[deviceidentifier]
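
If there are many datastores, the same change can also be applied in a loop. Here is a minimal sketch, assuming every naa.* device on the host has already been set to Round Robin:

# set the IO operation limit to 1 on every naa.* device
for dev in $(esxcli storage nmp device list | grep '^naa.'); do
  esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$dev
done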


Nested ESXi with swap to host cache on VMware Player


Just after vSphere 5 was released I wrote a post about running ESXi 5 on VMware Player 3. It was an easy way to get to know ESXi 5 and create a small home lab on your laptop. The issue with running multiple ESXi instances on my laptop is the lack of memory: I have 8GB of memory, so that sets some limitations.

After VMware Player 4 was released on January 24 I upgraded my Player and started to play around with it. I found out that it was really easy to run nested ESXis with the new Player version. This alone wouldn’t help much because I still had only 8GB of memory on my laptop, but I also had an SSD. I knew that ESXi 5 has a feature called “swap to host cache” which allows an SSD to be used as swap for the ESXi. So I started testing whether it would be possible to run ESXi on the Player, configure swap to host cache so it could use my SSD, and then run nested ESXis on that first ESXi. And yes, it is possible. Here is how to do it.

Installing the first ESXi

The ESXi installation follows the steps I described in my previous post. The only addition is that the “Virtualize Intel VT-x/EPT or AMD-V/RVI” option should be selected for the processors to be able to run nested ESXis. I also added a 25GB disk for the host cache and a 100GB disk for the nested VMs.

Configuring the swap to host cache on the first ESXi

The first step before installing any nested VMs is to configure swap to host cache on the ESXi that is running on VMware Player. Duncan Epping has a really thorough post (Swap to host cache aka swap to SSD?) that describes how the cache works and how it can be enabled. Duncan’s post links to William Lam’s post (How to Trick ESXi in seeing an SSD Datastore), which I followed to get ESXi to actually show the virtual disk as an SSD datastore; the key commands are sketched below. I then followed Duncan’s instructions to enable the cache. So I now have ESXi 5 running on VMware Player on my laptop with 23GB of SSD host cache.
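
In short, the trick is to add a SATP claim rule that tags the virtual disk as an SSD and then reclaim the device. Here is a minimal sketch from the ESXi Shell, using a hypothetical local device identifier (check esxcli storage core device list for the real one):

# tag the 25GB virtual disk as an SSD (mpx.vmhba1:C0:T1:L0 is a hypothetical identifier)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba1:C0:T1:L0 --option="enable_ssd"

# unclaim and reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim --device=mpx.vmhba1:C0:T1:L0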

Installing nested VMs

When creating a nested VM to run ESXi, the default guest operating system selection can be used.

After the VM is created, the guest operating system type needs to be changed to Other/VMware ESXi 5.x:

Host cache at work

To test it, I created three 8GB VMs for the ESXis and also deployed the vCenter appliance, which also has 8GB of memory configured. I then started installing the ESXis and could see that the host cache was being utilized.


VNXe 3100 performance


Earlier this year I installed a VNXe 3100 and have now done some testing with it. I have already covered VNXe 3300 performance in a couple of my previous posts: Hands-on with VNXe 3300 Part 6: Performance and VNXe 3300 performance follow up (EFDs and RR settings). The 3100 has fewer disks than the 3300, as well as less memory and only two I/O ports, so I wanted to see how the 3100 would perform compared to the 3300. I ran the same Iometer tests that I ran on the 3300, and in this post I compare those results to the ones I presented in the previous posts. The environment is a bit different, so I will quickly describe it before presenting the results.

Test environment

  • EMC VNXe 3100 (21 x 600GB SAS drives)
  • Dell PE 2900 server
  • HP ProCurve 2510G
  • Two 1Gb iSCSI NICs
  • ESXi 4.1U1 / ESXi 5.0
  • Virtual Win 2008 R2 (1 vCPU and 4GB memory)

Test results

I ran the tests on both ESXi 4.1 and ESXi 5.0. The iSCSI results were very similar, so I used the average of both; the NFS results had some differences, so I present those separately for 4 and 5. I also ran the tests with and without LAG and with the default RR settings changed. The VNXe was configured with one 20-disk pool and a 100GB datastore provisioned to the ESXi servers, and the tests were run on a 20GB virtual disk on that datastore.

[update] My main focus in these tests has been on iSCSI because that is what we are planning to use. I only ran quick tests with the generic NFS and not with the one that is configured under Storage – VMware. After Paul’s comment I ran a couple of tests on the “VMware NFS” and added “ESXi 4 VMware NFS” to the test results:

Conclusions

With default settings the performance of the 3300 and the 3100 is fairly similar. The 3300 gives better throughput when the IO operation limit is changed from the default 1000 to 1; the differences in the physical configurations might also have an effect on this. With a random workload the performance is similar even when the default settings are changed. Of course, the real difference would be seen when both were under heavy load; during the tests only the test server was running on the VNXes.

For NFS I didn’t have comparable results from the 3300. I ran different tests on the 3300 and those results weren’t good either. The odd thing is that ESXi 4 and ESXi 5 gave quite different results when running the tests on NFS.

Looking at these and the previous results, I would still stick with iSCSI on the VNXe. As for the performance of the 3100, it is surprisingly close to its bigger sibling, the 3300.

[update] Looking at the new test results, NFS performs as well as iSCSI. With the modified RR settings iSCSI gets better maximum throughput, but with random workloads NFS seems to perform better. So the type of NFS storage provisioned to the ESX hosts makes a difference. Now comes the question: NFS or iSCSI? Performance-wise either one is a good choice, but which one suits your environment better?

Disclaimer

These results reflect the performance of the environment the tests were run in. Results may vary depending on the hardware and how the environment is configured.


Ask The Expert wrap up


It has now been almost two weeks since the EMC Ask the Expert: VNXe front-end Networks with VMware event ended. We had a couple of meetings beforehand where we discussed and planned the event, but we really didn’t know what to expect from it. Matt and I were committed to answering questions during the two weeks, so it was a bit different from a normal community thread. Looking at the number of views the discussion got, we know it was a success: during the two weeks the event was active, the page had more than 2,300 views, and several people asked us questions and for our opinions. As a summary, Matt and I wrote a document that covers the main concerns discussed during the event. In it we look into VNXe HA configurations and link aggregation, and also give a quick overview of the ESX-side configuration:

Ask the Expert Wrap ups – for the community, by the community

I was really excited when I was asked to participate in a great event like this. Thank you Mark, Matt and Sean, it was great working with you guys!

