
Nested ESXi with swap to host cache on VMware Player


Just after vSphere 5 was released I wrote a post about running ESXi 5 on VMware Player 3. It was an easy way to get to know ESXi 5 and create a small home lab on your laptop. The issue with running multiple ESXi instances on my laptop is the lack of memory: I have 8GB of RAM, and that sets some limitations.

After VMware Player 4 was released on January 24 I upgraded my Player and started to play around with it. I found out that it was really easy to run nested ESXi hosts with the new Player version. On its own this wouldn’t help much, because I still had only 8GB of memory on my laptop. But I also had an SSD in the laptop, and I knew that ESXi 5 has a feature called “swap to host cache” which allows an SSD to be used as swap space for the ESXi host. So I started testing whether it would be possible to run ESXi on the Player, configure swap to host cache to make use of my SSD, and then run nested ESXi hosts on that first ESXi. And yes, it is possible. Here is how to do it.

Installing the first ESXi

ESXi installation follows the steps that I described in my previous post. The only addition to those steps is that the “Virtualize Intel VT-x/EPT or AMD-V/RVI” option should be selected for the processor so that nested ESXi hosts can be run. I also added a 25GB disk for the host cache and a 100GB disk for the nested VMs.
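For reference, that checkbox corresponds to a setting in the VM’s .vmx file, so it can also be added by hand while the VM is powered off. A minimal sketch of the relevant lines, assuming Player 4 uses the vhv.enable flag for this option and vmkernel5 as the guest OS type for ESXi 5 (both values are assumptions here, so check what your own Player writes):

vhv.enable = "TRUE"
guestOS = "vmkernel5"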

Configuring the swap to host cache on the first ESXi

The first step before installing any nested VMs is to configure the swap to host cache on the ESXi that is running on VMware Player. Duncan Epping has a really thorough post (Swap to host cache aka swap to SSD?) that describes how the cache works and how it can be enabled. Duncan’s post has a link to William Lam’s post (How to Trick ESXi in seeing an SSD Datastore) that I followed to get the ESXi to actually show the virtual disk as an SSD datastore. I then followed Duncan’s instructions to enable the cache. So I now have ESXi 5 running on VMware Player on my laptop with 23GB of SSD host cache.
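For reference, the SSD trick in William Lam’s post comes down to adding a SATP claim rule that tags the virtual disk with the enable_ssd option and then reclaiming the device. A rough sketch from the ESXi shell, using a hypothetical device identifier (replace it with your own, as listed by esxcli storage core device list):

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"
esxcli storage core claiming reclaim --device mpx.vmhba1:C0:T1:L0
esxcli storage core device list --device mpx.vmhba1:C0:T1:L0

After the reclaim the device list output should report the disk as SSD, and the datastore created on it can then be selected under the host’s Host Cache Configuration settings.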

Installing nested VMs

When creating a nested VM to run ESXi, the default guest operating system selection can be used.

After the VM is created, the guest operating system type needs to be changed to Other/VMware ESXi 5.x.

Host cache at work

To test it out I created three 8GB VMs for the nested ESXi hosts, and I also deployed the vCenter appliance, which has 8GB of memory configured as well. I then started installing the ESXi hosts and could see that the host cache was being utilized.
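If you want to see the cache usage in numbers, the host cache counters can also be pulled with PowerCLI. A minimal sketch, assuming a hypothetical host name and that the mem.llSwap* performance counters introduced in vSphere 5 are available in your environment (esxtop’s memory view shows the same activity):

# Read the host cache (low-latency swap) counters from the outer ESXi host
Connect-VIServer -Server esxi01.lab.local
Get-Stat -Entity (Get-VMHost esxi01.lab.local) -Realtime -MaxSamples 10 `
    -Stat mem.llSwapUsed.average,mem.llSwapInRate.average,mem.llSwapOutRate.average |
    Select-Object Timestamp,MetricId,Value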


VNXe 3300 performance follow up (EFDs and RR settings)


In my previous post about VNXe 3300 performance I presented the results from the performance tests I had done with the VNXe 3300. I will use those results as a comparison for the new tests that I ran recently. In this post I will compare the performance with different Round Robin policy settings. I also had a chance to test the performance of EFD disks on the VNXe.

Round Robin settings

In the previous post all the tests were run with the default RR settings, which means that ESX sends 1000 commands through one path before changing paths. I observed that with the default RR settings I was only getting the bandwidth of one link on the four-port LACP trunk. I got feedback from Ken advising me to change the default RR IO operation limit from 1000 to 1 to get two links’ worth of bandwidth out of the VNXe. So I wanted to test what kind of an effect this change would have on performance.

Arnim van Lieshout has a really good post about configuring RR using PowerCLI, and I used his examples to change the IO operation limit from 1000 to 1. If you are not confident running the full PowerCLI scripts Arnim introduced in his post, here is how the RR settings for an individual device can be changed using the GUI and a couple of simple PowerCLI commands:

1. Change the datastore path selection policy to RR (from the vSphere Client: select the host – Configuration – Storage Adapters – iSCSI Software Adapter – right-click the device and select Manage Paths – for Path Selection choose Round Robin (VMware) and click Change).

2. Open PowerCLI and connect to the server

Connect-VIServer -Server [servername]

3. Retrieve esxcli instance

$esxcli = Get-EsxCli

4. Change the device IO operation limit to 1 and set the limit type to iops. The [deviceidentifier] can be found in the vSphere Client’s iSCSI software adapter view and is in the format naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

$esxcli.nmp.roundrobin.setconfig($null,"[deviceidentifier]",1,"iops",$null)

5. Check that the changes were completed.

$esxcli.nmp.roundrobin.getconfig("[deviceidentifier]")
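If the same change needs to be made on many devices, the per-device calls can also be replaced with the native PowerCLI multipathing cmdlets. A minimal sketch, assuming a hypothetical host name and that the VNXe devices report EMC as the vendor (Set-ScsiLun’s -CommandsToSwitchPath parameter corresponds to the IO operation limit):

# Set Round Robin with an IO operation limit of 1 on all EMC-presented disks of one host
Get-VMHost esx01.lab.local | Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq "EMC" } |
    Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1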

Results 1

For these tests I used the same environment and Iometer settings that I described in my Hands-on with VNXe 3300 Part 6: Performance post.

Results 2

For these tests I used the same environment, except that instead of a virtual Win 2003 I used a virtual Win 2008 (1 vCPU and 4GB memory), and the following Iometer settings (I picked these settings up from the VMware Community post Open unofficial storage performance thread):

Max Throughput-100%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 100% Read distribution
  • 5 minute run time

Max Throughput-50%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 32KB transfer request size
  • 100% sequential distribution
  • 50% read/write distribution
  • 5 minute run time

RealLife-60%Rand-65%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 40% sequential / 60% random distribution
  • 65% read / 35% write distribution
  • 5 minute run time

Random-8k-70%Read

  • 1 Worker
  • 8000000 sectors max disk size
  • 64 outstanding I/Os per target
  • 500 transactions per connection
  • 8KB transfer request size
  • 100% random distribution
  • 70% read / 30% write distribution
  • 5 minute run time

[Updated 11/29/11] After I had published this post, Andy Banta gave me a hint on Twitter:

“You might squeeze more by using a number like 5 to 10, skipping some of the path change cost.”

So I ran a couple more tests, changing the IO operation limit between 5 and 10. With the 28-disk pool there was no big difference between a value of 1 and values of 5-10. With the EFDs the magic number seemed to be 6, and with that I managed to get 16 MBps and 1100 IOps more out of the disks with specific workloads. I added the new EFD results to the graphs.
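Setting the limit to 6, for example, uses the same call as in step 4, just with a different value (the device identifier is again a placeholder):

$esxcli.nmp.roundrobin.setconfig($null,"[deviceidentifier]",6,"iops",$null)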


Conclusions

Changing the RR IO operation limit from the default 1000 IOs to 1 IO really makes a difference on the VNXe. With random workloads there is not much difference between the two settings, but with sequential workloads the difference is significant: sequential write IOps and throughput more than double with certain block sizes when the 1 IO setting is used. If you have ESX hosts connected to a VNXe with an LACP trunk, I would recommend changing the RR IO operation limit from 1000 to a low value (1, or 5-10). Like I already mentioned, Arnim has a really good post about configuring RR settings using PowerCLI. Another good post about multipathing is A “Multivendor Post” on using iSCSI with VMware vSphere by Chad Sakac.

Looking at the results, it is obvious that EFD disks perform much better than SAS disks. With a sequential workload the 28-disk SAS pool’s performance is about the same as the 5-disk EFD RG’s, but with a random workload the EFDs’ performance is about two times better than the SAS pool’s. There was no other load on the disks while these tests were run, so under additional load I would expect the EFDs to perform much better on sequential workloads as well. Better performance doesn’t come without a bigger price tag: EFD disks are still over 20 times more expensive per TB than SAS disks, but then again SAS disks are about 3 times more expensive per IO than EFD disks.

Now if only EFDs could be used as cache on VNXe.

Disclaimer

These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.

