Here I go again. About two years ago I started writing a series of blog posts about my hands-on experience with the newly released VNXe 3300. At that time there wasn’t much documentation out there, so I did lots of testing and had to come up with my own “best practices”. That blog series is one of the reasons why I now have a chance to test and write about the VNXe 3200.
On June 10 Chad published the blog post “Summer Gift Part 2 – 10 VNXe arrays free to play with for volunteers!”. I was surprised to find myself mentioned in the post and even more surprised to find out that one of the devices was reserved for me. So while I was on vacation last week the test unit arrived:
- VNXe 3200 – 2U Form Factor/12 Drive DPE
- 3.5” Drives
- 6 x 600GB 15K Pack
- 3 x 100GB eMLC Flash Drives (for FAST auto-tiering)
- 2 x 100GB eMLC Flash Drives (for FAST Cache)
- 9TB Raw Capacity
The VNXe 3200 is basically a combination of the new VNX MCx multicore technology and the old VNXe OE/Unisphere. I won’t go through all the new features, but a few are worth mentioning:
- Multi-core RAID (MCR), Multi-core Cache (MCC), Multi-core Flash (MCF)
- “Active/Active” file
- Single container for block and file
- Linux-based platform
More details about the new features can be found on the EMC VNXe Series website.
Well, not much to write about this: install the rack rails, lay the DPE on them, connect the cables, and the VNXe was ready for configuration. However, this was something I hadn’t seen before with the previous VNXe:
A quick look at the installation documentation revealed it to be a power adapter for the front LED lights:
There have been several new versions of the VNXe OE since my first VNXe blog post, but the “Unisphere Configuration Wizard” still looks similar. Going through the wizard takes about 10 minutes, but I skipped most of the configuration as usual. I prefer to upgrade the VNXe to the latest software version before doing any configuration on a new device. After the configuration wizard is completed you get a popup directing you to the EMC support website, where the latest version can be downloaded.
Even though there is totally new hardware running under the hood, Unisphere still looks and feels the same as the latest software version on the VNXe 3100/3150/3300. I still agree that the VNXe is simple to install and configure. Of course, I haven’t configured any storage pools or iSCSI servers yet; I’ll cover those in the next posts. Performance and some of the new features will also be reviewed later.
Software version 2.0.3:
Software version 2.4.2:
Software version 3.0.1:
Once again I had a chance to play around with some shiny new hardware. And once again the hardware was a VNXe 3300, but this time with something I hadn’t seen before: the 2.5” form factor with 46 600GB 10k disks. If you have read about the new RAID configurations in OE 2.4.0 you might guess what kind of configuration I have in mind with this hardware.
In this post I will go through some of the new features introduced in VNXe OE 2.4.0, do some configuration comparisons between the 3.5” and 2.5” form factors and also between VNXe and VNX. Of course I had to do some performance testing with the new RAID configurations as well, so I will present the results later in this post.
VNXe OE 2.4.0 release notes
Along with the new OE came the ability to customize the UI dashboard. The look of the Unisphere UI on a new or upgraded VNXe is now similar to Unisphere Remote. You can customize the dashboard, create new tabs, and add the desired view blocks to the tabs.
Some operations are now run as background jobs, so you don’t have to wait for the operation to finish. The steps of each operation are also shown in more detail on the jobs page. The number of active jobs is shown next to the alerts on the status bar, depending on which page you are on.
New RAID configurations
Now this is one of the enhancements that I’ve been waiting for, because the VNXe can only utilize four RAID groups in a pool. With the previous OE this meant that a datastore in a 6+1 RAID 5 pool could only utilize 28 disks. Now, with the 10+1 RAID 5 pool structure, datastores can utilize as many as 44 disks. This also means increased max iops per datastore: with 3.5” 15k disks the RAID 5 pool max iops increases from ~4900 to ~7700, and with 2.5” 10k disks from ~3500 to ~5500. Iops is not the only thing to look at, though. The size of the pool matters too, not to forget the rack space that the VNXe will use. While I was sizing the last VNXe that we ordered I made this comparison chart to compare pool size, iops and rack space with different disk form factors in VNX and VNXe.
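The max iops figures above can be reproduced with a quick back-of-the-envelope calculation. This is a sketch, assuming rule-of-thumb per-disk values of ~175 IOPS for a 15k drive and ~125 IOPS for a 10k drive (my assumed figures, not from the post) and counting every disk in each RAID group, parity included:

```python
# A VNXe pool can use at most four RAID groups.
MAX_RAID_GROUPS = 4

# Assumed rule-of-thumb per-disk IOPS (not stated in the post).
DISK_IOPS = {"15k": 175, "10k": 125}

def pool_max_iops(raid_group_width, disk_type):
    """Rough max iops for a pool of 4 RAID groups of the given width."""
    disks = MAX_RAID_GROUPS * raid_group_width
    return disks * DISK_IOPS[disk_type]

# Old 6+1 RAID 5 (width 7) vs new 10+1 RAID 5 (width 11):
print(pool_max_iops(7, "15k"))   # 28 disks -> 4900
print(pool_max_iops(11, "15k"))  # 44 disks -> 7700
print(pool_max_iops(7, "10k"))   # 28 disks -> 3500
print(pool_max_iops(11, "10k"))  # 44 disks -> 5500
```

With these assumed per-disk values the results land exactly on the ~4900/~7700 and ~3500/~5500 figures above, which is why the 10+1 layout is such a welcome change.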
An interesting setup with the VNXe 3150 and 2.5” form factor disks is the 21TB and 5500 iops packed into 4U of rack space. A VNXe 3300 with the same specs would take 5U and a VNX5300 would take 6U. Of course the SP performance differs a bit between these arrays, but so does the price.
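That 21TB figure is roughly what four 10+1 RAID 5 groups of 600GB drives yield. A quick sketch, assuming a usable formatted capacity of about 536.8GB per 600GB drive (a typical value on EMC arrays, not stated in the post):

```python
# Usable capacity of a pool built from four 10+1 RAID 5 groups of 600GB drives.
RAID_GROUPS = 4
DATA_DISKS_PER_GROUP = 10      # 10+1 RAID 5: one parity disk per group
USABLE_GB_PER_DRIVE = 536.8    # assumed formatted capacity of a 600GB drive

data_disks = RAID_GROUPS * DATA_DISKS_PER_GROUP       # 40 data disks
usable_tb = data_disks * USABLE_GB_PER_DRIVE / 1000
print(f"{usable_tb:.1f} TB")                          # ~21.5 TB
```

So 44 disks in the pool leave 40 disks’ worth of data capacity, which lines up with the ~21TB packed into those 4U.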
I’ve already posted some performance test results from the VNXe 3100 and 3300, so I added those results to the charts for comparison. I’ve also run some tests on the VNX5300 that I haven’t posted yet and added those results to the charts as well.
There is a significant difference in max throughput between the 1G and 10G modules on the VNXe. Then again, the real-life test results are quite similar.
These results reflect the performance of the environment that the tests were run in. Results may vary depending on the hardware and how the environment is configured.
Some of you might have noticed that lately I haven’t been as active on social media as before. There are a couple of reasons for that. One busy factor not mentioned in the title has been my involvement in a VNX implementation project that has taken a lot of my time. The goal of that project was to replace CX/MirrorView/SRM with VNX/RecoverPoint/SRM, and it didn’t go that smoothly. The project is now finalized and everything worked out in the end. I learned a lot during the project and have some good ideas for future blog posts, e.g. RecoverPoint journal sizing.
In February 2008 my wife and I packed everything we had, sold our condo in Finland and moved to Atlanta because of my internal transfer. We moved to the Atlanta suburbs and really didn’t know many people around there. The initial plan was to stay for two years and then come back home. Well, those two years became almost five. During that time we got very close with our neighbors and got to know lots of other great people from the same neighborhood. It was our home and we felt like we belonged to the community. The two most amazing things that happened during that time were the births of our children. It was hard to be so far from “home” and family in the beginning. We saw family once a year when we visited Finland, and almost all of our closest family members visited us at least once. We enjoyed our time in the US, but then came the time to move back to Finland. Once again everything we had was packed into a container and shipped to Finland. I had mixed feelings about the move. I was excited to go back “home”, but then again I was sad to leave so many good friends behind. Driving to the Atlanta airport one last time wasn’t easy at all. All the good memories rushed through my mind. It was mid December 2012 and we moved back to Finland, to the snow and the cold.
On my way to the office
This spring marks 9 years with my current company. Soon after I joined, I started my virtualization journey with GSX and then with ESX 2.0. From that point on my main focus has been on virtualization and storage. I’ve been working as an architect, been involved in getting ESX from version 2 to 5, and implemented new features as they have been announced, e.g. SRM and View. I’ve also gotten my hands dirty implementing an EMC CX300, upgrading it to a CX3-40C and replacing it with a CX4-120 and CX4-240. As you might have noticed from my VNXe posts, I’ve done some work with those too. And of course, with the latest project I also had a chance to get some hands-on experience with VNX and RecoverPoint. In my new role I’ll be managing a team responsible for developing and maintaining the company’s whole infrastructure, including virtualization, networking, storage, Windows/Linux servers and so forth. This is the same team that I’ve been a part of in the past years. I’m looking forward to the new challenges that the new role brings to my desk, and don’t worry, I’ll still be involved with the technical stuff and will continue blogging about virtualization and storage. There might be some 2.5” form-factor VNXe and VNX/RecoverPoint posts coming out soon.
My first ESX installation media
Thank you, all my followers, for the year 2012, and I hope this year will be even better. I’m happy to see that my posts over the past year have been helpful.