Nutanix .NEXT first day

When I arrived at the keynote this morning, word was out that some pretty cool announcements would be made. And that turned out to be true.

One of the biggest announcements is Acropolis.
This is Nutanix's own hypervisor platform, with their management interface Prism. The hypervisor is based on KVM but is modified to run optimally with the Nutanix filesystem. Acropolis is not the name of the hypervisor but of the solution, and it can manage not only KVM but also VMware and Hyper-V. During the keynote we saw a demo in which an ESXi host was reinstalled with KVM, managed from the Prism interface.
Nutanix's vision is a multi-hypervisor world where you put each workload on the hypervisor you think fits it best, all managed from one interface. Even the creation of one or more VMs is done from Prism.

I must say that I’m impressed with the capabilities of Acropolis. One single interface for multiple hypervisors. Nice!

I don't think that a company wants to run multiple hypervisors in its environment, even if they're managed from one interface. Every hypervisor must be managed and configured for performance, security, and patches; the same goes for upgrades. Most companies will choose one hypervisor as their standard.

On the other hand, if you choose a web-scale solution like Nutanix, you have the option to change your hypervisor without losing your management interface, reducing the learning curve for your IT department. And you have the option to use a different (cheaper) hypervisor for your R&D or lab environment.

Some other features of Acropolis are:

  • IP address management for virtual machines.
  • Collection of statistics from different hypervisors.
  • HA scheduler.
  • VM console (via VNC).
  • Configuring network settings for VMs.

Being an old Novell consultant/trainer, I'm pleased that an open source project like KVM gets a boost in the enterprise from a company like Nutanix. Although not everything is possible at the moment (VDI, for example), I'm convinced this is only a matter of time.

I'm impressed with the work that Nutanix has done in the last 5 years, and I'm curious what's coming .NEXT.

Hyper-converged vs. traditional SAN/NAS?

The hyper-converged market is booming. Vendors like Nutanix, SimpliVity, or VMware with their VSAN are at almost every event spreading the word. But is hyper-converged the solution to everything? As always, the answer to this question is: it depends.

Let me start by explaining the basics of a hyper-converged system.

Hyper-converged is a method of combining the local storage of multiple servers into one logical shared storage pool. So when a hypervisor does a read/write, it's talking to local storage.
Most of these solutions use auto-tiering with SSD or RAM in every host, so when the hypervisor does a read/write, it's talking to a local SSD.
Hyper-converged solutions make sure that when a VM runs on host A, the data of this VM is also on the local storage of host A. When the VM gets moved to another host, its data is moved along with it.
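The data-locality behavior described above can be sketched in a few lines of Python. This is purely illustrative (not Nutanix code, and the class and method names are my own invention): each host keeps a local copy of the data for the VMs it runs, and a VM move triggers a data move.

```python
# Illustrative sketch of hyperconverged data locality (hypothetical
# model, not any vendor's implementation).

class Host:
    def __init__(self, name):
        self.name = name
        self.local_data = set()   # VMs whose data is stored on this host

class Cluster:
    def __init__(self, hosts):
        self.hosts = {h.name: h for h in hosts}
        self.placement = {}       # vm -> name of the host it runs on

    def place_vm(self, vm, host_name):
        """Run a VM on a host and localize its data there."""
        self.placement[vm] = host_name
        self.hosts[host_name].local_data.add(vm)

    def move_vm(self, vm, dest_name):
        """Move a VM; the platform migrates its data along with it."""
        src = self.hosts[self.placement[vm]]
        src.local_data.discard(vm)
        self.place_vm(vm, dest_name)

    def read_is_local(self, vm):
        """A read is local when the VM's data sits on its current host."""
        return vm in self.hosts[self.placement[vm]].local_data

cluster = Cluster([Host("A"), Host("B")])
cluster.place_vm("vm1", "A")
print(cluster.read_is_local("vm1"))   # True: data lives on host A
cluster.move_vm("vm1", "B")
print(cluster.read_is_local("vm1"))   # True again: data followed the VM
```

The point of the sketch is the invariant: wherever `placement` says the VM runs, `local_data` on that host contains its data, so reads never cross the network.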

Talking to local SSD, flash, or RAM has one big advantage over talking to SSD, flash, or RAM in a SAN/NAS: latency. When talking to a SAN/NAS you always have to go through a storage network, and that storage network adds latency that local SSD, flash, or RAM doesn't have. It doesn't matter whether this storage network is Ethernet or Fibre Channel, although Fibre Channel has lower latency than Ethernet.
OK, when a host writes a block of data, that block has to be replicated to another host, so your storage network can become a bottleneck again. But for reads you cannot beat local storage in terms of IOPS and latency, even with an all-flash storage array.
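A quick back-of-the-envelope calculation shows why the extra hop matters for reads. The latency numbers below are purely hypothetical placeholders, not measurements of any product; the point is only that a SAN read pays for the network traversal twice (request and response) on top of the media latency.

```python
# Hypothetical latencies in microseconds -- illustrative only.
local_flash_us = 100   # assumed local SSD read latency
network_hop_us = 50    # assumed one-way storage-network traversal

local_read = local_flash_us
# A SAN read crosses the storage network to the array and back again.
san_read = network_hop_us + local_flash_us + network_hop_us

print(f"local read: {local_read} us")
print(f"SAN read:   {san_read} us ({san_read / local_read:.1f}x slower)")
```

With these made-up numbers the SAN read is twice as slow even though the media is identical; only the network hop differs.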

You can ask yourself: Does the extra latency that a storage network adds matter? And of course the answer again is: It depends.

When you have, for example, a desktop workload (VDI, RDS), a database workload, or a web server, the low latency and high IOPS can be a great advantage. When you just have a file or print server, you can accept higher latency and fewer IOPS.

So what is the disadvantage of a hyper-converged solution?
In my opinion, the amount of TB you can have in a hyper-converged solution and the costs that come with it.
In a SAN/NAS you can easily have 1 PB of storage. Yes, this isn't storage with 100K IOPS and a latency of less than 1 ms, but who cares? We all know that in most cases 80% of the storage in our environment is data that hasn't been touched in the last 2 years.

There is also a third option, such as PernixData FVP. In this case you add flash or RAM to your hosts and use it as an intelligent caching layer in front of your SAN/NAS. This brings the same advantages as a hyper-converged solution.
A solution like PernixData has 2 disadvantages:

  1. When you read a block for the first time, you have to fetch it from your SAN/NAS.
  2. When you write data, it lands on local SSD, flash, or memory, but after a while it has to be written to your SAN/NAS. This can only be as fast as your SAN/NAS is.
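Both disadvantages above can be illustrated with a toy write-back cache model. This is my own hypothetical sketch, not PernixData's implementation: cold reads miss and must go to the array, and writes are acknowledged locally but stay "dirty" until they are destaged at whatever pace the SAN/NAS can sustain.

```python
# Hypothetical host-side cache in front of a SAN/NAS -- illustrative only.

class CachedArray:
    def __init__(self):
        self.cache = {}      # fast local tier (SSD/flash/RAM)
        self.array = {}      # backing SAN/NAS
        self.dirty = set()   # blocks written locally, not yet on the array

    def read(self, block):
        if block in self.cache:           # warm read: local latency
            return self.cache[block], "cache"
        value = self.array[block]         # cold read: pay the array's latency
        self.cache[block] = value
        return value, "array"

    def write(self, block, value):
        self.cache[block] = value         # acknowledged at local speed...
        self.dirty.add(block)             # ...but still owed to the array

    def destage(self):
        """Flush dirty blocks; runs only as fast as the SAN/NAS allows."""
        for block in list(self.dirty):
            self.array[block] = self.cache[block]
            self.dirty.discard(block)

store = CachedArray()
store.array["b1"] = "old"
print(store.read("b1"))    # ('old', 'array') -- first read hits the SAN
print(store.read("b1"))    # ('old', 'cache') -- repeat read is local
store.write("b1", "new")   # fast ack, but "b1" is now dirty
store.destage()            # throughput here is bounded by the array
```

Disadvantage 1 is the `"array"` result on the first read; disadvantage 2 is the `destage()` step, which no amount of host-side flash can speed up.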

So, to come to a conclusion: I think that when you have a workload that demands low latency and high IOPS, you cannot beat hyper-converged. If you want a lot of TB, then you want to go for a traditional SAN/NAS, with or without a solution like PernixData.

Do you have to make a choice? Of course not; why not use them both? Use the type of storage you think is best for each type of workload.
Use the best of both worlds in your world!