Jul
7
2016

Install VMware PSC fails: vdcsetupldu failed. Error [9234] – User invalid credentials

I was setting up a redundant VMware PSC deployment stretched across 2 datacenters. Each datacenter has 2 PSCs and a load balancer.

Eventually, the virtual machines that run the PSC services will run in a management cluster consisting of 3 nodes. These nodes use Virtual SAN (VSAN) for storage.

I first installed 1 node with VMware vSphere and created a datastore on one of the SAS disks. Later on, the VMs will be moved to a VSAN datastore.

The first datacenter went as expected. No problems. But in the second datacenter, the installation of the PSC software failed with an error: Encountered an internal error.


Looking in the logfile vmafd-firstboot-py-xxxx_stderr.log, vdcsetupldu failed with Error [9234] – User invalid credentials.


I was sure that the password provided was OK. Diving deeper into the log files, I found that after installation the VMware Identity services start and the installer tries to make an LDAP connection on TCP 389, which would fail. I created a PowerShell script that checks TCP 389 every 5 seconds. It turned out that eventually I could make a connection on TCP 389, but the installer had already given up.
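
A minimal sketch of such a check looks like this (the PSC hostname is of course a placeholder for your own environment):

    # Poll TCP 389 on the PSC every 5 seconds and report when the LDAP port answers.
    # 'psc01.lab.local' is a placeholder for your own PSC FQDN.
    $psc = 'psc01.lab.local'
    while ($true) {
        $result = Test-NetConnection -ComputerName $psc -Port 389 -WarningAction SilentlyContinue
        if ($result.TcpTestSucceeded) {
            Write-Host "$(Get-Date -Format 'HH:mm:ss') - TCP 389 is open"
        } else {
            Write-Host "$(Get-Date -Format 'HH:mm:ss') - TCP 389 not reachable yet"
        }
        Start-Sleep -Seconds 5
    }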

OK, so the services do start, but too late.

Looking at ESXTOP (the best troubleshooting tool for ESXi), I saw that when the VMware Identity services start, the disk latency went up to 100 ms. Could it be that the disk is slowing down the virtual machine so much that the installation fails? I moved the virtual machine to an SSD disk, restarted the installation, and guess what: the install was successful.

So the PSC installation program probably does not check if the service is available before trying to log in. It will fail, saying that the credentials are not valid, rather than saying that it cannot make a connection.

Jun
22
2016

Nutanix .NEXT 2016 presentation: VCDX design for a 4000 seat Horizon View Deployment

Today I had the pleasure of presenting a session at Nutanix .NEXT 2016 in Las Vegas. My session “VCDX design for a 4000 seat Horizon View Deployment” was about my VCDX journey, the choices that I had to make, and the problems I ran into.


Although this isn’t the first time I have presented a session, it was the first time in English, and for a room where 90% of the crowd were native speakers. I’m very pleased with how it went. The room had a good vibe and I got good feedback.


I’ve always liked presenting: sharing your knowledge and opinion, the feedback from the crowd. It always gives me positive energy. This is definitely something I will be doing more in the future.

Although my presentation can probably be found on the .NEXT website, I promised to share it here as well. If you want a copy, please contact me through Twitter: @WilmsenIT.

Jun
17
2016

Do IOPS matter?

I’ve been designing VMware vSphere clusters for the last 10 years now. And with every design, the storage part is one of the most challenging. An improper storage design results in poor virtual machine performance.

Over the years, storage vendors have added all kinds of optimizations to their solutions, in the form of cache. Almost every vendor has added flash as a caching tier. Some only cache reads, but most of them cache reads and writes.

With this cache, most vendors claim that their solution can handle 100,000 IOPS or more. But we all know that adding a flash drive that can handle 100,000 IOPS won’t give you 100,000 IOPS in your vSphere environment.

We also have to deal with different block sizes, read/write ratios and write penalties.
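
As a rough back-of-the-envelope illustration (example numbers only, ignoring cache and block size effects), you can estimate the usable front-end IOPS by weighting the writes with the RAID write penalty:

    # Rough front-end IOPS estimate from a raw back-end IOPS figure.
    # Example numbers only: 100,000 raw IOPS, 60% read / 40% write, RAID6 write penalty of 6.
    $rawIops      = 100000
    $readRatio    = 0.6
    $writeRatio   = 0.4
    $writePenalty = 6
    $frontEndIops = $rawIops / ($readRatio + ($writeRatio * $writePenalty))
    "{0:N0} usable front-end IOPS" -f $frontEndIops    # roughly 33,333

That is a third of the number on the spec sheet, and we haven’t even looked at block sizes yet.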

What I see is that, most of the time, the storage processor or the storage area network is the bottleneck.

 

In the near future, NVMe will be common in our datacenters, and its successor 3D XPoint will follow shortly. While NVMe can deliver 1,800 MB/s sequential read/write speeds and 150K/80K random IOPS, 3D XPoint is claimed to be up to 1000x faster than NAND flash. Both solutions have an average latency of less than 1 ms.

This is going to change the way we design our storage solution!

@vcdxnz001 wrote a great article about the storage area network and the speeds that are involved with the upcoming flash devices.

 

So, to get back to the question: Do IOPS matter?

If we have an all-flash array, consisting of SSD, NVMe or 3D XPoint, IOPS are no longer the problem. All types of flash can deliver plenty of IOPS.

What matters is latency. OK, all flash devices are low latency (<1 ms). But all IOPS have to be processed by the storage processor (in your array, or, if you’re running a hyper-converged solution, by your CPU) and by the storage area network. So these 2 components will determine your latency, and thus the performance of your virtual environment.

Therefore IOPS are no longer a concern when you’re designing your storage.

When you design your storage solution, determine the highest latency you are willing to accept and monitor this carefully. Design your whole stack, from HBA to disk, for low latency. If your latency is low, IOPS won’t be a problem.

Feb
16
2016

Determine your vSphere storage needs – Part 3: Availability, Security and Connectivity

 

This is the last part of the mini series: Determine your vSphere storage needs.

In this part, we’re going to cover 3 subjects:

  • Availability
  • Security
  • Connectivity

Although important, these aren’t the parts where you have many options.

Availability

When we talk about availability in a storage solution, we’re actually talking about high availability. Most (enterprise) storage solutions are highly available by design. Almost all components are redundant, meaning that in case of a component failure, a second component takes over instantly. So this type of availability we don’t have to design. What we may have to determine is whether your storage solution is part of a Disaster Recovery (DR) solution. In that case we first have to determine the RPO and RTO for our environment. This is something you have to ask your business: how long may your storage (including all data) be unavailable (RTO), and when it’s unavailable, how much data may be lost in terms of seconds, minutes, hours or days (RPO)?

Let’s assume that our storage is located in 2 datacenters.

When you have an RPO of 0, you know you must have a storage solution that supports synchronous replication. Meaning that if you write in one datacenter, this write is instantly done in the second datacenter as well. The writer (VMware vSphere) gets an acknowledgement back when both storage solutions can confirm that the write is finished.

This is the most expensive option. The connection between the 2 datacenters has to be low latency and high bandwidth (depending on your change rate).

In most cases synchronous replication also provides a low RTO. If and how the data becomes available in case of a DR depends on your storage solution. With active-active you probably won’t have to do much: in case of a DR, data is instantly available. In case of active-passive, you have to pull the trigger to make the data available on the passive side. This can be done manually (through a management interface), or automatically by a script or VMware SRM.

When you have an RPO greater than 0, you have the option to go for asynchronous replication. In this case the write on the second datacenter can be acknowledged later than the first one. You can also replicate data once an hour, day, week, etc. The choice is yours.

If and how the data becomes available in case of a DR is the same as for the active-passive option in the previous section (RPO=0).

Security

Most of the time, securing your storage solution comes down to determining how the storage solution’s management interface can be accessed and which servers can access your data. Usually, the storage network is a dedicated, non-routed network which cannot be accessed externally.

Normally, I advise a dedicated storage management server where the management software runs. If you make sure that this server is only accessible from your management workstations, your storage management is secure enough for most implementations.

How you restrict which servers can access your storage depends on the protocol you’re using. To sum it up:

  • FC -> Based on WWN
  • iSCSI -> Based on IQN
  • NFS -> Based on IP address.

The choice of protocol thus also determines the way you can secure your storage. Talk to your storage vendor about best practices for securing your storage network.
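
If you want to quickly check which identifiers your ESXi hosts present to the storage network, PowerCLI can list them. A sketch, assuming an existing Connect-VIServer session (property names can differ slightly between PowerCLI versions):

    # List the FC WWNs and iSCSI IQNs of all ESXi hosts in the connected vCenter.
    Get-VMHost | ForEach-Object {
        $esx = $_.Name
        Get-VMHostHba -VMHost $_ -Type FibreChannel |
            Select-Object @{N='Host';E={$esx}}, Device, @{N='WWPN';E={'{0:x}' -f $_.PortWorldWideName}}
        Get-VMHostHba -VMHost $_ -Type iScsi |
            Select-Object @{N='Host';E={$esx}}, Device, IScsiName
    }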

Connectivity

And that brings us to the last part, connectivity.

As noted in the security part, with VMware vSphere we have 3 connectivity options:

  • Fibre Channel (FC)
  • iSCSI
  • NFS

So, what’s the best protocol? As always in IT, the answer to this question is: It depends.

It depends on your storage solution. Every storage solution is created around certain design principles, and this makes each storage solution unique. These principles determine the best storage protocol for that solution. Of course, almost every storage solution supports 2 or more protocols, but only one performs best. You probably know that FC is, in theory, the fastest protocol. But what if your storage solution implements NFS most efficiently? Then you’re probably going to choose NFS.

So ask your vendor. Especially if you made them responsible for the performance part, as discussed in part 1 of this series.

This ends the series on determining your storage needs. Although there is a lot more you can design and determine, this series will give you a head start.

 

 

Jan
20
2016

Determine your vSphere storage needs – Part 2: Performance

Besides capacity, performance is probably one of the most important design factors. When we talk about storage performance, we are mainly talking about two things:

  • IOPS
  • Latency

The first step is to determine your performance needs in IOPS. This is probably the hardest part. In order to determine your IOPS needs, you have to know what types of virtual machines or applications, and their corresponding I/O characteristics, you’re going to run on your storage solution.

You can take 2 approaches to determine this:

  • Theoretical
  • Practical

In the theoretical approach you categorize the types of virtual machines you’re going to host on your storage solution. For example:

  • Database
  • Email
  • Windows
  • Linux
  • Etc

Then you determine the corresponding block size, read/write ratio, and whether the data is sequential or non-sequential per type.

In the practical, or real-life, method, we’re going to use a tool to measure the type of IO issued by your current virtual machines. There are a couple of tools available for this. One of them is PernixData Architect. This tool gives in-depth information about your storage IO workload, including read/write ratio and which block sizes are used.

Of course you can use vCenter to see your current read/write ratio, but not the block size that is used. You can even use ESXTOP in combination with perfmon to determine your IO workloads. The last option isn’t the easiest way, but it’s an option if you don’t have vCenter or budget for Architect.

You can make a table like this.

Type    | #VMs | Blocksize (KB) | Avg. #IOPS | %READ | %WRITE | Sequential | Note
Windows | 2000 | 4              | 50         | 60    | 40     | No         | Basic application servers
Linux   | 1000 | 4              | 50         | 60    | 40     | No         | Basic application servers
MS SQL  | 4    | 64             | 1000       | 65    | 35     | No         | 2 for production, 2 for DevOps

Note: values are only an example. Your environment can (and probably will) be different.

So, why would you want to determine the read/write ratio and block size?

Every storage solution has its own way of writing data to the disks. The performance depends on the block size, read/write ratio and RAID level (if any) used. A Windows virtual machine can use a block size of 64 KB. When a storage solution writes data in 4 KB blocks, 1 IO from this virtual machine will issue 16 IOs on the storage. If your storage solution uses RAID6, you have a write penalty of 6. So, in this example, when a Windows guest issues 1 write of 64 KB, this results in (16 * 6) 96 IOs on your storage. Hmmm, kind of makes you think, doesn’t it?
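
To make this concrete, here is the same back-of-the-envelope calculation in a few lines of PowerShell (example numbers, and a deliberately simplified model that ignores caching and IO coalescing):

    # Back-end IO amplification for a single guest write.
    # Example: a 64 KB guest write on an array that internally uses 4 KB blocks and RAID6.
    $guestBlockKB = 64
    $arrayBlockKB = 4
    $raidPenalty  = 6                                   # RAID6 write penalty
    $splitFactor  = $guestBlockKB / $arrayBlockKB       # 16 back-end blocks
    $backendIOs   = $splitFactor * $raidPenalty         # 96 back-end IOs
    "1 guest write of $($guestBlockKB)KB -> $backendIOs back-end IOs"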

Nowadays, every storage system has some type of cache onboard. This makes it hard to determine how many IOPS a storage system can deliver. Remember: a write IO can be handled by the cache, but a read IO that is not already in cache needs to be read from disk first.

Second, you have to determine the average latency you are willing to accept on your storage solution. Every storage vendor will promote their solution as low latency storage. But this depends on where you measure that latency.

Below is an overview of where latency is introduced in a SAN or NAS solution.

Storage Latency

As you can see, there are 12 places where latency is added. Of course, you want to measure the latency as close to the virtual machines as possible. As this is hard to do if you have 5000+ virtual machines, the next best place is the VMkernel.

Again, there are several tools to determine the read and write latency in the VMkernel. ESXTOP, vCenter or tools like Architect are great examples.
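
If you prefer PowerCLI, a quick sketch like the one below pulls the device read and write latency counters from vCenter. It assumes an existing Connect-VIServer session; the counter names are the standard vCenter performance counters, but check which ones your vSphere version exposes:

    # Average device read/write latency (ms) per host over the last hour.
    Get-VMHost | ForEach-Object {
        Get-Stat -Entity $_ -Stat 'disk.totalReadLatency.average','disk.totalWriteLatency.average' `
                 -Start (Get-Date).AddHours(-1) -ErrorAction SilentlyContinue |
            Group-Object MetricId |
            Select-Object @{N='Host';E={$_.Group[0].Entity.Name}},
                          @{N='Counter';E={$_.Name}},
                          @{N='AvgMs';E={[math]::Round(($_.Group | Measure-Object Value -Average).Average, 1)}}
    }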

In my opinion, these are the maximum average latencies you want to encounter:

Type                    | Read  | Write
VMs for servers         | 2 ms  | 5 ms
VMs for desktops or RDS | <1 ms | <2 ms

As always, lower is better.

So, how do you determine which is the best storage solution for your needs?

Talk to your storage vendor. Give them the table you’ve created and ask them which type and model suits your needs, and how to configure the storage solution. This way, you’re sure that you’re getting the right storage for your needs. And if, in the future, you encounter storage problems, you can check whether the storage solution is performing according to the requirements you discussed with your storage vendor.

Dec
2
2015

New Community reward: Nutanix NTC

Yesterday Angelo Luciani (@AngeloLuciani) announced the 2016 Nutanix Tech Champions (#NutanixNTC), and I’m pleased to say: “I made it”. The list has grown since 2015, but that was to be expected as Nutanix continues its success story with the #Acropolis Hypervisor and #Webscale solution.

For those of you who don’t know what Nutanix NTC stands for, a quote:

This program recognizes Nutanix and web-scale experts for their ongoing and consistent contributions to the community and industry. It also provides them with unique opportunities to further expand their knowledge, amplify their brand, help shape the future of web-scale IT.

A complete list of all Nutanix NTCs 2016 can be found at Nutanix NTC 2016.

Last October I was delighted to achieve my #VMware #VCDX. Maybe 2016 will bring Nutanix #NPX?

Nov
4
2015

My VCDX journey

Friday 30 October 2015, 9:30 in the morning. I’m in the middle of a lesson talking about SRM and recovery plans when I see a popup in the top right of my screen: “VCDX Defense Results”.
Me to my students: “Sorry, this is not what I normally do. But I have to check this email”.

Michael Wilmsen,

Congratulations! You passed! It gives me great pleasure to welcome you to the VMware Certified Design Expert community.

Your VCDX number is 210.

My journey started about a year ago. I was contracted by a program for a city in the Netherlands to architect a 3800 seat Horizon View solution. I had already passed the VCAP DCD and DCA exams, so this was my opportunity to go for VCDX.
After 6 months, the technical design document was at version 1.0 and the implementation started. At this time I really didn’t have the time to go for VCDX. After 4 months the project was at its end and the VCDX was back on my mind. But how to go on from here?
First I asked the customer for permission to use the design for my VCDX defense. This was no problem as long as I anonymized the design. OK, no problem of course. My design was in Dutch, so I had to translate it anyway.
In June I attended Nutanix .NEXT (my design was based on Nutanix hardware). There I had a conversation with @repping. Nutanix was willing to help me out with a mentor and they arranged @vcdx026 Alexander Thoma for me. Alexander is also known as the Iron Panelist, as he did more than 100 panels when he worked for VMware. Now that he works for Nutanix he could mentor me, although still restricted by the code of conduct.
After I came back from Miami I looked at the defense dates for 2015. These were in October, but the application had to be submitted before the 25th of August. This gave me about 3 months, but my holiday with the family was about 3 weeks in August. This limited my time.
At this point I said to myself: “Either you go for it now, or you probably won’t do it for the next few years.”
I had a good conversation with my wife, because this was going to take a lot of free time and late nights. You’ve probably heard this before, but it is really important! If you don’t have the support of your family, and you don’t want a divorce, don’t start. I’m lucky enough to have a great wife who supports me all the way, although this isn’t always easy for her.
Just before my holiday I had a concept version of the design ready. My colleague and friend Joop Kramp offered to do a review. In the final week of my family holiday he provided me with the necessary feedback. As I wanted to know what his feedback was, my wife agreed that I could spend a morning in Italy going through his comments.

When I was back home, I spent the last two days before sending my application to VMware finalizing my documents. On the 25th of August 2015 at 4 PM I clicked the send button.
At this point I was more relaxed. That ended a few weeks before the defense date. During those weeks, friends and family often asked where I was with my mind. Easy: going over and over my design, thinking about possible questions I could get.
My defense date was on the 22nd of October in Staines-upon-Thames, UK. My wife had wanted to go to London for a couple of years to see The Phantom of the Opera, so I looked at possible flights and dates for the Phantom. The only possible option for the Phantom was the night before my defense. This made me a little nervous. Didn’t I have to go over my design one more time the night before? But when I thought it over, I said to myself: “Your defense doesn’t depend on 1 evening the night before. If that’s the case, you are not ready to defend.”
So I went to The Phantom of the Opera with my wife and really enjoyed it. The next morning we drove to Staines.
At 2 PM my defense started. I thought I would be nervous, but this wasn’t the case. Actually, I was really relaxed and looking forward to the defense.
The defense part I really enjoyed. It was more a conversation between like-minded people. Although I was told that you don’t know what the panel thinks about your answers, I felt they understood me. In my case I had some weird assumptions, as they were political. For these I had a good answer as to why that was.
The design and troubleshooting parts were more frustrating. You’re constantly thinking: “Am I not forgetting something?”
After the defense was over, my wife asked me how it went. My response: “OK, I think. But I really don’t know if it’s enough. Did I give the correct in-depth answers the panel was looking for?”
The rest you know. I’m VCDX number 210!

Do I have some tips that you haven’t already found by Googling the internet? There is one thing I can think of.
If you’re allowed to defend, your design is technically approved. The defense part is about knowing your design and being able to defend it. It is not about knowing how, for example, Host Isolation Response works; you have to explain why you made the decision to go for Power Off. In other words, you’re not tested on your technical knowledge. You have already proven that by passing your VCAP exams and being allowed to defend.

Of course I want to thank a few people.
First of all, my wife, Marjolein. Again. Thanks honey! You made this possible for me!
Alexander Thoma, the Iron Panelist! Thanks for our late night WebEx discussions and clear DMs when I asked you a question. They were short and clear, just the way I like them.
Niels Hagoort (VCDX212), Rutger Koster (VCDX209) and Leo Scheltema, my VCDX study buddies. We had some good discussions and a nice mock exam in my pub.
Raymon Epping. Thanks for your support and that of Nutanix, by providing me with feedback and a mentor.
Michael Webster (VCDX066). Thanks for the mock exam. Although you couldn’t give me feedback on whether I did OK, it helped me a lot in understanding how a defense goes.
Duncan Epping (VCDX007) and Frank Denneman (VCDX029). You guys made me believe that I was able to go for VCDX. This was the real first step for me. Frank, thanks for the tips that made me more relaxed in the days before my defense.

A few days ago I read a tweet asking: “If I want to go for VCDX, what is the first step?”. My response: “This tweet is your first step.”
If you want to go for VCDX, go! It’s better to fail trying than not to try at all.
And if you need a mentor, you can contact me.

Sep
16
2015

Determine your vSphere storage needs – Part 1: Capacity

Currently I’m working on a project where I have been asked to determine the storage requirements for a new storage solution. The customer is going to run a VMware vSphere 6 environment in an active-passive twin datacenter setup.

As I was gathering the customer requirements, I thought I would write a blog post about how this process goes while I’m working on it.

When designing a vSphere storage setup, you have to design the following sections:

  1. Capacity
  2. Performance
  3. Availability
  4. Security
  5. Connectivity

Let’s start with the first one, capacity.

Capacity is the easiest part of the design, and with most storage solutions it’s easy to extend. You can use the following formula to determine your total RAW storage capacity need:

((TotalVMs * (OSDriveCapacityGB + AvgMemSizeVMNotReserved)) + TotalDataGB) * (1 + RestoreCapacity% + SnapshotReserveCapacity%) = TotalGB

For example:

  • You have 1000 VMs.
  • Every VM has a C-Drive (or root) of 50GB.
  • The average memory size of a VM is 8 GB, of which nothing is reserved.
  • You want to reserve 10% for backup restores.
  • You want to reserve 5% for snapshots.
  • Total data capacity is 50TB

 

This makes:

((1000 * (50 GB + 8 GB)) + 50,000 GB) * (1 + 10% + 5%) = 124,200 GB, or roughly 124 TB
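
The same calculation as a small PowerShell snippet, so you can plug in your own numbers (the values below are only the example values from above, using 1 TB = 1000 GB):

    # RAW capacity estimate based on the formula above. Example values only.
    $totalVMs       = 1000
    $osDriveGB      = 50
    $avgMemNotResGB = 8        # unreserved VM memory ends up as a swap file on the datastore
    $totalDataGB    = 50000    # 50 TB of data
    $restorePct     = 0.10
    $snapshotPct    = 0.05
    $rawGB = (($totalVMs * ($osDriveGB + $avgMemNotResGB)) + $totalDataGB) * (1 + $restorePct + $snapshotPct)
    "{0:N0} GB (about {1:N0} TB) RAW capacity" -f $rawGB, ($rawGB / 1000)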

This is the RAW capacity available to the VMware vSphere hosts, and it doesn’t take RAID overhead and storage features like thin provisioning, deduplication and compression into account. These storage features are most of the time a reason for a discussion about whether you want to use them and what the impact on performance is.

Using features like compression and deduplication depends on the use case.

I can imagine that you want to use compression and deduplication for archiving purposes, and not for production virtual machines like MS SQL or SAP, because of the potential performance impact.

The RAID level used by the storage solution determines the RAID overhead and how you size the storage solution. Do you configure 1 large container containing all disks? Or do you create multiple containers? This again depends on the type of storage solution, and the capacity and performance requirements you have. The performance part I will cover in a later blog post. For now we focus on the storage capacity.

VMware has 3 types of VMDKs:

  • Thin
  • Thick (Lazy)
  • Eager Zero Thick

I’m not going to explain the VMDK types in depth, because this is well covered in the blog posts Death to false myths: The type of virtual disk used determines your performance and Thin or thick disks? – it’s about management not performance, and in http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf.

If you’re going to use thin provisioned VMDKs in your environment, you can subtract the free space inside the VMDKs from your RAW capacity needs. Do take into account that thin VMDKs will grow, so you have to monitor the free capacity of your storage solution to make sure that it won’t fill up. A full datastore will result in virtual machines becoming unavailable.
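
That monitoring can be as simple as a scheduled PowerCLI report. A sketch, assuming an existing Connect-VIServer session and an example threshold of 20% free space:

    # Report datastores that are running low on free space (thin VMDKs can outgrow them).
    Get-Datastore |
        Select-Object Name,
            @{N='CapacityGB';E={[math]::Round($_.CapacityGB, 0)}},
            @{N='FreeGB';E={[math]::Round($_.FreeSpaceGB, 0)}},
            @{N='FreePct';E={[math]::Round(($_.FreeSpaceGB / $_.CapacityGB) * 100, 0)}} |
        Where-Object { $_.FreePct -lt 20 } |
        Sort-Object FreePct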

How your (new) storage solution is going to provide this RAW capacity depends on the type of storage solution. In my opinion, you have to talk with your storage vendor about the best configuration based on your company’s needs.

What you do need to take into account is how easy it is to extend the capacity of your storage solution. The new storage solution will probably run for the next 5 years, and there is no way you can determine the capacity needs for the next 5 years. When you have to add capacity, do you have to reconfigure storage pools, LUNs and/or RAID groups? Can this be done on the fly, without impacting production workloads?

Of course, performance will definitely impact the setup and configuration of your storage solution. This I will cover in a later blog post.

Aug
30
2015

Silicon Valley Road Trip – Day 2

Today we had the pleasure of visiting two other companies in Silicon Valley: Nutanix and PernixData. Although these 2 companies cannot be called startups anymore, they had quite some interesting stuff to show us.

At Nutanix we had a 1 hour meeting with @stevenpoitras (who wrote the Nutanix Bible). If you have ever worked with Nutanix, you have probably visited his web page. If not, you should!

Nutanix and their solution are new to me. So what I really liked is that Steven didn’t present a slide deck but just asked what we would like to talk about. Before we knew it, we were in a discussion about the position of Nutanix, their new Acropolis Hypervisor and the impact on existing VMware, KVM and OpenStack customers. Although sometimes the filter kicked in about what Steven can and cannot say, I really liked the discussion. After 1.5 hours Steven noticed that it was time for lunch, and with a few slices of pizza the discussion continued. Steven told us that most of their Acropolis customers (hypervisor based on KVM) are in Asia and that we only see the tip of the iceberg (and then the filter kicked in again 🙂 ).
After more than 2.5 hours we had to go, but we weren’t finished!

Next was PernixData with @FrankDenneman. After a tour through their building (including the nicely transformed pictures of Star Wars characters with the faces of @SatyamVaghani and Frank Denneman), we were offered a beer (it was after noon) and went to the boardroom.
Here Frank showed us their latest product, FVP Architect. It comes with a VIB module in the VMware hypervisor that gathers all storage metadata and sends it to a database where it’s crunched. This gives VMware and storage administrators a realtime and historical overview of how their environment is performing and what can be improved. It also gives an in-depth view of the type of storage workload and which virtual machine is generating this workload, giving you the option to adjust this VM instead of just buying more IOPS for your storage.
After a while we had a discussion about what happens when a virtual machine writes a block of data and how this is handled by the VMware kernel on its way to storage. I had a misconception that the VMkernel always writes in 8 KB blocks to VMFS (8 KB is used for sub-allocation). So Frank pulled in one of the VMFS inventors, @mnv104, and we were taught a lesson in how the VMkernel sends data to the storage layer. Wow!

I know Frank has told me a lot of times that most people who were working on the cool stuff in the VMkernel now work at PernixData, but now I experienced it!
After a quick visit from Satyam it was time to leave. This was really a cool visit!

Aug
28
2015

Silicon Valley Road Trip – Day 1

On the 27th and 28th of August I’m having a road trip through Silicon Valley with fellow colleagues @rutgerkoster, @nielshagoort and @frankdenneman.

These days we’re going to visit 4 startup companies who are so kind as to spend a couple of hours with us to tell us about their latest innovations.

Yesterday we had the pleasure of visiting Rubrik and Platform9.

Rubrik developed a Converged Data Management time machine/backup appliance for the midrange and upper market that can be set up in 15 minutes. Chris Wahl took us through a 1 hour presentation which he is also going to give at VMworld in San Francisco and probably also in Barcelona.

The Converged Data Management solution of Rubrik can be scaled from 3 nodes to infinity, enabling a backup solution with an unlimited retention time without the need for tapes. And let’s be honest: tape should be dead, but if you want a retention time of more than a year it’s hard to get around tapes these days. Their solution can be attached to the Amazon cloud with S3, so you’re able to do a tier 2 backup in house (or in a second datacenter) and a tier 3 backup in the cloud. When a file or a virtual machine needs to be restored, you just query the database of Rubrik (which is a Cassandra database) for that specific file or virtual machine and the version you want to restore. It doesn’t matter if this file is situated in your private datacenter or in the public cloud; this is completely transparent and encrypted. Using the cloud for tier 3 backup can be useful for companies who need a long retention time, like legal or health care.

I especially like 2 things about their solution.

In traditional backup solutions you create a backup job where you configure several settings like destination and retention time. In Rubrik you create a backup policy and attach this policy to a virtual machine. This policy has all the necessary settings configured. The dashboard of Rubrik gives an overview of how long you can keep your backups based on the current capacity. If after a few years you need more capacity, you just add more nodes to the solution, enabling you to grow on demand whenever you need to.

Because Rubrik is API driven, you can create custom backup scripts as you need. Currently Rubrik has no support for application aware backups, for example Exchange. This is one thing they are definitely looking into. But with the API (which is well documented) it shouldn’t be that hard to create consistent application aware backups.

Platform9 was so kind as to tell us their story during lunch. Platform9 developed a cloud solution for administering and automating a multi-hypervisor environment in the private cloud, which can be globally situated. Their main focus is OpenStack, but Docker, KVM and VMware are also supported.

In the first part of our meeting we saw a slide deck presented by Sirish Raghuram. After about 30 minutes the projector was turned off and we had a really nice and inspiring discussion with Sirish Raghuram and Madhura Maskasky about their product and what the world needs at this moment.

Traditional solutions are normally VM based and not workload based. Meaning that if your webserver environment needs more performance, you add more performance to your webserver environment. This can be a web server, or maybe a proxy with a load balancer configured, or both. Workload based is a completely different approach to deploying services: it doesn’t matter where it runs or on which hypervisor it runs, you just need the capacity for this type of service.

That there is going to be a multi-hypervisor world in the next couple of years is for sure. I know that a lot of companies don’t want to support more than 1 hypervisor because of the cost of managing different platforms, but what if you have 1 management tool that can do it all? Yes, of course you need to support vSphere, KVM and/or OpenStack. But I’m convinced the installation and maintenance of these hypervisors will be simplified in the next couple of years. And does the other hypervisor need to be on-premises? Or can this also be the public cloud?
A capacity calculation is no longer needed if you can just add resources to your environment as you grow or shrink. Yes, of course there are licenses involved. But the licensing model as we know it right now will change. It has to! The number 1 cost in a datacenter at the moment is licenses. This is why the public cloud can be interesting for companies: you do not need to worry about licensing, hardware costs, etc. But I can imagine you don’t want all your data in the cloud. Putting your data in the cloud is easy; getting it out can be painful. So a transparent solution supporting multiple hypervisors is a welcome solution!