Jun 22, 2016

Nutanix .NEXT 2016 presentation: VCDX design for a 4000 seat Horizon View Deployment

Today I had the pleasure of presenting a session at Nutanix .NEXT 2016 in Las Vegas. My session “VCDX design for a 4000 seat Horizon View Deployment” was about my VCDX journey, the choices that I had to make, and the problems I ran into.


Although this isn’t the first time I’ve presented a session, it was the first time in English, and in front of a room where 90% of the crowd were native speakers. I’m very pleased with how it went: the room had a good vibe and I got good feedback.


I’ve always liked presenting: sharing your knowledge and opinions, and getting feedback from the crowd. It always gives me positive energy. This is definitely something I will be doing more of in the future.

Although my presentation can probably be found on the .NEXT website, I promised to share it. If you want a copy, please contact me on Twitter: @WilmsenIT.

Jun 17, 2016

Do IOPS matter?

I’ve been designing VMware vSphere clusters for the last 10 years now, and with every design the storage part is one of the most challenging. An improper storage design results in poor virtual machine performance.

Over the years, storage vendors have added all kinds of optimizations to their solutions, usually in the form of cache. Almost every vendor has added flash as a caching tier. Some only cache reads, but most cache both reads and writes.

With this cache, most vendors claim that their solution can handle 100,000 IOPS or more. But we all know that adding a flash drive that can handle 100,000 IOPS won’t give you 100,000 IOPS in your vSphere environment.

We also have to deal with different block sizes, read/write ratios and write penalties.
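To get a feel for why a device’s rated IOPS don’t translate one-to-one into usable IOPS, here is a minimal sketch (in Python, with made-up numbers) of a common back-of-the-envelope estimate that factors the read/write ratio and the RAID write penalty into the front-end IOPS you can actually expect:

```python
def usable_frontend_iops(rated_backend_iops, read_ratio, write_penalty):
    """Estimate usable front-end IOPS from the raw back-end IOPS a device is rated for.

    Every front-end read costs roughly 1 back-end I/O; every front-end write
    costs `write_penalty` back-end I/Os (e.g. 2 for RAID 10, 4 for RAID 5,
    6 for RAID 6).
    """
    write_ratio = 1.0 - read_ratio
    backend_cost_per_frontend_io = read_ratio + write_ratio * write_penalty
    return rated_backend_iops / backend_cost_per_frontend_io


# Illustrative numbers: a flash drive rated at 100,000 IOPS, a 60/40
# read/write workload, and a RAID 6 write penalty of 6.
print(usable_frontend_iops(100_000, read_ratio=0.6, write_penalty=6))  # ~33,333
```

So the “100,000 IOPS” drive delivers roughly a third of that to the workload in this scenario, before cache, block size mismatches or network latency are even taken into account.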

What I see is that, most of the time, the storage processor or the storage area network is the bottleneck.

In the near future, NVMe will be common in our datacenters, and its successor 3D XPoint will follow shortly after. While NVMe can deliver around 1,800 MB/s sequential read/write speeds and 150K/80K random IOPS, 3D XPoint is claimed to be up to 1,000x faster. Both have an average latency of less than 1 ms.

This is going to change the way we design our storage solutions!

@vcdxnz001 wrote a great article about the storage area network and the speeds that are involved with the upcoming flash devices.

So, to get back to the question: Do IOPS matter?

If we have an all-flash array, consisting of SSD, NVMe or 3D XPoint, IOPS are no longer the problem. All types of flash can deliver plenty of IOPS.

What matters is latency. Yes, all flash devices are low latency (<1 ms), but every I/O has to be processed by the storage processor (in your array, or by your CPU if you’re running a hyper-converged solution) and by the storage area network. These two components will determine your latency, and thus the performance of your virtual environment.

Therefore, IOPS are no longer a concern when you’re designing your storage.

When you design your storage solution, determine the highest latency you are willing to accept and monitor it carefully. Design your whole stack, from HBA to disk, for low latency. If your latency is low, IOPS won’t be a problem.
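As a way to make that concrete, here is a minimal sketch of a latency budget for the whole stack. The per-component numbers are purely illustrative assumptions, not measurements; the point is to write down a target, attribute a share of it to each component, and check the sum:

```python
# Illustrative latency budget, from guest to media. Replace these numbers
# with figures measured in (or targeted for) your own environment.
latency_budget_ms = {
    "guest / vSCSI":     0.05,
    "HBA and driver":    0.05,
    "fabric / network":  0.10,
    "storage processor": 0.30,
    "flash media":       0.20,
}

target_ms = 1.0  # the highest average latency you decided to accept

total_ms = sum(latency_budget_ms.values())
print(f"Estimated end-to-end latency: {total_ms:.2f} ms (target: {target_ms:.2f} ms)")
if total_ms > target_ms:
    print("Budget exceeded: redesign the component(s) with the largest share.")
```

Notice that in this (made-up) breakdown the storage processor and the network are the biggest contributors, which is exactly the point made above.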

Feb 16, 2016

Determine your vSphere storage needs – Part 3: Availability, Security and Connectivity

This is the last part of the mini-series: Determine your vSphere storage needs.

In this part, we’re going to cover 3 subjects:

  • Availability
  • Security
  • Connectivity

Although important, these aren’t the areas where you have many options.

Availability

When we talk about availability in a storage solution, we’re actually talking about high availability. Most (enterprise) storage solutions are highly available by design: almost all components are redundant, meaning that in case of a component failure a second component takes over instantly. So this type of availability we don’t have to design. What we may have to determine is what happens when your storage solution is part of a Disaster Recovery (DR) solution. In that case we first have to determine the RPO and RTO for our environment. This is something you have to ask the business: how long may your storage (including all data) be unavailable (RTO), and when it is unavailable, how much data may be lost, in terms of seconds, minutes, hours or days (RPO)?

Let’s assume that our storage is located in 2 datacenters.

When you have an RPO of 0, you know you must have a storage solution that supports synchronous replication. This means that if you write in one datacenter, the write is instantly done in the second datacenter as well. The writer (VMware vSphere) only gets an acknowledgement back when both storage solutions can confirm that the write has finished.

This is the most expensive option. The connection between the two datacenters has to be low latency and high capacity (depending on your change rate).

In most cases synchronous replication also provides a low RTO. If and how the data becomes available in case of a DR depends on your storage solution. With active-active you probably won’t have to do much: in case of a DR, the data is instantly available. With active-passive, you have to pull the trigger to make the data available on the passive side. This can be done manually (through a management interface), or automatically by a script or VMware SRM.

When you have an RPO greater than 0, you have the option to go for asynchronous replication. In this case the write in the second datacenter can be acknowledged later than the first one. You can also replicate data once an hour, day, week, etc. The choice is yours.

If and how that data becomes available in case of a DR is the same as for the active-passive option in the previous section (RPO = 0).
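To make the “depending on your change rate” remark a bit more concrete, here is a minimal sketch (with hypothetical numbers) that estimates the bandwidth an inter-datacenter link needs to keep up with interval-based asynchronous replication:

```python
def required_link_mbps(changed_gb_per_interval, interval_minutes):
    """Minimum average bandwidth needed to ship one interval's worth of
    changed data before the next replication cycle starts."""
    changed_mbit = changed_gb_per_interval * 8 * 1024   # GB -> Mbit
    return changed_mbit / (interval_minutes * 60)        # Mbit per second

# Hypothetical example: replicate every hour, ~20 GB of changed data per
# hour, over a 100 Mbit/s inter-datacenter link.
need = required_link_mbps(changed_gb_per_interval=20, interval_minutes=60)
link_mbps = 100
print(f"Need at least {need:.1f} Mbit/s on average; the link provides {link_mbps} Mbit/s")
print("OK" if need <= link_mbps else "The link cannot keep up; the RPO will slip")
```

For synchronous replication the same logic applies to the peak write rate rather than an interval average, and latency on the link becomes just as important as capacity.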

Security

Most of the time, securing your storage solution comes down to determining who can access the storage management interface and which servers can access your data. Usually the storage network is a dedicated, non-routed network which cannot be accessed externally.

Normally I advise a dedicated storage management server where the management software runs. If you make sure that this server is only accessible from your management workstations, your storage management is secure enough for most implementations.

How you control which servers can access your storage depends on the protocol you’re using. To sum it up:

  • FC -> Based on WWN
  • iSCSI -> Based on IQN
  • NFS -> Based on IP address.

The choice of protocol also determines the way you can secure your storage. Talk to your storage vendor about best practices for securing your storage network.

Connectivity

And that brings us to the last part, connectivity.

As noted in the security part, with VMware vSphere we have 3 connectivity options:

  • Fibre Channel (FC)
  • iSCSI
  • NFS

So, what’s the best protocol? As always in IT, the answer to this question is: It depends.

It depends on your storage solution. Every storage solution is built around certain design principles, and those principles make it unique. They also determine the best storage protocol for that solution. Of course, almost every storage solution supports two or more protocols, but only one performs best. You probably know that FC is, in theory, the fastest protocol. But what if your storage solution implements NFS most efficiently? Then you’ll probably choose NFS.

So ask your vendor, especially if you made them responsible for the performance part, as discussed in part 1 of this series.

This ends the series on determining your storage needs. Although you can design and determine a lot more, this series will give you a head start.

Jan 20, 2016

Determine your vSphere storage needs – Part 2: Performance

Besides capacity, performance is probably one of the most important design factors. Performance comes down to two things:

  • IOPS
  • Latency

The first step is to determine your performance needs in IOPS. This is probably the hardest part. In order to determine your IOPS needs, you have to know which types of virtual machines or applications you’re going to run on your storage solution, and their corresponding I/O characteristics.

You can take 2 approaches to determine this:

  • Theoretical
  • Practical

In the theoretical approach you categorize the types of virtual machines you’re going to host on your storage solution. For example:

  • Database
  • Email
  • Windows
  • Linux
  • Etc

Then you determine, per type, the corresponding block size, read/write ratio, and whether the data is sequential or non-sequential.

In the practical, or real-life, method we’re going to use a tool to measure the type of I/O issued by your current virtual machines. There are a couple of tools available for this. One of them is PernixData Architect. This tool gives in-depth information about your storage I/O workload, including the read/write ratio and which block sizes are used.

Of course you can use vCenter to see your current read/write ratio, but not the block size that is used. You can even use esxtop in combination with perfmon to determine your I/O workloads. That last option isn’t the easiest way, but it’s an option if you don’t have vCenter or budget for Architect.
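If you do go the export route, a minimal sketch like the one below can do the aggregation. Note that the file name and the column names ("vm", "read_iops", "write_iops") are assumptions about a pre-processed CSV export, not the exact format vCenter or esxtop produces:

```python
import csv
from collections import defaultdict

# Aggregate per-VM read/write IOPS from a (hypothetical) CSV export.
totals = defaultdict(lambda: {"read": 0.0, "write": 0.0, "samples": 0})

with open("vm_io_stats.csv", newline="") as f:   # hypothetical export file
    for row in csv.DictReader(f):
        t = totals[row["vm"]]
        t["read"] += float(row["read_iops"])
        t["write"] += float(row["write_iops"])
        t["samples"] += 1

for vm, t in totals.items():
    avg_read = t["read"] / t["samples"]
    avg_write = t["write"] / t["samples"]
    total = avg_read + avg_write
    read_pct = 100 * avg_read / total if total else 0
    print(f"{vm}: {total:.0f} avg IOPS, {read_pct:.0f}% read / {100 - read_pct:.0f}% write")
```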

You can make a table like this.

Type      #VMs   Block size (KB)   Avg. IOPS   %Read   %Write   Sequential   Note
Windows   2000   4                 50          60      40       No           Basic application servers
Linux     1000   4                 50          60      40       No           Basic application servers
MS SQL    4      64                1000        65      35       No           2 for production, 2 for DevOps

Note: values are only an example. Your environment can (and probably will) be different.

So, why would you want to determine the read/write ratio and block size?

Every storage solution has its own way of writing data to the disks. The performance depends on the block size, read/write ratio and RAID level (if any) used. A Windows virtual machine can use a block size of 64 KB. When a storage solution writes data in 4 KB blocks, one I/O from this virtual machine will issue 16 I/Os on the storage. If your storage solution uses RAID 6, you have a write penalty of 6. So in this example, when a Windows guest issues one 64 KB write, this results in 16 × 6 = 96 I/Os on your storage. Kind of makes you think, doesn’t it?
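The same arithmetic, as a minimal sketch (using the numbers from the example above, which are illustrative only):

```python
import math

def backend_ios(guest_io_kb, storage_block_kb, write_penalty, is_write=True):
    """Back-of-the-envelope count of back-end I/Os caused by a single guest I/O."""
    # The guest I/O is split into storage-sized blocks...
    split = math.ceil(guest_io_kb / storage_block_kb)
    # ...and every resulting write is multiplied by the RAID write penalty.
    return split * (write_penalty if is_write else 1)

# One 64 KB Windows write, 4 KB storage blocks, RAID 6 (write penalty 6):
print(backend_ios(guest_io_kb=64, storage_block_kb=4, write_penalty=6))  # 96
```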

Nowadays every storage system has some type of cache on board. This makes it hard to determine how many IOPS a storage system can deliver. But remember: a write I/O can be handled by the cache, whereas a read I/O that is not already in cache needs to be read from disk first.

Second, you have to determine the average latency you are willing to accept on your storage solution. Every storage vendor will promote their solution as low-latency storage, but whether that holds depends on where you measure the latency.

Below is an overview of where latency is introduced in a SAN or NAS solution.

[Figure: Storage latency]

As you can see, there are 12 places where latency is added. Of course, you want the lowest latency in the virtual machines themselves, but as this is hard to determine when you have 5,000+ virtual machines, the next best place to measure is the VMkernel.

Again, there are several tools to determine the read and write latency in the VMkernel. esxtop, vCenter or tools like Architect are great examples.

In my opinion, these are the maximum average latencies you want to encounter:

Type                      Read    Write
VMs for servers           2 ms    5 ms
VMs for desktops or RDS   <1 ms   <2 ms

As always, lower is better.
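A minimal sketch of how you could check measured VMkernel latencies against these thresholds; the measured values below are made-up examples, and in practice they would come from esxtop, vCenter or a monitoring tool:

```python
# Thresholds from the table above, in milliseconds.
THRESHOLDS_MS = {
    "server":  {"read": 2.0, "write": 5.0},
    "desktop": {"read": 1.0, "write": 2.0},   # desktops / RDS
}

# Made-up measurements; replace with real data from your monitoring tool.
measurements = [
    {"vm": "app01",  "type": "server",  "read": 1.4, "write": 3.2},
    {"vm": "vdi042", "type": "desktop", "read": 1.8, "write": 2.5},
]

for m in measurements:
    limits = THRESHOLDS_MS[m["type"]]
    for op in ("read", "write"):
        if m[op] > limits[op]:
            print(f"{m['vm']}: {op} latency {m[op]} ms exceeds the {limits[op]} ms threshold")
```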

So, how do you determine which is the best storage solution for your needs?

Talk to your storage vendor. Give them the table you’ve created and ask which type and model suits your needs, and how to configure the storage solution. This way you’re sure that you’re getting the right storage for your needs. And if, in the future, you encounter storage problems, you can check whether the storage solution is performing according to the requirements you discussed with your storage vendor.

Dec 2, 2015

New Community reward: Nutanix NTC

Yesterday Angelo Luciani (@AngeloLuciani) announced the 2016 Nutanix Tech Champions (#NutanixNTC), and I’m pleased to say that I made it. The list has grown since 2015, but that was to be expected as Nutanix continues its success story with the #Acropolis Hypervisor and #Webscale solution.

For those who don’t know what Nutanix NTC stands for, a quote:

This program recognizes Nutanix and web-scale experts for their ongoing and consistent contributions to the community and industry. It also provides them with unique opportunities to further expand their knowledge, amplify their brand, help shape the future of web-scale IT.

A complete list of all Nutanix NTCs for 2016 can be found at Nutanix NTC 2016.

Last October I was delighted to achieve my #VMware #VCDX. Maybe 2016 will bring the Nutanix #NPX?