Testdrive Infinio Accelerator 3.0

Nowadays, many legacy storage arrays (SAN/NAS) offer the option of hosting flash devices in their solution. Flash devices deliver high IOPS at low latency.
The most commonly used storage protocols (iSCSI, NFS and even Fibre Channel) are optimized for bandwidth, not for latency. Of course, Fibre Channel (FC) has lower latency than Ethernet-based protocols like iSCSI and NFS, but a remote device will still have higher latency than a local device. In this blog post from Mellanox, they explain why, in their view, FC is doomed.

In the past I wrote a blog post explaining why you want your flash devices as close to your applications as possible. This is also why Hyper-Converged Infrastructure (HCI) is so popular nowadays.

But what if you have a legacy storage solution, and still want low latency for your applications?

VMware vSphere has a feature called vSphere Flash Read Cache (vFRC), which uses a local flash device to cache read IOs.
In my opinion vFRC has one major disadvantage: you have to specify the cache block size, and that block size is determined by your application(s).
Let us assume that you actually know the block size used by your application. Will every application use the same block size? Probably not. This makes vFRC harder to use.
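
To illustrate that point, here is a minimal sketch of how a vFRC reservation and block size could be set on a virtual disk with pyVmomi. This is not part of the Infinio setup described below; the reservation, block size and the way the disk is looked up are example values I chose for illustration.

# Hedged sketch: setting a per-VMDK vFRC reservation and block size via pyVmomi.
# Assumes 'vm' is a vim.VirtualMachine object you have already looked up, and
# that the first VirtualDisk on the VM is the one you want to cache.
from pyVmomi import vim

def enable_vfrc(vm, reservation_mb=1024, block_size_kb=8):
    """Reserve vFRC cache for the first virtual disk of a VM (illustrative)."""
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))

    cache = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo()
    cache.reservationInMB = reservation_mb  # cache carved out of the host's VFFS
    cache.blockSizeInKB = block_size_kb     # has to match your application's IO size

    disk.vFlashCacheConfigInfo = cache
    dev_change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_change]))

The point is that the block size above has to be chosen per VMDK, which is exactly the guesswork Infinio removes.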

With VMware vSphere 6.0, VMware introduced the vSphere APIs for IO Filtering (VAIO). Vendors can use VAIO to intercept the IO of a virtual machine and do smart things with it.

Infinio is such a vendor, and today I tested their solution: Infinio Accelerator 3.0.

 

Infinio Accelerator 3.0

Infinio Accelerator uses VAIO to accelerate read IOs using RAM, optionally combined with flash devices (RAM is always required). When using flash devices, you first configure the host’s virtual flash resource (VFFS) on those devices. Once that is done, Infinio takes over, so you don’t have to specify a block size for your applications.
Note: You always need RAM equal to at least 0.5% of the flash capacity; this is used for storing metadata.
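
As a quick sanity check on that rule, a tiny calculation (the 400 GB flash device below is just an example value):

def metadata_ram_gb(flash_capacity_gb, ratio=0.005):
    # RAM needed for Infinio metadata: at least 0.5% of the flash capacity.
    return flash_capacity_gb * ratio

print(metadata_ram_gb(400))  # a 400 GB flash device needs at least 2.0 GB of RAM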

Installation

The installation of Infinio is straightforward. You get a Windows installer, which deploys the Infinio appliance to a vSphere cluster. After you have configured the network settings (this can be done on the appliance console), you connect to the Infinio Accelerator dashboard using a web browser.

When you select the Infinio vApp in the vSphere Web Client, you will see a warning that VMware Tools is out of date. This is nothing to worry about; a vApp can be configured so that this check isn’t performed, which is a small change Infinio could make for the next release.

Configuration

To start accelerating a virtual machine, click the blue “Accelerate VMs” button (top right). Here you can select a storage policy. For this test, I selected the simple setup.

In the virtual machine section, you select the virtual machine you want to accelerate.

If this is the first time, you have to install the storage policies and the Infinio software on the ESXi hosts in your cluster. This is also pretty straightforward: just select whether you want to use flash or RAM and the amount of space you want to use. Wait a minute and everything is configured.

Now your VM is accelerated.

Performance test

Now that we’ve finished the installation and configuration, we want to know how much Infinio can accelerate our application.

For this performance test, I’m using IO Analyzer from the VMware Flings website. There are many storage performance testing tools available, but for this POC, I just wanted to run a simple random read/write workload and see how it is accelerated.

I’m using the following IO Analyzer workload:

8k_50%Read_100%Random
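
For readers who don’t use IO Analyzer: an approximately comparable workload can be generated with fio. The sketch below drives fio from Python; the device path, job name and queue depth are placeholders I picked, and the 900-second runtime matches the test duration used below.

# Hedged sketch: an 8k, 50% read, 100% random workload with fio, driven from Python.
import subprocess

fio_cmd = [
    "fio",
    "--name=8k_50read_100rand",   # placeholder job name
    "--filename=/dev/sdb",        # placeholder test device, adjust to your setup
    "--rw=randrw",                # 100% random access
    "--rwmixread=50",             # 50% reads / 50% writes
    "--bs=8k",                    # 8 KB block size
    "--ioengine=libaio",
    "--direct=1",
    "--iodepth=32",               # assumed queue depth
    "--time_based", "--runtime=900",
]
subprocess.run(fio_cmd, check=True)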

For the first test, I ran the workload for 900 seconds without acceleration. The following picture shows the result.

Then I enabled acceleration for this VM in the Infinio dashboard. The following picture shows the result.

As you can see, for this test we got more than 170% more performance.
What surprised me was that not only the reads had improved, but also the writes.

This is because the NAS used in this test can only deliver a total of 15,000 IOPS. When Infinio offloads the read IOs, more IOPS are available for writes!
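
A quick back-of-the-envelope calculation shows why. The 15,000 IOPS array limit comes from this test; the 80% cache hit rate below is an assumed example, not a measured value.

ARRAY_IOPS = 15_000  # total IOPS the NAS in this test can deliver

def max_app_iops(read_fraction, cache_hit_rate):
    # Total application IOPS sustainable when cached reads bypass the array.
    array_fraction = read_fraction * (1 - cache_hit_rate) + (1 - read_fraction)
    return ARRAY_IOPS / array_fraction

baseline = max_app_iops(read_fraction=0.5, cache_hit_rate=0.0)  # 15,000 IOPS total
cached = max_app_iops(read_fraction=0.5, cache_hit_rate=0.8)    # 25,000 IOPS total
print(f"writes/sec without cache: {baseline * 0.5:,.0f}")       # 7,500
print(f"writes/sec with cache:    {cached * 0.5:,.0f}")         # 12,500

With half of the workload being writes and 80% of the reads served from cache, the same array suddenly has room for 12,500 writes per second instead of 7,500.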

Next, I was curious what the total read IOPS would be if I ran a 100% read workload with 0% random IO. This workload writes one IO, which is cached in memory, and then reads that same IO as many times as possible.

As you can see, the total number of IOPS is amazing. In this test I didn’t use random data, but imagine your database being accelerated in RAM. :) Again, this is a synthetic workload, not a real-life workload.

What will the result be when we do the same read test, but with 100% random data?

Still quite impressive.

 

To conclude

The installation and configuration of Infinio Accelerator is simple: just upload the appliance software to your cluster and configure the amount of memory you want to use for acceleration.
Although Infinio Accelerator only accelerates read IOs, your write IOs will also improve because your storage system is offloaded. Do keep in mind that in order to accelerate a read IO, that IO has to be in the cache; if it isn’t, it has to be read from your storage system.

If you have a legacy storage system and want to boost its performance, you can use Infinio Accelerator to do so.
Infinio has a 30-day evaluation version available, so make sure you test-drive their software to verify that your workloads will benefit from the acceleration.

 

About Michael
Michael Wilmsen is an experienced VMware architect with more than 20 years in the IT industry. His main focus is VMware vSphere, Horizon View and hyper-converged infrastructure, with a deep interest in performance and architecture. Michael is VCDX 210 certified, has been awarded the vExpert title since 2011, and is a Nutanix Tech Champion and Nutanix Platform Professional.
