Today we are announcing several new and exciting features that establish FVP as the industry’s premier enterprise-class platform for server-side storage intelligence.
With these new features, FVP is taking an even bigger role in datacenter design by being media agnostic, workload agnostic, topology aware, and tightly integrated with existing data services.
Satyam Vaghani will be introducing these features (with demos) in his Storage Field Day presentation today at 1 PM PST. (This link will also host recorded versions of his presentation after the event.) In addition, I will publish a collection of articles in the coming weeks covering these features in more detail.
In the meantime, below is a quick preview:
FVP Clustering™ using RAM (i.e. Distributed Fault Tolerant Memory)
This is perhaps the most exciting feature being announced today. FVP can now turn RAM into a fault tolerant acceleration tier! For the first time, volatile memory can be used as part of an enterprise-class storage architecture, delivering mind-boggling performance in the form of extremely low latency that is predictable and persistent.
The beauty of DFTM is that it leverages the acceleration media connected to the fastest bus inside a computer and the resource closest to the CPU. No hardware or software configuration is needed. FVP integrates directly into the kernel, aggregating memory into a pool of acceleration resources that can be resized dynamically without reboots or impact on VM operations. Best of all, there is no need to run and manage a virtual appliance.
Want to increase the pool of resources? Just add more memory. Want to reduce assigned memory? FVP lets you do it on the fly.
In reality, you already had the resources available in your servers to get the best storage performance. However, up until now you didn’t have the software to exploit this raw power. FVP changes all that, revolutionizing the way you look at your compute layer. With DFTM, you can leverage the full potential of your current datacenter configuration.
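To make the dynamic resizing behavior described above concrete, here is a minimal sketch of a host-side memory acceleration pool that grows and shrinks on the fly. All names (`MemoryPool`, `add_capacity`, `reduce_capacity`) are hypothetical illustrations, not FVP's actual implementation:

```python
# Hypothetical sketch of an acceleration pool resized without reboots.
# Class and method names are illustrative only -- not FVP's real API.

class MemoryPool:
    def __init__(self):
        self.capacity_gb = 0   # total host RAM assigned to acceleration
        self.used_gb = 0       # RAM currently holding cached blocks

    def add_capacity(self, gb):
        """Grow the pool -- analogous to assigning more host RAM to FVP."""
        self.capacity_gb += gb

    def reduce_capacity(self, gb):
        """Shrink the pool on the fly: if the remaining capacity cannot
        hold what is currently cached, evict cached data first."""
        new_capacity = max(0, self.capacity_gb - gb)
        if self.used_gb > new_capacity:
            self.used_gb = new_capacity   # evict excess cached blocks
        self.capacity_gb = new_capacity

pool = MemoryPool()
pool.add_capacity(64)      # assign 64 GB of host RAM to the pool
pool.used_gb = 50          # cache fills up over time
pool.reduce_capacity(32)   # shrink on the fly; excess cache is evicted
print(pool.capacity_gb, pool.used_gb)   # -> 32 32
```

The key property the sketch illustrates is that shrinking only affects cached (re-creatable) data, which is why no reboot or VM disruption is required.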
Storage protocol agnostic
FVP now supports whatever shared storage is connected to your vSphere environment. This can be block storage (iSCSI, FC, FCoE), local storage, or file (NFS).
NFS, in particular, is a much anticipated feature that will not disappoint. FVP will connect to NFS devices with the same transparency you are already accustomed to with block storage. With full VMkernel integrated functionality, you can attach an NFS datastore without modifying or rebooting virtual machines, changing your hosts, or reconfiguring your storage medium (e.g. IP addresses, VLANs, mount points, etc.).
I am often asked how the underlying network impacts the performance of the PernixData solution. Well, PernixData has integrated compression functionality into the FVP software to make network performance a non-issue. FVP monitors the workload of the virtual machine in real time, compressing network traffic adaptively. More specifically, it looks at the I/O size and uses an advanced cost-benefit analysis algorithm to determine whether compressing data will reduce replication traffic while ensuring low CPU overhead. With FVP’s compression capabilities, write data being replicated between hosts can be vastly reduced, allowing more data to be sent over the same network connection. In addition, the IOPS achieved when replicating data remains consistent with the performance achieved when replication is not enabled (see below).
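The cost-benefit idea can be sketched as a simple decision rule: compress a write only when the estimated bytes saved on the wire are meaningful and the CPU time spent compressing stays within a budget. The thresholds and function below are hypothetical; FVP's actual algorithm is proprietary:

```python
# Illustrative cost-benefit check for adaptive compression of replication
# traffic. Thresholds and the decision rule are hypothetical, not FVP's.

def should_compress(io_size_bytes, est_compression_ratio,
                    cpu_cost_us_per_kb, max_cpu_us=200):
    """Compress only if the wire savings are meaningful AND the CPU
    time spent compressing stays under a fixed budget."""
    saved_bytes = io_size_bytes * (1 - 1 / est_compression_ratio)
    cpu_us = (io_size_bytes / 1024) * cpu_cost_us_per_kb
    worthwhile = saved_bytes > 0.25 * io_size_bytes   # >= 25% reduction
    affordable = cpu_us <= max_cpu_us
    return worthwhile and affordable

# A large, compressible write is worth compressing...
print(should_compress(64 * 1024, est_compression_ratio=2.0,
                      cpu_cost_us_per_kb=2))   # -> True
# ...while a small, barely compressible write is sent as-is.
print(should_compress(4 * 1024, est_compression_ratio=1.1,
                      cpu_cost_us_per_kb=2))   # -> False
```

This captures why the decision is per-I/O and adaptive: a workload's I/O sizes and compressibility change over time, so a static on/off setting would either waste CPU or leave bandwidth on the table.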
Write Back + 0 replicas (local host only)
Write Back + 1 replica over 1 Gbps Ethernet (uncompressed)
Write Back + 1 replica over 1 Gbps Ethernet (compressed)
Topology aware FVP via User Defined Replica Groups
FVP now allows users to define replica groups, providing a topology aware replication design for accurate fault tolerance. By choosing where replica data is stored, FVP’s fault tolerance capabilities can be aligned with the failure domains in your virtual datacenter design. Replica groups can be used to indicate the boundary of a blade enclosure or different physical sites, for example, allowing you to protect against enclosure- or site-level failures. Another great use case is configuring FVP to keep replica data inside a failure domain (for instance, a blade enclosure) to keep latency as low as possible.
Following FVP’s operational simplicity model, once you assign hosts to a replica group, FVP will automatically select an appropriate replica host to receive write data. When the replica host or the network connectivity fails, FVP automatically assigns another host from the same replica group to become the new destination replica host.
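The failover behavior described above amounts to selecting a healthy peer from the same replica group, so replica traffic never leaves the intended failure domain. Here is a minimal sketch of that selection logic; the function and host names are hypothetical, not FVP's actual implementation:

```python
# Sketch of topology-aware replica selection: on failure, pick a new
# replica host from the *same* replica group, so writes stay inside the
# configured failure domain. Names are illustrative, not FVP's real API.

def select_replica(local_host, replica_groups, failed_hosts):
    """Return a healthy peer from the local host's replica group,
    or None if no healthy peer remains in that group."""
    for group in replica_groups:
        if local_host in group:
            candidates = [h for h in group
                          if h != local_host and h not in failed_hosts]
            return candidates[0] if candidates else None
    return None

# Two replica groups modeling two blade enclosures (hypothetical names).
groups = [["esx01", "esx02", "esx03"], ["esx04", "esx05"]]

print(select_replica("esx01", groups, failed_hosts=set()))       # -> esx02
print(select_replica("esx01", groups, failed_hosts={"esx02"}))   # -> esx03
```

Note that when every peer in the group has failed, the sketch returns `None` rather than falling back to a host in another group, reflecting the boundary a replica group is meant to enforce.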
I am very excited about these new features, and look forward to explaining them in more detail in the coming weeks. In the meantime, don’t forget to check out Satyam’s presentation and demos here.