Introducing PernixData FVP Software, Version 2.5


I am very excited to introduce PernixData FVP software Version 2.5. Similar to previous releases, FVP 2.5 embodies two of the core tenets of PernixData products, namely:

  • Cutting edge innovation
  • Continuous improvement based on anticipated needs of the end user

FVP 2.5 further pushes the envelope on the concept of Server Side Storage Intelligence and the Decoupled Architecture that enables it. Here’s a brief description of the three key features in this release and why we built them.


Distributed Fault Tolerant Memory with Compression (DFTM-Z)

With FVP 2.0, PernixData introduced the capability to use RAM as a fault tolerant acceleration medium; we called it Distributed Fault Tolerant Memory (DFTM). DFTM had two primary goals:

(1) Enable our customers to leverage servers with ever increasing amounts of RAM. As a result, we launched DFTM with support for up to 1 TB of RAM per host in the vSphere cluster.

(2) Change the industry status quo of running only non-persistent VDI in RAM by providing a robust platform for Tier-1 workloads like production databases. We accomplished this by building DFTM for fault tolerance—Write Back mode with replication and Fault Domains to make the environment more resilient against component, network or host failure.

DFTM aligns well with the general trend towards in-memory computing and the public reception has been extremely positive. Encouraged by its success, we decided to further enhance DFTM with DFTM-Z.

DFTM-Z introduces adaptive compression for RAM. This allows more of the application’s working set to fit into RAM, thereby allowing FVP to satisfy even more I/O from the server side. The impact on performance is, of course, tremendous. But doesn’t compression come with a CPU cost, you ask? It does, and as a company that embodies performance, it is something we thought about carefully. The data compression in DFTM-Z introduces minimal compute overhead because it happens asynchronously, so active I/O is never impacted by the cost of compression.
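To make the asynchronous approach concrete, here is a minimal Python sketch of the idea (an illustration only, not FVP's implementation): writes land uncompressed on the fast path, and a background step compresses blocks later, keeping the compressed copy only when it actually saves space.

```python
import queue
import threading
import zlib

class CompressedBlockCache:
    """Conceptual sketch of asynchronous cache compression (not FVP's
    actual implementation). Writes land uncompressed so active I/O
    never pays the CPU cost; a background step compresses blocks later,
    keeping the compressed copy only when it saves space ("adaptive")."""

    def __init__(self):
        self.blocks = {}              # block_id -> (bytes, is_compressed)
        self.pending = queue.Queue()  # blocks awaiting background compression
        self.lock = threading.Lock()

    def write(self, block_id, data):
        # Fast path: store raw and return immediately.
        with self.lock:
            self.blocks[block_id] = (data, False)
        self.pending.put(block_id)

    def read(self, block_id):
        with self.lock:
            data, compressed = self.blocks[block_id]
        return zlib.decompress(data) if compressed else data

    def compress_one(self):
        # One step of the background worker, off the I/O path.
        block_id = self.pending.get()
        with self.lock:
            data, compressed = self.blocks[block_id]
            if not compressed:
                packed = zlib.compress(data)
                if len(packed) < len(data):
                    self.blocks[block_id] = (packed, True)
```

In a real system the compression worker would run continuously on its own thread; the point of the sketch is that `write()` never pays the compression cost.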

To date, the industry has always looked toward compression as a feature that enhances the capacity that primary storage has to offer. PernixData has turned this on its head by leveraging compression for performance instead—a first in the industry and something we are very excited about!

Intelligent I/O Profiling

The design for a decoupled architecture requires a reimagining of how storage has traditionally been deployed in the data center. This is because FVP decouples storage performance from storage capacity by moving storage performance to the server side while leveraging existing investments for capacity. In this type of deployment, a key design aspect to consider is the I/O profiles of the VMs. Certain I/O profiles are well suited to be satisfied from the server side performance tier while others are best handled by the underlying datastore because they do not have stringent performance requirements. A good example of the latter would be database backup jobs. While you definitely want to accelerate all database transactions and database reports, you really don’t want to accelerate the backup job itself.

The Intelligent I/O Profiling feature in FVP 2.5 provides a mechanism to make this distinction. It allows users to bypass I/O profiles that aren’t well suited to the server tier without impacting the server side footprints of VMs. The value of this feature becomes immediately apparent when you consider the status quo in the storage industry, where the storage system does not recognize application boundaries, making it impossible to fine tune storage resources on a per-VM or per-application basis.
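As a rough illustration of the kind of distinction such a feature has to make (a hypothetical heuristic, not FVP's actual policy), a long run of large sequential I/O, like a backup job streaming through a database, looks very different from random transactional I/O:

```python
def should_bypass_acceleration(requests, size_threshold_kb=256, run_threshold=8):
    """Hypothetical bypass heuristic (not FVP's actual policy): flag long
    runs of large sequential I/O, such as a backup job streaming through
    a database, as candidates to skip the server-side acceleration tier.

    `requests` is a list of (size_kb, is_sequential) tuples, where
    is_sequential means the request is contiguous with the previous one.
    """
    run = 0
    decisions = []
    for size_kb, is_sequential in requests:
        run = run + 1 if is_sequential else 0
        decisions.append(size_kb >= size_threshold_kb and run >= run_threshold)
    return decisions
```

Random 4 KB transactional reads never trip this heuristic, while a sustained 512 KB sequential stream does once the run is long enough.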

Role Based Access Control (RBAC)

As we deploy FVP in increasingly larger data centers, we’ve come to realize that a number of different users in the organization would benefit from gaining visibility to the environment through our UI. While it’s a given that the virtualization admin would love to see the positive impact FVP is having on their environment, it is also true that a user such as a DBA would benefit tremendously from seeing how a decoupled architecture is positively impacting the performance of the database as well.

Having said that, it is also important to note that different users require different sets of privileges. While the virtualization admin requires administrator privileges so that she can configure and manage FVP, the DBA in our example above would only require Read access to understand how their databases are benefiting. They also do not need to see the entire environment (perhaps there are other FVP Clusters accelerating other applications); only the database cluster would be of interest.
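A scoped privilege check of this kind can be sketched in a few lines (the role names, privileges, and scopes here are illustrative, not FVP's actual RBAC model):

```python
# Illustrative RBAC check; role names, privileges, and scopes are made
# up for this sketch and are not FVP's actual model.
ROLE_PRIVILEGES = {
    "administrator": {"view", "configure", "manage"},
    "read_only": {"view"},
}

def is_allowed(grants, action, fvp_cluster):
    """A grant pairs a role with the scope it applies to, e.g. a single
    FVP Cluster, or "*" for the whole environment."""
    return any(
        scope in ("*", fvp_cluster) and action in ROLE_PRIVILEGES[role]
        for role, scope in grants
    )
```

The DBA from the example would hold something like `[("read_only", "DB-Cluster")]`: enough to view the database cluster, but not to reconfigure it or to see other FVP Clusters.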

We are introducing RBAC in FVP 2.5 to enable this and other similar use cases. As multi-tenancy becomes mainstream, we expect many of our users to leverage this feature successfully.

I am super excited to announce the general availability of FVP 2.5. A huge shout out to the PernixData R&D team who continue to innovate at an unprecedented clip and bring ever more value to our users!

FVP Linked Clone Optimizations Part 2


In part 1 of this series, I talked about the replica disk optimizations that FVP provides for your linked clone environment. In part 2 the focus will be on the different use cases for persistent and non-persistent disks and how it relates to the acceleration that FVP can provide to your VDI environment.

I often hear confusing remarks about what some may call a persistent desktop and a non-persistent desktop. I have found that this terminology sometimes stems from confusion between a linked clone and a full clone. It also depends on which criterion one uses to define a non-persistent or persistent desktop. For example, if you just look at linked clones, you will notice that several disks are non-persistent or persistent depending on your design decisions. If one looks only at a dedicated linked clone with Windows profile persistence, then some may describe this linked clone as a persistent desktop.

The interesting thing is that Horizon View doesn’t refer to a linked clone in this context. The only time Horizon View refers to a persistent or non-persistent desktop is in the context of refreshing a cloned desktop. In other words, using linked clones doesn’t by itself make yours a non-persistent or even a persistent VDI environment.

I also think some of the confusion revolves around the use of dedicated vs. floating assignment of linked clones. The dedicated configuration assigns each user a dedicated desktop, so, if the user has multiple sessions, they will always reconnect to the same desktop by default. In a floating configuration the user is assigned to a pool of desktops. This means they could log in to a different desktop with each new session. The only way to keep Windows profile persistence in the floating configuration scenario is to use a persona management solution outside the default configuration of View Composer.

So, when an admin decides to use a dedicated linked clone, View Composer gives the option to redirect the Windows profile to a persistent disk. This provides user personalization persistence during refresh, recompose, and rebalance operations. This is an optional setting, as seen in the screenshot. The default disk size is 2 GB.

When one chooses a floating assignment for linked clones, View Composer does not provide an option for a persistent disk. This means that no user personalization will be retained after a refresh, recompose, or rebalance operation. If you choose not to redirect the Windows profile, then the data is stored on the non-persistent delta disk. In either case, both read and write I/O will be accelerated with FVP. However, there will be a longer warm-up time for read acceleration when using the non-persistent delta disk for user profiles, as this will depend on the frequency of refresh, recompose, and rebalance cycles.

Regardless of floating or dedicated assignments and Windows profile persistence, FVP will automatically accelerate reads and writes for all disks that are part of the desktop VM. In the past, choosing when to schedule a refresh, recompose, or rebalance operation was a decision of varying importance. Now, with FVP offloading I/O from the storage array, a refresh, recompose, or rebalance operation has some breathing room to finish without impacting the production environment.

Delta Disk:

The delta disk is probably where most desktop I/O will be seen from a linked clone. The delta disk becomes active as soon as the desktop is booted from the replica disk. Any desktop changes are stored on the delta disk, so the I/O profile could vary drastically depending on the user and the desktop use case. This will not impact FVP negatively, as FVP keeps context on which disk is more active and thus provides the resource intelligence for acceleration no matter the use case.

Disposable Disk:
A default configuration will have a separate non-persistent disposable disk, 4 GB in size. Having this as a separate disk is recommended since it slows the growth of the delta disk between refresh, rebalance, and power-off tasks. This disk contains temp files and the paging file, so FVP can help normalize OS operations by accelerating reads and writes associated with the disposable disk. If you choose not to redirect, then this data will reside on the delta disk. There is no negative impact to FVP with either option. However, it is a best practice to help control the growth of the delta disk between refreshes, so separating out the non-persistent disk will help alleviate bloated delta disks.

Internal Disk:

There is an internal disk that is created with each cloned desktop. This disk is Thick Provision Lazy Zeroed, with a default size of 20 MB. This disk stores Sysprep, QuickPrep, and AD account information, so very little I/O will be realized from this disk. Keep in mind that this disk is not visible in Windows, but it still has a SCSI address, so FVP still recognizes the disk and accelerates any I/O that comes from it. This is another advantage of being a kernel module: FVP recognizes disks not mounted to the Windows OS and will still work its acceleration magic. As you can see, no matter the configuration, FVP will automatically capture all I/O from all disks that are part of a given desktop clone. Depending on the configuration, a desktop clone can have several disks, and knowing when or which disks are active or in need of resources at any given point is not an easy task. This is exactly why PernixData developed FVP: a solution that takes the guesswork out of each disk’s I/O profile. The only decision you are tasked with is whether to accelerate the desktop or not! Talk about seamless and transparent; it doesn’t get any better than that!!

Virtual Desktops and Servers, Playground Friends Again


I took a few minutes to update my LinkedIn profile recently and after a little deliberation, finally decided to describe myself as “responsible for many things technical with just a pinch (hopefully) of marketing.” As part of my new(ish) role at PernixData, I develop technical content and work with application and infrastructure partners to create, test, and certify joint solutions and best practices. No, the title didn’t deceive, this isn’t a sappy career post. Rather, the first of many (hopefully) “behind the scenes” looks at the creation of this content. To stay true to my pinch metaphor, you won’t find an executive summary with flashy results here, just a little salt and pepper added “to taste.”

The list of applications that FVP can accelerate is long (is infinite too strong?), but as I embarked on our first technical content release, the choice of application was easy. As a virtualization architect, I had seen firsthand the devastating results of deploying virtual desktops on existing server storage architecture. When I came onboard at PernixData, I saw how FVP made heroes of administrators who hadn’t stopped fielding end-user calls since hauling away physical desktops and going virtual. With such dramatic customer results (check them out here) and an easy target (start with a layup, not a three-pointer, right?), we set our sights on official designation using the two most prolific virtual desktop infrastructures, Citrix XenDesktop and VMware Horizon View.

A Little History

There’s been much written about the I/O pattern disparity (20/80 read/write ratio, really!) between desktop and server workloads, boot storms, antivirus scans, and the like, so I won’t bore you. As an industry, we collectively became resigned to the fact that if desktop virtualization was going to be successful, it meant deploying on a new dedicated storage array. Not only that, but fulfilling peak demand usually meant an expensive array. Flash storage came along, delivered us many thousands of IOPS, and became the solution of choice for those who could afford it. Some savvy folks figured out that they could even put flash storage in their host servers for the lowest latency. This worked great for non-persistent desktops, and VMware even published a Horizon View reference architecture using this design. Since desktop VMs were deployed onto host-local-only datastores, we lost mobility, but it was a small sacrifice compared to the expensive alternative.

Flash Forward

Today, there are countless published designs for successful XenDesktop and Horizon View deployments. Pick a hardware architecture and there’s a great design with dedicated servers and storage (remember, I said layup). Armed with FVP software, we can do something a bit different though. With this in mind, we did an industry first (I think?) and certified FVP for use with XenDesktop and Horizon View alongside existing virtualized servers (wait, don’t both products require infrastructure servers anyway?). Our hardware configuration was the same for both product tests. First, we chose popular, proven (greybeards?) architectures from Cisco (UCS B230 M2) and NetApp (FAS 3240). We then built out VMware vSphere, Microsoft Windows servers & desktops, and Login VSI components using best practices from each vendor. We knew that based on published reports, when armed with dedicated storage, the two servers in our tests should be capable of running close to 300 Windows 7 desktops before exhausting available CPU resources. We used View Composer and XenDesktop MCS, respectively, to provision 350 non-persistent desktops. For those unfamiliar with Login VSI, it’s the de facto standard for testing virtual desktop capacity. Login VSI’s VSImax benchmark provides a common language for interpreting and comparing performance test results. Full hardware and software details are available in our PernixData FVP software and VMware Horizon (with View) Reference Architecture.

To accurately represent desktop and server co-existence, we needed a server “playground bully,” so we layered on some storage load via the VMware I/O Analyzer to represent existing virtual servers. This methodology actually deserves a post all on its own (coming soon), but for purposes of this post I’ll summarize the workload as 8K & 16K, 60% random, 75/25 read/write I/Os. With these existing workloads running we then performed Login VSI tests across our two servers and compared VSImax with and without FVP. For the tests with FVP, we used server RAM and FVP’s Distributed Fault Tolerant Memory (DFTM) technology to accelerate the desktops. Because FVP includes optimization for linked clones and master image disks (see esteemed colleague posts here and here for more details), we only dedicated 128 GB of RAM per host for this purpose. Testing showed this to be more than enough to accelerate the unique working set used by the desktop VMs during the Login VSI runs. We also showed our vSphere, Windows, MS SQL, Horizon View, and XenDesktop infrastructure VMs some love by accelerating them using server-side flash with FVP (remember FVP isn’t a desktop-only solution). This additionally ensured that beautiful playground co-existence ensued.

The Results

As we suspected, in our tests without FVP, storage latency caused application response times to reach Login VSI limits and indicate system capacity before either CPU or memory saturation had been reached. After all, vendors sell dedicated desktop arrays for a reason. Specifically, in the Horizon View tests, we reached a VSImax of 181 sessions. This was well below the figures reported with a dedicated array for desktop purposes. But could we solve this problem without a dedicated storage array? Yes, in fact, with FVP enabled, we hit a VSImax of 298 sessions before CPU resources were pushed to the point of saturation and application response times hit their usable upper limits. At this point, memory capacity was still available and storage latency was still well within limits. Our customers had seen this many times in the real world, so it was actually no surprise. We had just proven it more scientifically. What this means (here comes the salt) is that with FVP in the architecture, the host server now becomes a linearly scalable resource. If I need to run more virtual desktops, I just deploy additional FVP-enabled servers. Best of all, because we’ve decoupled storage performance from capacity, virtual desktops and servers can now happily co-exist on the same storage array.

Icing on the Cake

All of our testing was independently validated by Citrix and VMware and ultimately resulted in both Citrix Ready certification and a PernixData FVP software and VMware Horizon (with View) Reference Architecture.

One final note… you won’t find this in our official publications, but as part of our certification process we additionally collected in-depth FVP statistics to share with our engineering team. After reviewing the results and performing a few additional tests we implemented several optimizations due in our next FVP release. Using the exact same configurations with these new software optimizations we were able to extend VSImax to 325 sessions!

FVP Linked Clone Optimizations Part 1


PernixData has some of the best minds in the industry working to provide a seamless experience for all differing types of workloads. Operational simplicity in a platform such as ours doesn’t mean there is a lack of complex functionality. Au contraire, FVP is truly a multiplex system that can dynamically adjust and enhance numerous workload attributes. One popular example of this is the acceleration and optimization of Horizon View’s Linked Clone technology.

The beauty of our linked clone optimizations is that they are completely seamless. This means you will not have to make any configuration changes to your existing VDI environment nor modify any FVP settings. No changes are needed to the Horizon View Manager or other Horizon View products (e.g. ThinApp, Persona Management, Composer, or client). It also doesn’t matter whether you are using a persistent or non-persistent model for your linked clones; FVP will accelerate and optimize your entire virtual desktop environment.

It’s common to see a virtual desktop with many disks, which may include an operating system disk, a disk for user/profile data, and a disk for temp data. No matter how many disks are attached to a virtual desktop, it doesn’t affect how FVP accelerates I/O from the virtual machine. FVP’s intelligence automatically determines where I/O is coming from and which disk is in need of additional resources for acceleration. The admin only decides which desktop (Linked or Full Clone) is part of the FVP cluster. This could comprise a mix of persistent and non-persistent disks, as seen in the diagram. FVP will automatically accelerate all I/O coming from any persistent or non-persistent disk that is part of the desktop clone.


As seen in the above diagram, a linked clone environment can comprise several disks depending on the configuration. This by far is the most confusing part, IMHO. When you create a linked clone for the first time, you realize you have all these different disks attached to your clone, and you may have no idea what they are for or even why they have differing capacities. Why are some persistent and some non-persistent, and which configuration works best for FVP acceleration? I will save these topics and more for part 2. However, the replica (base) disk is what I’m going to focus on in this post.

In a Horizon View environment a full clone and a linked clone both have an OS disk, except that a linked clone will use a non-persistent delta disk for desktop changes and a replica (base) disk from the parent snapshot for image persistence. This delta disk will be active with any desktop operations, which makes it a prime candidate for acceleration.
In addition to accelerating reads and writes for cloned desktops, FVP will automatically identify the replica disks in a linked clone pool and apply optimizations to leverage a single copy of data across all clones mapped to said replica disk.

Note: Citrix’s XenDesktop technology works essentially the same way with FVP; instead of a replica disk, it’s called a personal vDisk.

As seen in the screenshot below, FVP will automatically place the replica disk (Linked Clone base disk) on an accelerated resource when using linked clones. In addition, FVP only puts the active blocks of the desktop clone on the accelerated resource, which lowers the capacity required for the replica disk on the accelerated resources. After the first desktops in the pool boot, all ensuing clones will take advantage of reading the common blocks from the replica disk on the accelerated resource. If any blocks are requested that are not part of the replica disk, FVP will again fetch only the requested block and add it to the already created replica disk. The same is true for any newly created linked clone pools. A new replica disk will be added to the acceleration resource for each new linked clone pool. This is visible under FVP’s usage tab as a combined total of all active replica disks for a given acceleration resource. As you can imagine, adding only the active blocks of the replica disk provides a huge advantage when using memory as an acceleration resource. Windows 7 boot times of 5-7 seconds are not uncommon in this scenario.



FVP maintains these optimizations during clustered hypervisor operations such as vMotion. This means that if desktop clones are migrated from one host to another, the desktop clones are able to leverage FVP’s Remote Access clustering to read blocks from the replica disk on the host they migrated from. This only happens on a temporary basis, as FVP will automatically create a new active-block replica disk on the new primary host’s acceleration resource. Through FVP’s Remote Access clustering, and as desktop clones reboot, a new replica disk for the desktop clones on the new host is created and updated for local read access efficiency.
If the desktop clones are in Write Back mode, write acceleration continues automatically once the desktop clones migrate successfully to a new host, irrespective of the replica disk optimizations.

The diagram below outlines the process where a replica disk is first created on a primary host and then the first desktop clones migrate to a new host. This process of creating a new replica disk on the new host happens only one time per linked clone pool; all subsequent cloned desktops matched to the designated replica disk will gain the benefit during any future migrations.


When the desktop clones are booted, (1) the clones will request blocks from the assigned replica disk in the pool. FVP will intercept the requested blocks, which will be (2) copied into the acceleration resource that has been assigned to the FVP cluster. All future desktop clone boot processes will read the blocks from the acceleration resource instead of traversing to the assigned datastore where the full replica disk resides. If any changes are made to the original replica disk through a recompose or rebalance operation, then this process will start all over again for the linked clones. (3) When the desktop clones are migrated to a new host through DRS, HA, or a manual vMotion operation, (4) FVP will send read requests to the host the desktop clones migrated from. (5) The blocks are copied back to the new host’s acceleration resource, (6) so that any future requests are acknowledged from the new local host. A reboot of any linked clone during this time will also copy all common blocks into the new local host’s acceleration resource.
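The numbered flow above can be sketched in a few lines of Python (illustrative logic only, not FVP code): a miss on the new host is first served remotely from the previous host's cache and copied locally, so later reads stay local.

```python
def read_after_vmotion(local_cache, remote_cache, datastore, block_id, stats):
    """Sketch of the numbered flow above (an illustration, not FVP code).

    After a vMotion, a read that misses the new host's cache is served
    remotely from the previous host (steps 4-5) and copied locally so
    future requests are acknowledged from the new local host (step 6).
    """
    if block_id in local_cache:
        stats["local"] += 1
    elif block_id in remote_cache:
        stats["remote"] += 1
        local_cache[block_id] = remote_cache[block_id]  # copy back locally
    else:
        stats["datastore"] += 1                         # fall through to the array
        local_cache[block_id] = datastore[block_id]
    return local_cache[block_id]
```

The first read of a block after migration counts as a remote hit; the second read of the same block is already local.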

As you can see from a VDI perspective FVP can truly make a difference in your datacenter. Gone are the days when one had to architect a separate environment to run virtual desktops. FVP can break down the technology silos and allow the virtual admin to truly scale performance on demand without the worry of storage performance constraints.

PernixData FVP & Linked-Clones – The hidden gem


In this post I want to introduce you to a “hidden gem” in FVP that can help put your VDI project in the fast lane. One piece of a successful VDI project is user acceptance and usability, which is often tightly coupled to the responsiveness of a virtual desktop and its applications. This responsiveness in turn is defined by the time (measured in milliseconds) that I/O operations of those virtual desktops take to complete.

I can remember some early discussions about read intensive VDI workloads, but it turned out that VDI workloads are way more write intensive than expected. So to ensure an optimal user experience, it’s important to accelerate not only reads but also writes in an efficient way.

Often the answer to this challenge is to add more spinning disks or expensive flash to the existing storage infrastructure, or to set up a new silo in the form of an All Flash Array or a hyper-converged block, just to run the VDI environment.

Using FVP, an administrator has the choice between SSDs, PCIe flash cards, or even memory as the server side media to speed up those latency sensitive I/O operations while leveraging the existing storage infrastructure. This moves the performance directly into the hypervisor and decouples it from the storage capacity.

Sometimes the existing server hardware introduces design constraints which limit the possible options for an acceleration medium. For example, blade servers usually can’t take advantage of PCIe flash cards, and VDI hosts often have rather high memory utilization.

But especially for virtual desktops, memory is actually an obvious choice, for reasons like its ultra-low latency and its consistent performance regardless of the block size the VM is writing.

So let’s see if memory can be an option despite the fact that you may not have tons of memory left.

Linked clones are virtual machines whose virtual disks are linked to a so-called “golden image,” also known as a “replica,” which holds the actual operating system, including applications, corporate settings, etc. The golden image of course is read only, but Windows doesn’t work if it can’t write to disk. To fix that, the linked clones write their changes to individual virtual disks. Optimally, both the reads from the replica as well as the writes to the individual disks should be accelerated to ensure an optimal user experience.

That’s exactly what FVP does out of the box; there is no need to configure anything to achieve this. This of course includes support for VMware vMotion, VMware HA, etc.

But I would like to point out how efficiently FVP deals with those linked clones. Out of the box, FVP recognizes linked clones and, more importantly, the base disk of the replica when accelerating the datastores that store those objects.

Instead of building an individual memory (or flash) footprint for all the reads of every single virtual machine, FVP promotes just individual blocks. So, for example, if VM A reads block Z from the replica disk (on the persistent storage), this particular block will be promoted to the FVP layer. If VM B then also reads block Z, it is already there and can be served from the local acceleration media. FVP doesn’t promote the same block (from the replica) twice. So in essence, all linked clones on a host can share the read cache content. You can see this linked-clone optimization as a form of de-duplication.
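The promotion behavior described above can be sketched as a tiny shared read cache (an illustration of the concept, not FVP internals); note that the second clone's read never touches the backing datastore:

```python
class ReplicaReadCache:
    """Sketch of the linked-clone read optimization (an illustration of
    the concept, not FVP internals): a block read from the shared
    replica disk is promoted once and then served to every clone."""

    def __init__(self, replica_disk):
        self.replica_disk = replica_disk  # block_id -> data, on the datastore
        self.promoted = {}                # server-side copy, shared by all clones
        self.datastore_reads = 0

    def read(self, vm_name, block_id):
        # vm_name is irrelevant to the cache: all clones share one copy.
        if block_id not in self.promoted:
            self.datastore_reads += 1     # only the first reader hits the array
            self.promoted[block_id] = self.replica_disk[block_id]
        return self.promoted[block_id]
```

If VM A reads block Z and then VM B reads block Z, only one datastore read ever happens: the de-duplication effect described above.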


Writes of every single VM will be accelerated individually, as depicted above. Those individually written blocks on the local acceleration media can be used to directly serve subsequent reads.

As you can see in the screenshot below, the footprint of the virtual desktops is rather low compared to the “Linked Clone Base Disk”. The allocated megabytes of the linked clones are the individual writes.

This reduces the required amount of memory or Flash capacity to accelerate a whole lot of virtual desktops.

So let’s assume the thin provisioned size of the replica disk is 20 GB and you have 100 virtual desktops per host; you only need a 20 GB memory or flash footprint to accelerate all reads of those desktops. These 20 GB would be sufficient to keep virtually the whole golden image on the local acceleration media to speed up all VMs on a particular host.
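The arithmetic above can be captured in a one-line model (back-of-the-envelope only, with hypothetical parameter names): with the shared replica optimization the read footprint is independent of the desktop count, whereas per-clone caching would multiply it.

```python
def read_cache_footprint_gb(replica_gb, desktops, shared_replica=True):
    """Back-of-the-envelope model for the example above (hypothetical
    parameter names): with the shared replica optimization the read
    footprint per host is independent of the desktop count; a naive
    per-clone cache would multiply it instead."""
    return replica_gb if shared_replica else replica_gb * desktops
```

For the 20 GB replica and 100 desktops above, the shared model needs 20 GB per host versus 2,000 GB for per-clone copies (the individual writes come on top in both cases).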

Basically, this applies to all non-persistent VDI deployments based on linked clone technology, no matter if VMware Horizon View or Citrix XenDesktop, where FVP recently has been verified as Citrix XenDesktop Ready.

With this hidden gem I would like to close not only this post but also the year 2014. I hope you’ve enjoyed it like I did, so have a great start and hopefully see you in 2015.

Power through Partnership


When your company has developed the premier product in server side storage intelligence, one that deploys seamlessly without disruption to a customer’s environment and decouples storage performance from capacity to revolutionize virtualized data centers, you have two strong methods for conveying that value to end users and the partner community. The first is through demonstration.  The second is through Alliance Partner certifications.

Regarding the first, PernixData FVP software is easily experienced by our prospective customers every day through our free trial offer. In less than 20 minutes, any company can see on a first hand basis how FVP speeds VM performance and maximizes SAN utilization in their own environments.

With respect to the second, PernixData has taken great strides to establish marquee partnerships in the storage and virtualization space. For example, since the company’s inception it has been an Elite Level Technology Alliance Partner with VMware. This status has enabled our development team to ensure we are aligned with VMware’s launch cycles. Additionally, we are a VMware PVSP (Partner Verified & Supported Product), which provides PernixData customers assurance that our product is designed with VMware’s demanding support standards in mind, and is always interoperable with vSphere as new versions emerge.

Similarly, we developed the PernixDrive Program to create a vast ecosystem to promote decoupled storage architectures. Through rigorous lab testing and joint sales/marketing activities with vendors like Kingston, Intel, Micron, HGST and more, PernixData provides assurances to our end-users that new decoupled storage solutions work as advertised.

In addition, PernixData made two recent announcements that have expanded our partner ecosystem. We joined both the VCE (EMC) Technology Alliance Program and the Citrix Ready Partner Program as a Premier Partner. Our product has completed the rigorous interoperability requirements that VCE mandates for achieving the Vblock Ready designation. Likewise, FVP passed the intense interoperability process to achieve Citrix Ready status. These certifications provide customers and partners alike with the assurance that our products comply with our Alliance Partners’ high standards and deliver the strong value we promise.

For an infrastructure software provider whose offering can accelerate any workload across any VMware virtual server in a fault tolerant manner, these designations give resellers and end users confidence that FVP software is a datacenter-ready solution that interoperates with the industry’s leading solutions while yielding high value. Look for more of these types of announcements as we build on and extend our lead as the premier platform for decoupled storage.

FVP Software is Vblock Ready


It was only a matter of time before the Vblock Systems that revolutionized IT infrastructure formally met FVP software, the premier platform for server side storage intelligence. Since FVP software utilizes server resources to become a SAN’s best friend and Vblock Systems improve efficiency, speed, and reliability, it was a match made in heaven. As such, today I’m pleased to announce that FVP software is now Vblock Ready certified!

Technically, VCE describes this certification as “a comprehensive test that assures customers that a partner solution has met all entrance, integration, and interoperability criteria and are technically ready for use with Vblock Systems.” Practically, this means that new and existing Vblock customers (some already using FVP software) have assurance that VCE supports their decision to implement FVP and that they aren’t sacrificing the pre-integration, testing, and validation value that Vblock Systems provide when they install FVP software to create a low latency I/O acceleration tier.

Although an important achievement in itself, the Vblock Ready technical designation marks just the beginning of the PernixData/VCE partnership. FVP software not only adds an additional level of performance to existing Vblock Systems; working through mutual Solution Providers, customers will now have the ability to configure FVP software as part of the modular architecture of new Vblock Systems. In the coming months we’re looking forward to publishing, among other things, new reference architectures that use Distributed Fault Tolerant Memory (DFTM) to handle the most demanding applications and VCE Vision to perform automated FVP Cluster operations with additional ease. Now, with the addition of FVP software, there’s no application that can’t be virtualized on what VCE calls “the world’s most advanced converged infrastructure.”

Want more information? Here is an FAQ we put together on the Vblock Ready certification and a joint FVP/Vblock solution brief.

Distributed Fault Tolerant Memory (DFTM) in FVP 2.0


I have spent a considerable amount of time designing operating systems, databases and other enterprise software to leverage fast media like RAM. For example, while in graduate school I researched TLB design and evaluated TLB management algorithms for a variety of workloads. After that, I participated in the design and development of operating systems’ virtual memory and large scale Relational Database Management Systems (RDBMS), including in-memory databases.

Through these experiences I’ve learned two key lessons about using RAM:

  1. The cost of navigating the memory hierarchy is prohibitively expensive for most applications. As a result, applications prefer to leverage RAM for data accesses whenever possible. Databases, for example, have used buffer caches from the early days to cache most recently used data.
  2. RAM, in contrast to disk, is a volatile medium. This means that applications that care about data integrity and fault tolerance are forced to use a non-volatile medium, usually disk, in conjunction with RAM. Databases, for example, require that transaction commits happen on disk to meet ACID requirements.

You’ll notice that these two lessons are in direct conflict with each other. On the one hand, the prohibitive cost of navigating the memory hierarchy means applications prefer never touching disk for accesses. Yet, RAM’s volatility and lack of fault tolerance means applications are forced to use disk in conjunction with RAM. This latter point is a difficult one to dismiss, which means RAM has played a limited role in application performance to date.
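To make the two lessons concrete, here is a toy Python sketch (not how any particular database, or FVP, is implemented) of a store that serves reads from RAM but forces every committed write through a disk log; the `DurableKV` class, its log format and the key names are all invented for illustration:

```python
import os
import tempfile

class DurableKV:
    """Toy key-value store: reads are served from RAM when possible
    (lesson 1), but every committed write also goes to a disk log,
    because RAM alone is volatile (lesson 2)."""

    def __init__(self, log_path):
        self.cache = {}          # RAM: fast but volatile
        self.log_path = log_path

    def put(self, key, value):
        # The commit must reach non-volatile media before we acknowledge,
        # mirroring how databases flush the transaction log on commit.
        with open(self.log_path, "a") as log:
            log.write(f"{key}={value}\n")
            log.flush()
            os.fsync(log.fileno())   # force it to stable storage
        self.cache[key] = value      # then populate the RAM cache

    def get(self, key):
        if key in self.cache:        # RAM hit: no disk access needed
            return self.cache[key]
        # Cache miss: replay the log from disk (the slow path)
        with open(self.log_path) as log:
            for line in log:
                k, _, v = line.rstrip("\n").partition("=")
                self.cache[k] = v
        return self.cache.get(key)

path = os.path.join(tempfile.mkdtemp(), "commit.log")
store = DurableKV(path)
store.put("balance:alice", "100")
store.cache.clear()                  # simulate a crash wiping RAM
print(store.get("balance:alice"))    # recovered from the disk log
```

The design choice here is exactly the conflict described above: the `fsync` on every commit is what makes the write durable, and it is also what makes disk the bottleneck that applications would rather avoid.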

Recent trends in the industry have begun to address this dichotomy, albeit incompletely. As multi-terabyte servers become more common, for example, application vendors are bringing to market purpose-built appliances with large RAM footprints and flash to alleviate the concerns I raised above. Unfortunately, these products create huge operational and management overheads because they introduce new silos in the data center that fly in the face of standardization via virtualization.

As PernixData looked to leverage server side RAM for storage acceleration, we wanted to solve all of these limitations.  More specifically, we wanted to:

  1. Allow applications to leverage RAM as an acceleration medium for all data accesses, both reads and writes.
  2. Allow applications to not worry about fault tolerance when using RAM. In other words, we wanted to provide our customers with the same fault tolerance and data integrity guarantees for RAM that they are used to with non-volatile medium such as disk (or flash).
  3. Fully integrate with a customer’s existing investments in virtualization, servers and storage to avoid operational and management overhead.
  4. Allow our customers to leverage servers with very large amounts of RAM in a scalable way.

I am proud to say that we have achieved all these goals with Distributed Fault Tolerant Memory (DFTM).  DFTM is a new feature in release 2.0 of FVP software that lets you cluster server RAM into a fault tolerant acceleration tier.  For the first time ever, enterprises can now use a volatile medium in a non-volatile way with no operational or management overhead. In addition, FVP supports the ability to leverage up to 1 TB of RAM per host for VM acceleration. This means that even the most resource intensive applications, such as very large databases, can now both be virtualized and use RAM for storage performance in a fault tolerant way.

Here’s to tomorrow’s data center!

Database workload characteristics and their impact on storage architecture design – part 2 – Data pipelines


Welcome to part 2 of the database workload characteristics series. Databases are considered to be among the biggest I/O consumers in the virtual infrastructure. Database operations and database design are a study unto themselves, but I thought it might be interesting to take a small peek beneath the surface of database design land. I turned to our resident database expert Bala Narasimhan, PernixData’s Director of Products, to provide some insights about database designs and their I/O preferences.

Question 2: You mentioned data pipelines in your previous podcast, what do you mean by this?

What I meant by data pipeline is the process by which data flows through the enterprise. Data is not a static entity in the enterprise; it flows continuously and at various points is used for different things. As mentioned in part 1 of this series, data usually enters the pipeline via OLTP databases, and this can be from numerous sources. For example, retailers may have Point of Sale (POS) databases that record all transactions (purchases, returns etc.). Similarly, manufacturers may have sensors that are continuously sending data about the health of machines to an OLTP database. It is very important that this data enters the system as fast as possible. In addition, these databases must be highly available, support high concurrency and deliver consistent performance. Low latency transactions are the name of the game in this part of the pipeline.

At some point, the business may be interested in analyzing this data to make better decisions. For example, a product manager at the retailer may want to analyze the Point of Sale data to better understand what products are selling at each store and why. In order to do this, he will need to run reports and analytics on the data. But as we discussed earlier, these reports and analytics are usually throughput bound and ad-hoc in nature. If we run these reports and analytics on the same OLTP database that is ingesting the low latency Point of Sale transactions then this will impact the performance of the OLTP database. Since OLTP databases are usually customer facing and interactive, a performance impact can have severe negative outcomes for the business.

As a result, what enterprises usually do is Extract the data from the OLTP database, Transform the data into a new shape and Load it into another database, usually a data warehouse. This is known as the ETL process. To do the ETL, customers use a solution such as Informatica (ETL) (3) or Hadoop (4) between the OLTP database and the data warehouse. Sometimes customers will simply pull in all the data from the OLTP database (a read intensive, larger block size, throughput sensitive query) and then do the ETL inside the data warehouse itself. Transforming the data into a different shape requires reading the data, modifying it, and writing the data into new tables. You’ve most probably heard of nightly loads into the data warehouse. This is the process being referred to!

As we discussed before, OLTP databases may have a normalized schema and the data warehouse may have a more denormalized schema such as a Star schema. As a result, you can’t simply do a nightly load of the data directly from the OLTP database into the data warehouse as is. Instead you have to Extract the data from the OLTP database, Transform it from a normalized schema to a Star schema and then Load it into the data warehouse. This is the data pipeline. Here is an image that explains this:


In addition, there can be continuous small feeds of data into the data warehouse via trickle loading of small subsets of data, such as the most recent or freshest data. By using the freshest data in your data warehouse, you make sure that the reports you run and the analytics you do are not stale, enabling the most accurate decisions.

As mentioned earlier, the ETL process and the data warehouse are typically throughput bound. Server side flash and RAM can play a huge role here because the ETL process and the data warehouse can now leverage the throughput capabilities of these server side resources.
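As a toy illustration of the Extract, Transform, Load steps described above, the following Python sketch denormalizes OLTP-style rows into a star-schema-like fact table. The table names, columns and data are hypothetical, invented purely for illustration:

```python
# Extract: rows as they might sit in a normalized OLTP database.
sales = [  # one row per POS transaction
    {"txn_id": 1, "customer_id": 10, "product_id": 7, "amount": 19.99},
    {"txn_id": 2, "customer_id": 11, "product_id": 7, "amount": 19.99},
]
customers = {10: {"name": "Ann", "region": "West"},
             11: {"name": "Bob", "region": "East"}}
products = {7: {"name": "Widget", "category": "Tools"}}

# Transform: denormalize into a star-schema-style fact table by
# joining each transaction with its dimension attributes.
fact_sales = []
for row in sales:
    cust = customers[row["customer_id"]]
    prod = products[row["product_id"]]
    fact_sales.append({
        "txn_id": row["txn_id"],
        "customer_name": cust["name"],
        "region": cust["region"],
        "product_name": prod["name"],
        "category": prod["category"],
        "amount": row["amount"],
    })

# Load: in a real pipeline this batch would be bulk-inserted into the
# data warehouse (e.g. the nightly load mentioned above).
print(len(fact_sales), "fact rows ready to load")
```

Even in this toy form you can see why ETL is throughput bound: every source row is read, reshaped and rewritten, so the faster the sequential reads and writes, the faster the load window closes.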

Using PernixData FVP

Some specific, key benefits of using FVP with the data pipeline include:

  • OLTP databases can leverage the low latency characteristics of server side flash & RAM. This means more transactions per second and higher levels of concurrency all while providing protection against data loss via FVP’s write back replication capabilities.
  • Trickle loads of data into the data warehouse get tremendously faster in Write Back mode because new rows are added to the table as soon as they touch the server side flash or RAM.
  • The reports and analytics may execute joins, aggregations, sorts etc. These require rapid access to large volumes of data and can also generate large intermediate results. High read and write throughput are therefore beneficial and having this done on the server right next to the database will help performance tremendously. Again, Write Back is a huge win.
  • Analytics can be ad-hoc, so any tuning the DBA has done may not help. Having the base tables on flash via FVP can help performance tremendously for ad-hoc queries.
  • Analytics workloads tend to create and leverage temporary tables within the database. Using server side resources enhances both read and write performance on these temporary tables.
  • In addition, there is a huge operational benefit. We can now virtualize the entire data pipeline (OLTP databases, ETL, data warehouse, data marts etc.) because we are able to provide consistently high performance via server side resources and FVP. This brings together the best of both worlds: leverage the operational benefits of a virtualization platform, such as vSphere HA, DRS and vMotion, and standardize the entire data pipeline on it without sacrificing performance at all.

Database workload characteristics and their impact on storage architecture design – part 1


Frequently, PernixData FVP is used to accelerate databases. For many, databases are a black box. Sure, we all know they consume resources like there is no tomorrow, but can we make some general statements about database resource consumption from a storage technology perspective? I asked Bala Narasimhan, our Director of Products, a couple of questions to get a better understanding of database operations and how FVP can help provide the performance the business needs.

The reason I asked Bala about databases is his rich background in database technology. After spending time at HP writing kernel memory management software, he moved to Oracle, where he was responsible for the SGA and PGA memory components. One of his proudest achievements was building the automatic memory management in Oracle 10g. He then worked at a startup where he rewrote the open source database Postgres into a scale-out, columnar relational database for data warehousing and analytics. Bala recently recorded a webinar on eliminating performance bottlenecks in virtualized databases. Bala’s Twitter account can be found here. As databases are an extensive topic, this article is split into a series of smaller articles, making it more digestible.

Question 1: What are the various databases use cases one typically sees?

There is a spectrum of use cases, with OLTP, Reporting, OLAP and Analytics being the most common. Reporting, OLAP (online analytical processing) and Analytics can be seen as part of the data warehousing family. OLTP (online transaction processing) databases are typically aligned with a single application and act as an input source for data warehouses. A data warehouse can therefore be seen as a layer on top of the OLTP database, optimized for reporting and analytics.

When you set up architectures for databases, you have to ask yourself: what are you trying to solve? What are the technical requirements of the workload? Do you need to retrieve individual records quickly, or read a lot of data as fast as possible? In other words, is the application latency sensitive or throughput bound? In the tables below, moving from left to right, the average block size grows. As a rule of thumb, a larger block size indicates a more throughput bound workload rather than a latency sensitive one. From left to right, the database designs also go from normalized to denormalized.

OLTP → Reporting → OLAP → Analytics
Database Schema Design

OLTP is an excellent example of a normalized schema. A database schema can be seen as a container of objects; it allows you to logically group objects such as tables, views and stored procedures. When using a normalized schema, you split large tables into smaller tables. For example, let’s assume a bank database has only one table that logs all activities by all its customers. This means there are multiple rows in this table for each customer. Now if a customer updates her address, you need to update many rows for the database to be consistent. This can impact the performance and concurrency of the database. Instead, you could build out a schema with multiple tables, where only one table holds customer details. This way, when the customer changes her address you only need to update one row, which improves concurrency and performance. If you normalize your database enough, every insert, delete and update statement will only hit a single table. These are very small updates that require fast responses: small blocks, very latency sensitive.

While OLTP databases tend to be normalized, data warehouses tend to be denormalized and therefore have fewer tables. For example, when querying the database to find out who owns account 1234, it needs to join two tables, the Account table with the Customer table. In this example it is a two-way join, but data warehousing systems can do many-way joins (that is, joining multiple tables at once), and these are generally throughput bound.
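A minimal sketch of these two points, using SQLite with a hypothetical bank schema (the table and column names are invented for illustration): the normalized address update touches exactly one row, while answering "who owns account 1234?" requires a two-way join:

```python
import sqlite3

# Hypothetical normalized bank schema: customer details live in
# exactly one table, accounts reference the customer by key.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Customer (customer_id INTEGER PRIMARY KEY,
                           name TEXT, address TEXT);
    CREATE TABLE Account  (account_id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES Customer);
    INSERT INTO Customer VALUES (1, 'Alice', '1 Old Street');
    INSERT INTO Account  VALUES (1234, 1);
    INSERT INTO Account  VALUES (5678, 1);
""")

# Normalized update: the address lives in exactly one row, no matter
# how many accounts reference this customer.
cur = db.execute(
    "UPDATE Customer SET address = '2 New Road' WHERE customer_id = 1")
print(cur.rowcount, "row updated")

# The price of normalization: "who owns account 1234?" is a two-way join.
(owner,) = db.execute("""
    SELECT c.name
    FROM Account a JOIN Customer c ON c.customer_id = a.customer_id
    WHERE a.account_id = 1234
""").fetchone()
print(owner)
```

The single-row update is why normalized schemas favor small, latency sensitive I/O, while the join is why denormalized warehouse schemas exist: they pre-pay the join cost to make large reads cheap.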

Business Processes

An interesting way to look at a database is its place in a business process. This provides insight into the availability, concurrency and response requirements of the database. Typically, OLTP databases sit at the front of the process, in customer-facing roles; dramatically put, they are in the line of fire. You want fast responses; you want to read, insert and update data as fast as possible, and therefore these databases are heavily normalized for the reasons described above. When the OLTP database performs slowly or is unavailable, it will typically impact revenue-generating processes. Data warehousing operations generally occur away from customer-facing operations. Data is typically loaded into the data warehouse from multiple sources to give the business insight into its day-to-day operations. For example, a business may want to understand from its data how it can drive quality and cost improvements. While we talk about a data warehouse as a single entity, this is seldom the case. Many times you will find that a business has one large data warehouse and many so-called ‘data marts’ that hang off it. Database proliferation is a real problem in the enterprise, and managing all these databases and providing them the storage performance they need can be challenging.

Let’s dive into the four database types to understand their requirements and the impact on architecture design:


OLTP workloads have a good mix of read and write operations. They are latency sensitive and require support for high levels of concurrency. A good example of concurrency is ATM machines. Each customer at an ATM generates a connection executing a few simple instructions, and a bank typically has a lot of ATMs servicing its many customers concurrently. If a customer wants to withdraw money, the process needs to read the customer’s records in the database, confirm that he or she is allowed to withdraw the money, and then record (write) the transaction. In DBA jargon, that is a SQL SELECT statement followed by an UPDATE statement. A proper OLTP database should be able to handle a lot of users at the same time, preferably with low latency. It is interactive in nature, meaning that latency impacts user experience. You cannot keep the customer waiting for a long time at the ATM or the bank teller. From an availability perspective you cannot afford to have the database go down; connections cannot be lost, it just needs to be up and running all the time (24×7).
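The SELECT-then-UPDATE pattern described above can be sketched as follows; the schema, account numbers and amounts are illustrative only:

```python
import sqlite3

# Hypothetical one-table schema for the withdrawal example.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (1, 500)")

def withdraw(conn, account_id, amount):
    # One transaction: SELECT to check funds, then UPDATE to record
    # the withdrawal. `with conn` commits on success, rolls back on error.
    with conn:
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        if balance < amount:
            raise ValueError("insufficient funds")
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
        return balance - amount

print(withdraw(db, 1, 200))   # new balance after withdrawing 200
```

Each such transaction touches only a row or two, which is exactly the small-block, latency sensitive I/O profile attributed to OLTP above; the concurrency challenge comes from thousands of these tiny transactions running at once.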

                     OLTP     Reporting  OLAP  Analytics
Availability         +++
Concurrency          +++
Latency sensitivity  +++
Throughput oriented  +
Ad hoc               +
I/O Operations       Mix R/W

Reporting databases experience predominantly read intensive operations and require throughput more than anything else. Concurrency and availability are not as important for reporting databases as they are for OLTP. Characteristically, the workload consists of repeated reads of data. Reporting is usually done when users want to understand the performance of the business: how many accounts were opened this week, how many accounts were closed, is the private banking account team hitting its quota of acquiring new customers? Think of reporting as predictable requests: the user knows what data he wants to see and has a specific report design that structures the data in the order needed to understand these numbers. Because the report is repetitive, the DBA can design and optimize the database and schema so that the query executes predictably and efficiently. Typical database schema designs for reporting include the Star Schema and the Snowflake Schema.

As it serves back office processes, availability and concurrency are not strict requirements of this kind of database; it is enough that the database is available when the report is required. Enhanced throughput, however, helps tremendously.

                     OLTP     Reporting       OLAP  Analytics
Availability         +++      +
Concurrency          +++      +
Latency sensitivity  +++      +
Throughput oriented  +        +++
Ad hoc               +        +
I/O Operations       Mix R/W  Read Intensive

OLAP can be seen as the analytical counterpart of OLTP. Where OLTP is the original source of data, OLAP is the consolidation of data, typically originating from various OLTP databases. A common remark in the database world is that OLAP provides a multi-dimensional view, meaning that you drill down into the data coming from various sources and analyze it along different attributes. This workload is more ad-hoc in nature than reporting, as you slice and dice the data in different ways depending on the nature of the query. The workload is primarily read intensive and can run complex queries involving aggregations across multiple databases, and it is therefore throughput oriented. An example of an OLAP query would be determining how many additional insurance services gold credit card customers signed up for during the summer months.

                     OLTP     Reporting       OLAP            Analytics
Availability         +++      +               +
Concurrency          +++      +               +
Latency sensitivity  +++      +               ++
Throughput oriented  +        +++             +++
Ad hoc               +        +               ++
I/O Operations       Mix R/W  Read Intensive  Read Intensive

Analytical workloads are truly ad-hoc in nature. Whereas reporting aims to provide perspective on the numbers being presented, analytics provides insight into why the numbers are what they are. Reporting tells you how many new accounts were acquired by the private banking account team; analytics aims to explain why the team did not hit its quota in the last quarter. Analytics can query multiple databases and can involve multi-step processes. Typically, analytic queries write out large temporary results, potentially generating large intermediate results before slicing and dicing the temporary data again. This data needs to be stored as fast as possible, and it is read again by the next query, so read performance is crucial as well. The output of one query is the input of the next, and this can happen multiple times, requiring both fast read and write performance; otherwise your query will slow down dramatically.

Another challenge is the sort process: for example, you may be retrieving data that needs to be sorted, but the dataset is so large that you can’t hold everything in memory during the sort, resulting in data spilling to disk.
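This spill-to-disk behavior is essentially an external merge sort. Here is a toy Python sketch (a simplification of what a real database sort does), where `chunk_size` stands in for the memory budget:

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size=4):
    """Toy external merge sort: sort chunks that fit 'in memory',
    spill each sorted run to disk, then merge the runs back."""
    run_dir = tempfile.mkdtemp()
    runs = []
    for i in range(0, len(values), chunk_size):
        run = sorted(values[i:i + chunk_size])   # in-memory sort of one chunk
        path = os.path.join(run_dir, f"run{len(runs)}.txt")
        with open(path, "w") as f:               # spill the sorted run to disk
            f.writelines(f"{v}\n" for v in run)
        runs.append(path)
    # Merge the on-disk runs, reading them back sequentially: this is
    # why both write and read throughput matter for large sorts.
    files = [open(p) for p in runs]
    streams = [(int(line) for line in f) for f in files]
    merged = list(heapq.merge(*streams))
    for f in files:
        f.close()
    return merged

print(external_sort([9, 1, 8, 2, 7, 3, 6, 4, 5]))
```

Every element is written once during the spill phase and read once during the merge phase, so the sort's speed is bounded by sequential write and read throughput of the spill device, which is exactly where server side flash and RAM help.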

Because analytics queries can be truly ad-hoc in nature, it is difficult to design an efficient schema for them upfront. This makes analytics an especially difficult use case from a performance perspective.

                     OLTP     Reporting       OLAP            Analytics
Availability         +++      +               +               +
Concurrency          +++      +               +               +
Latency sensitivity  +++      +               ++              +++
Throughput oriented  +        +++             +++             +++
Ad hoc               +        +               ++              +++
I/O Operations       Mix R/W  Read Intensive  Read Intensive  Mix R/W
Designing and testing your storage architecture in line with the DB workload

By having a better grasp of the storage performance requirements of each specific database type, you can design your environment to suit its needs. Understanding these requirements also helps you test the infrastructure with a focus on the expected workload.
Instead of running “your average DB workload” in Iometer, you can test toward latency-oriented or throughput-oriented workloads once you understand what type of database will be used. The next article in this series dives into whether tuning databases or storage architectures can solve performance problems.