Power through Partnership



When your company has developed the premier product in server-side storage intelligence, one that deploys seamlessly without disruption to a customer’s environment and decouples storage performance from capacity to revolutionize virtualized data centers, you have two strong methods for conveying that value to end users and the partner community. The first is through demonstration. The second is through Alliance Partner certifications.

Regarding the first, PernixData FVP software is easily experienced by prospective customers every day through our free trial offer. In less than 20 minutes, any company can see firsthand how FVP speeds VM performance and maximizes SAN utilization in its own environment.

With respect to the second, PernixData has taken great strides to establish marquee partnerships in the storage and virtualization space. For example, since the company’s inception it has been an Elite Level Technology Alliance Partner with VMware. This status has enabled our development team to ensure we are aligned with VMware’s launch cycles. Additionally, we are a VMware PVSP (Partner Verified & Supported Product), which provides PernixData customers assurance that our product is designed with VMware’s demanding support standards in mind and is always interoperable with vSphere as new versions emerge.

Similarly, we developed the PernixDrive Program to create a vast ecosystem to promote decoupled storage architectures. Through rigorous lab testing and joint sales/marketing activities with vendors like Kingston, Intel, Micron, HGST and more, PernixData provides assurances to our end-users that new decoupled storage solutions work as advertised.

In addition, PernixData made two recent announcements that have expanded our partner ecosystem. We joined both the VCE (EMC) Technology Alliance Program and the Citrix Ready Partner Program as a Premier Partner. Our product has completed the rigorous interoperability requirements that VCE mandates for achieving the Vblock Ready designation. Likewise, FVP passed the intense interoperability process to achieve Citrix Ready status. These certifications provide customers and partners alike with the assurance that our products comply with our Alliance Partners’ high standards and deliver the strong value we promise.

For an infrastructure software provider whose offering can accelerate any workload across any VMware virtual server in a fault-tolerant manner, these designations give resellers and end users confidence that FVP software is a datacenter-ready solution that interoperates with the industry’s leading solutions while yielding high value. Look for more of these types of announcements as we build on and extend our lead as the premier platform for decoupled storage.

FVP Software is Vblock Ready



It was only a matter of time before the Vblock Systems that revolutionized IT infrastructure formally met FVP software, the premier platform for server side storage intelligence. Since FVP software utilizes server resources to become a SAN’s best friend and Vblock Systems improve efficiency, speed, and reliability, it was a match made in heaven. As such, today I’m pleased to announce that FVP software is now Vblock Ready certified!

Technically, VCE describes this certification as “a comprehensive test that assures customers that a partner solution has met all entrance, integration, and interoperability criteria and are technically ready for use with Vblock Systems.” Practically, this means that new and existing Vblock customers (some already using FVP software) have assurance that VCE supports their decision to implement FVP and that they aren’t sacrificing the pre-integration, testing, and validation value that Vblock Systems provide when they install FVP software to create a low latency I/O acceleration tier.

Although an important achievement in itself, the Vblock Ready technical designation marks just the beginning of the PernixData/VCE partnership. FVP software adds a new level of performance to existing Vblock Systems, and, working through mutual Solution Providers, customers will now have the ability to configure FVP software as part of the modular architecture of new Vblock Systems. In the coming months we’re looking forward, among other things, to publishing new reference architectures that use Distributed Fault Tolerant Memory (DFTM) to handle the most demanding applications and VCE Vision to perform automated FVP cluster operations with additional ease. Now, with the addition of FVP software, there’s no application that can’t be virtualized on what VCE calls “the world’s most advanced converged infrastructure.”

Want more information? Here is an FAQ we put together on the Vblock Ready certification and a joint FVP/Vblock solution brief.

Distributed Fault Tolerant Memory (DFTM) in FVP 2.0



I have spent a considerable amount of time designing operating systems, databases and other enterprise software to leverage fast media like RAM. For example, while in graduate school I researched TLB design and evaluated TLB management algorithms for a variety of workloads. After that, I participated in the design and development of operating systems’ virtual memory and large scale Relational Database Management Systems (RDBMS), including in-memory databases.

Through these experiences I’ve learned two key lessons about using RAM:

  1. The cost of navigating the memory hierarchy is prohibitively expensive for most applications. As a result, applications prefer to leverage RAM for data accesses whenever possible. Databases, for example, have used buffer caches from their early days to cache the most recently used data.
  2. RAM, in contrast to disk, is a volatile medium. This means that applications that care about data integrity and fault tolerance are forced to use a non-volatile medium, usually disk, in conjunction with RAM. Databases, for example, require that transaction commits happen on disk to meet ACID requirements.

You’ll notice that these two lessons are in direct conflict with each other. On the one hand, the prohibitive cost of navigating the memory hierarchy means applications prefer never touching disk for accesses. Yet, RAM’s volatility and lack of fault tolerance means applications are forced to use disk in conjunction with RAM. This latter point is a difficult one to dismiss, which means RAM has played a limited role in application performance to date.

Recent trends in the industry have begun to address this dichotomy, albeit incompletely. As multi-terabyte servers become more common, for example, application vendors are bringing to market purpose-built appliances with large RAM footprints and flash to alleviate the concerns I raised above. Unfortunately, these products create huge operational and management overheads because they introduce new silos in the data center that fly in the face of standardization via virtualization.

As PernixData looked to leverage server-side RAM for storage acceleration, we wanted to solve all of these limitations. More specifically, we wanted to:

  1. Allow applications to leverage RAM as an acceleration medium for all data accesses, both reads and writes.
  2. Allow applications to not worry about fault tolerance when using RAM. In other words, we wanted to provide our customers with the same fault tolerance and data integrity guarantees for RAM that they are used to with non-volatile media such as disk (or flash).
  3. Fully integrate with a customer’s existing investments in virtualization, servers and storage to avoid operational and management overhead.
  4. Allow our customers to leverage servers with very large amounts of RAM in a scalable way.

I am proud to say that we have achieved all these goals with Distributed Fault Tolerant Memory (DFTM). DFTM is a new feature in release 2.0 of FVP software that lets you cluster server RAM into a fault-tolerant acceleration tier. For the first time ever, enterprises can use a volatile medium in a non-volatile way with no operational or management overhead. In addition, FVP supports the ability to leverage up to 1 TB of RAM per host for VM acceleration. This means that even the most resource-intensive applications, such as very large databases, can now both be virtualized and use RAM for storage performance in a fault-tolerant way.
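To make this concrete, here is a minimal, hypothetical sketch of the general idea behind a fault-tolerant RAM tier. It is not PernixData’s implementation, and every class, method and value in it is invented for illustration: a write is acknowledged only after a copy also lands in the RAM of one or more peer hosts, so the failure of a single host does not lose an acknowledged write.

    # Conceptual sketch only -- not PernixData's implementation. All names are invented.
    class PeerHost:
        """Stands in for the RAM footprint contributed by another host in the cluster."""
        def __init__(self, name):
            self.name = name
            self.replica_store = {}

        def replicate(self, block_id, data):
            self.replica_store[block_id] = data  # copy now held in the peer's RAM
            return True                          # replication acknowledged

    class FaultTolerantRamTier:
        def __init__(self, peers, replicas=1):
            self.local_ram = {}        # local, volatile acceleration tier
            self.peers = peers
            self.replicas = replicas

        def write(self, block_id, data):
            """Acknowledge a write only after the local copy plus N peer copies exist."""
            self.local_ram[block_id] = data
            acks = sum(peer.replicate(block_id, data) for peer in self.peers[:self.replicas])
            if acks < self.replicas:
                raise IOError("replication failed; write cannot be acknowledged")
            return True  # safe to acknowledge to the VM; destaging to the array happens later

        def read(self, block_id):
            return self.local_ram.get(block_id)  # RAM-speed read hit

    tier = FaultTolerantRamTier([PeerHost("esx02"), PeerHost("esx03")], replicas=1)
    tier.write("lba-42", b"transaction-commit-record")
    print(tier.read("lba-42"))

The essential point is the ordering: the acknowledgement is withheld until a replica exists, which is what allows a volatile medium to be used in a non-volatile way.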

Here’s to tomorrow’s data center!

Database workload characteristics and their impact on storage architecture design – part 2 – Data pipelines



Welcome to part 2 of the Database workload characteristics series. Databases are considered to be one of the biggest I/O consumers in the virtual infrastructure. Database operations and database design are a study unto themselves, but I thought it might be interesting to take a small peek beneath the surface of database design. I turned to our resident database expert Bala Narasimhan, PernixData’s Director of Products, to provide some insights into database designs and their I/O preferences.

Question 2: You mentioned data pipelines in your previous podcast, what do you mean by this?

What I mean by data pipeline is the process by which data flows through the enterprise. Data is not a static entity in the enterprise; it flows through the enterprise continuously and at various points is used for different things. As mentioned in part 1 of this series, data usually enters the pipeline via OLTP databases, and this can be from numerous sources. For example, retailers may have Point of Sale (POS) databases that record all transactions (purchases, returns etc.). Similarly, manufacturers may have sensors that continuously send data about the health of their machines to an OLTP database. It is very important that this data enter the system as fast as possible. In addition, these databases must be highly available, support high concurrency and deliver consistent performance. Low latency transactions are the name of the game in this part of the pipeline.

At some point, the business may be interested in analyzing this data to make better decisions. For example, a product manager at the retailer may want to analyze the Point of Sale data to better understand what products are selling at each store and why. In order to do this, he will need to run reports and analytics on the data. But as we discussed earlier, these reports and analytics are usually throughput bound and ad-hoc in nature. If we run these reports and analytics on the same OLTP database that is ingesting the low latency Point of Sale transactions then this will impact the performance of the OLTP database. Since OLTP databases are usually customer facing and interactive, a performance impact can have severe negative outcomes for the business.

As a result, what enterprises usually do is Extract the data from the OLTP database, Transform the data into a new shape and Load it into another database, usually a data warehouse. This is known as the ETL process. To do the ETL, customers use a solution such as Informatica or Hadoop between the OLTP database and the data warehouse. Sometimes customers will simply pull in all the data from the OLTP database (a read-intensive, larger block size, throughput-sensitive query) and then do the ETL inside the data warehouse itself. Transforming the data into a different shape requires reading the data, modifying it, and writing the data into new tables. You’ve most probably heard of nightly loads into the data warehouse; this is the process being referred to!

As we discussed before, OLTP databases may have a normalized schema and the data warehouse may have a more denormalized schema such as a Star schema. As a result, you can’t simply do a nightly load of the data directly from the OLTP database into the data warehouse as is. Instead you have to Extract the data from the OLTP database, Transform it from a normalized schema to a Star schema and then Load it into the data warehouse. This is the data pipeline. Here is an image that explains this:

[Image: the ETL data pipeline]
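To make the transformation step concrete, here is a hedged Python sketch using sqlite3. The schemas, table names and values are invented for the example: rows are extracted from a normalized OLTP point-of-sale schema, joined into denormalized fact rows, and loaded into a warehouse table.

    # Minimal ETL sketch for illustration only; all tables and columns are invented.
    import sqlite3

    oltp = sqlite3.connect(":memory:")
    oltp.executescript("""
        CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
        CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, customer_id INTEGER,
                            product TEXT, amount REAL, sold_at TEXT);
        INSERT INTO customers VALUES (1, 'Alice', 'Utrecht');
        INSERT INTO sales VALUES (100, 1, 'widget', 19.99, '2014-08-01');
    """)

    warehouse = sqlite3.connect(":memory:")
    warehouse.execute("""CREATE TABLE sales_fact
                         (sale_id INTEGER, customer_name TEXT, city TEXT,
                          product TEXT, amount REAL, sold_at TEXT)""")

    # Extract: a throughput-bound read of the OLTP tables (often done as a nightly load).
    rows = oltp.execute("""SELECT s.sale_id, c.name, c.city, s.product, s.amount, s.sold_at
                           FROM sales s JOIN customers c ON c.customer_id = s.customer_id""")

    # Transform + Load: write the denormalized fact rows into the warehouse.
    warehouse.executemany("INSERT INTO sales_fact VALUES (?, ?, ?, ?, ?, ?)", rows)
    warehouse.commit()
    print(warehouse.execute("SELECT * FROM sales_fact").fetchall())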

In addition, there can also be continuous small feeds of data into the data warehouse by trickle loading small subsets of data, such as the most recent or freshest data. By using the freshest data in your data warehouse you make sure that the reports you run and the analytics you do are not stale but up to date, enabling the most accurate decisions.

As mentioned earlier, the ETL process and the data warehouse are typically throughput bound. Server side flash and RAM can play a huge role here because the ETL process and the data warehouse can now leverage the throughput capabilities of these server side resources.

Using PernixData FVP

Some specific, key benefits of using FVP with the data pipeline include:

  • OLTP databases can leverage the low latency characteristics of server-side flash and RAM. This means more transactions per second and higher levels of concurrency, all while providing protection against data loss via FVP’s write back replication capabilities.
  • Trickle loads of data into the data warehouse get tremendously faster in Write Back mode because new rows are added to the table as soon as they touch the server-side flash or RAM.
  • The reports and analytics may execute joins, aggregations, sorts etc. These require rapid access to large volumes of data and can also generate large intermediate results. High read and write throughput is therefore beneficial, and having this done on the server, right next to the database, helps performance tremendously. Again, Write Back is a huge win.
  • Analytics can be ad hoc, and any tuning the DBA has done may not help. Having the base tables on flash via FVP can help performance tremendously for ad hoc queries.
  • Analytics workloads tend to create and leverage temporary tables within the database. Using server-side resources enhances performance for both read and write accesses to these temporary tables.
  • In addition, there is also a huge operational benefit. We can now virtualize the entire data pipeline (OLTP databases, ETL, data warehouse, data marts etc.) because we are able to provide high and consistent performance via server-side resources and FVP. This brings together the best of both worlds: leverage the operational benefits of a virtualization platform, such as vSphere HA, DRS and vMotion, and standardize the entire data pipeline on it without sacrificing performance at all.

Database workload characteristics and their impact on storage architecture design – part 1



PernixData FVP is frequently used to accelerate databases. For many, databases are a black box. Sure, we all know they consume resources like there is no tomorrow, but can we make some general statements about database resource consumption from a storage technology perspective? I asked Bala Narasimhan, our Director of Products, a couple of questions to get a better understanding of database operations and how FVP can help provide the performance the business needs.

The reason I asked Bala about databases is his rich background in database technology. After spending some time at HP writing kernel memory management software, he moved to Oracle, where he was responsible for SGA and PGA memory management. One of his proudest achievements was building the automatic memory management in 10g. He then went on to work at a startup where he rewrote the open source database Postgres to be a scale-out, columnar relational database for data warehousing and analytics. Bala recently recorded a webinar on eliminating performance bottlenecks in virtualized databases. Bala’s Twitter account can be found here. As databases are an extensive topic, this article is split into a series of smaller articles to make it more digestible.

Question 1: What are the various databases use cases one typically sees?

There is a spectrum of use cases, with OLTP, Reporting, OLAP and Analytics being the common ones. Reporting, OLAP (online analytical processing) and Analytics can be seen as part of the data warehousing family. OLTP (online transaction processing) databases are typically aligned with a single application and act as an input source for data warehouses. A data warehouse can therefore be seen as a layer on top of the OLTP database optimized for reporting and analytics.

When you set up architectures for databases you have to ask yourself: what are you trying to solve? What are the technical requirements of the workload? Is the application latency sensitive, or do you want to read a lot of data as fast as possible, i.e. is it throughput bound? As you go from left to right in the table below, the average block size grows. Hint: a larger average block size generally means you are dealing with a more throughput-bound workload rather than a latency-sensitive one. From left to right, the database design also goes from normalized to denormalized.

OLTP Reporting OLAP Analytics
Database Schema Design

OLTP is an excellent example of a normalized schema. A database schema can be seen as a container of objects; it allows you to logically group objects such as tables, views and stored procedures. When using a normalized schema you split a table into smaller tables. For example, let’s assume a bank database has only one table that logs all activities by all its customers. This means there are multiple rows in this table for each customer. Now, if a customer updates her address, you need to update many rows in the database for the database to be consistent. This can have an impact on the performance and concurrency of the database. Instead, you could build out a schema for the database such that there are multiple tables and only one table holds customer details. This way, when the customer changes her address, you only need to update one row in this table, which improves concurrency and performance. If you normalize your database enough, every insert, delete and update statement will only hit a single table: very small updates that require fast responses, therefore small blocks and very latency-sensitive I/O.
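A minimal sketch of this bank example, with invented table definitions, shows why normalization keeps OLTP updates small: the address lives in exactly one row, so the change is a single latency-sensitive statement instead of an update of every activity row.

    # Illustrative sketch only; the schema is invented for the example.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, address TEXT);
        CREATE TABLE activities (activity_id INTEGER PRIMARY KEY, customer_id INTEGER,
                                 activity TEXT, amount REAL);
        INSERT INTO customers VALUES (1, 'Jane', 'Old Street 1');
        INSERT INTO activities VALUES (10, 1, 'deposit', 100.0), (11, 1, 'withdrawal', 40.0);
    """)

    # Address change: one small update against one table, not every activity row.
    db.execute("UPDATE customers SET address = 'New Street 9' WHERE customer_id = 1")
    db.commit()
    print(db.execute("SELECT * FROM customers").fetchall())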

While OLTP databases tend to be normalized, data warehouses tend to be denormalized and therefore have fewer tables. For example, when querying the database to find out who owns account 1234, it needs to join two tables, the Account table with the Customer table. In this example it is a two-way join, but it is possible for data warehousing systems to do many-way joins (that is, joining multiple tables at once), and these are generally throughput bound.

Business Processes

An interesting way to look at a database is its place in a business process. This gives you insight into the availability, concurrency and response requirements of the database. Typically OLTP databases are at the front of the process, the customer-facing process; dramatically put, they are in the line of fire. You want fast responses; you want to read, insert and update data as fast as possible, and therefore the databases are heavily normalized for the reasons described above. When the OLTP database is performing slowly or is unavailable, it will typically impact revenue-generating processes. Data warehousing operations generally occur away from customer-facing operations. Data is typically loaded into the data warehouse from multiple sources to provide the business insights into its day-to-day operations. For example, a business may want to understand from its data how it can drive quality and cost improvements. While we talk about a data warehouse as a single entity, this is seldom the case. Many times you will find that a business has one large data warehouse and many so-called ‘data marts’ that hang off it. Database proliferation is a real problem in the enterprise, and managing all these databases and providing them the storage performance they need can be challenging.

Let’s dive into the four database types to understand their requirements and the impact on architecture design:

OLTP

OLTP workloads have a good mix of read and write operations. They are latency sensitive and require support for high levels of concurrency. When talking about concurrency, a good example is ATMs. Each customer at an ATM generates a connection doing a few simple operations; however, a bank typically has a lot of ATMs servicing its many customers concurrently. If a customer wants to withdraw money, the process needs to read the records of the customer in the database, confirm that he or she is allowed to withdraw the money, and then record (write) the transaction. In DBA jargon that is a SQL SELECT statement followed by an UPDATE statement. A proper OLTP database should be able to handle a lot of users at the same time, preferably with low latency. It is interactive in nature, meaning that latency impacts user experience. You cannot keep the customer waiting for a long time at the ATM or at a bank teller. From an availability perspective you cannot afford to have the database go down; the connections cannot be lost; it just needs to be up and running all the time (24×7).
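Here is a small, hypothetical sketch of that ATM interaction (the schema and amounts are made up): a SELECT to check the balance followed by an UPDATE to record the withdrawal, committed as one short transaction, which is exactly the kind of small, latency-sensitive unit of work an OLTP database must handle at high concurrency.

    # Hedged sketch of the ATM example; table and values are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, balance REAL)")
    db.execute("INSERT INTO accounts VALUES (1234, 500.0)")
    db.commit()

    amount = 60.0
    with db:  # one transaction: either both statements take effect or neither does
        (balance,) = db.execute(
            "SELECT balance FROM accounts WHERE account_id = ?", (1234,)).fetchone()
        if balance >= amount:
            db.execute("UPDATE accounts SET balance = balance - ? WHERE account_id = ?",
                       (amount, 1234))
    print(db.execute("SELECT balance FROM accounts WHERE account_id = 1234").fetchone())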

                        OLTP      Reporting       OLAP            Analytics
Availability            +++
Concurrency             +++
Latency sensitivity     +++
Throughput oriented     +
Ad hoc                  +
I/O Operations          Mix R/W
Reporting

Reporting databases experience predominantly read-intensive operations and require more throughput than anything else. Concurrency and availability are not as important for reporting databases as they are for OLTP. Characteristically, the workload consists of repeated reads of data. Reporting is usually done when users want to understand the performance of the business, for example how many accounts were opened this week, how many accounts were closed, and is the private banking account team hitting its quota of acquiring new customers? Think of reporting as predictable requests: the user knows what data he wants to see and has a specific report design that structures the data in the order needed to understand these numbers. This means the report is repetitive, which allows the DBA to design and optimize the database and schema so that this query gets executed predictably and efficiently. The database design can be optimized for this report. Typical database schema designs for reporting include the Star Schema and the Snowflake Schema.

As it serves back-office processes, availability and concurrency are not strict requirements for this kind of database, as long as the database is available when the report is required. Enhanced throughput, however, helps tremendously.

                        OLTP      Reporting       OLAP            Analytics
Availability            +++       +
Concurrency             +++       +
Latency sensitivity     +++       +
Throughput oriented     +         +++
Ad hoc                  +         +
I/O Operations          Mix R/W   Read Intensive
OLAP

OLAP can be seen as the analytical counterpart of OLTP. Where OLTP is the original source of data, OLAP is the consolidation of data, typically originating from various OLTP databases. A common remark made in the database world is that OLAP provides a multi-dimensional view, meaning that you drill down into the data coming from various sources and then analyze it across different attributes. This workload is more ad hoc in nature than reporting, as you slice and dice the data in different ways depending on the nature of the query. The workload is primarily read intensive and can run complex queries involving aggregations of multiple databases, and is therefore throughput oriented. An example of an OLAP query would be the amount of additional insurance services gold credit card customers signed up for during the summer months.

                        OLTP      Reporting       OLAP            Analytics
Availability            +++       +               +
Concurrency             +++       +               +
Latency sensitivity     +++       +               ++
Throughput oriented     +         +++             +++
Ad hoc                  +         +               ++
I/O Operations          Mix R/W   Read Intensive  Read Intensive
Analytics

Analytics workloads are truly ad hoc in nature. Whereas reporting aims to put the numbers being presented in perspective, analytics provides insight into why the numbers are what they are. Reporting shows how many new accounts were acquired by the private banking account team; analytics aims to explain why the private banking account team did not hit its quota in the last quarter. Analytics can query multiple databases and can involve multi-step processes. Typically, analytics queries write out large temporary results, potentially generating large intermediate results before slicing and dicing the temporary data again. This means the data needs to be stored as fast as possible, and because it is read again by the next query, read performance is crucial as well. The output of one query is the input of the next, and this can happen multiple times, requiring both fast read and write performance; otherwise your query will slow down dramatically.

Another problem is the sort process: for example, you are retrieving data that needs to be sorted, but the dataset is so large that you can’t hold everything in memory during the sort, resulting in data spilling to disk.

Because analytics queries can be truly ad hoc in nature, it is difficult to design an efficient schema for them upfront. This makes analytics an especially difficult use case from a performance perspective.

                        OLTP      Reporting       OLAP            Analytics
Availability            +++       +               +               +
Concurrency             +++       +               +               +
Latency sensitivity     +++       +               ++              +++
Throughput oriented     +         +++             +++             +++
Ad hoc                  +         +               ++              +++
I/O Operations          Mix R/W   Read Intensive  Read Intensive  Mix R/W
Designing and testing your storage architecture in line with the DB workload

By having a better grasp of the storage performance requirements of each specific database, you can design your environment to suit its needs. Understanding these requirements also helps you test the infrastructure with a sharper focus on the expected workload. Instead of running “your average DB workload” in Iometer, this allows you to test more towards latency-oriented or throughput-oriented workloads once you understand what type of database will be used. The next article of this series dives into understanding whether tuning databases or storage architectures can solve performance problems.

Storage I/O requirements of SAP HANA on vSphere 5.5



During VMworld a lot of attention went to the support of SAP HANA on vSphere 5.5 for production environments. SAP HANA is an in-memory database platform for running real-time analytics and real-time applications.

In its white paper “Best practices and recommendations for Scale-Up Deployments of SAP HANA on VMware vSphere”, VMware states that vMotion, DRS and HA are supported for virtualized SAP HANA systems. This is amazing and very exciting. Being able to run these types of database platforms virtualized is a big deal. You can finally leverage the mobility and isolation benefits provided by the virtual infrastructure and get rid of the rigid physical landscapes that are costly to maintain and a pain to support.

When digging deeper into the architecture of the SAP HANA platform, you discover that SAP HANA has to write to disk even though it is an in-memory database platform. Writing to disk allows HANA to provide ACID guarantees for the database. ACID stands for Atomicity, Consistency, Isolation and Durability, and these properties guarantee that database transactions are processed reliably.

On a side note, the release of SAP HANA support triggered me to dive into database structures and architectures. Luckily for me, our Director of Products has an impressive track record in DB design, so I spent a few hours with him to learn more about this. This information will be shared in a short series soon. But I digress.

The document “SAP HANA – Storage Requirements”, available at saphana.com, provides detailed insight into the storage I/O behavior of the platform. On page 4 the following statement is made: SAP HANA uses storage for several purposes:

“Data: SAP HANA persists a copy of the in-memory data, by writing changed data in the form of so-called save point blocks to free file positions, using I/O operations from 4 KB to 16 MB (up to 64 MB when considering super blocks) depending on the data usage type and number of free blocks. Each SAP HANA service (process) separately writes to its own save point files, every five minutes by default.

Redo Log: To ensure the recovery of the database with zero data loss in case of faults, SAP HANA records each transaction in the form of a so-called redo log entry. Each SAP HANA service separately writes its own redo-log files. Typical block-write sizes range from 4KB to 1MB”

So it makes sense to use a fast storage platform that can process various block sizes very fast. That means low latency and high throughput, which server-side resources can provide very easily.

In the document “SAP HANA Guidelines for being virtualized with VMware vSphere”, also available on saphana.com, the following statement is made in section 4.5.3, Storage Requirements:

SAP and VMware recommend to following the VMware Best Practices for SAP HANA virtualized with VMware vSphere with regards to technical storage configuration in VMware. Especially virtual disks created for log volumes of SAP HANA should reside on local SSD or PCI adapter flash devices if present. Central enterprise storage may be used in terms of the SAP HANA tailored data center integration approach.

It is an interesting standpoint. SAP recommends using flash, and it makes sense: what is the point of running such a platform in memory when your storage platform is slow? However, when using local flash storage you reintroduce a static workload into your virtual infrastructure. SAP HANA supports the use of enhanced vMotion (migrating a VM between two hosts and two datastores simultaneously); however, at the time of writing, DRS does not leverage enhanced vMotion for load-balancing operations. This results in the loss of automatic load balancing and potentially reduces the ability of vSphere High Availability to recover virtual machines.

Instead of introducing rigid and siloed architectures, it makes sense to use PernixData FVP. FVP, supported by VMware, allows for clustered and fault-tolerant I/O acceleration using flash or (soon) memory. By virtualizing these acceleration resources into one seamless pool, VMs can migrate to any host in the cluster while still being able to retrieve their data throughout the cluster.

SAP HANA accelerates operations by keeping data in memory, while FVP accelerates writes by leveraging the available acceleration resources. In vSphere 5.5, SAP HANA is limited to 1 TB of memory due to the maximum virtual machine configuration; however, vSphere 5.5 supports a host memory configuration of 4 TB. With the soon-to-be-released FVP 2.0 with memory support, FVP allows you to leverage the remaining memory to accelerate the writes as well, making it a true in-memory platform.

Virtual machines versus containers – who will win?



Ah, round X in the battle over who will win, which technology will prevail, and when the displacement of technology will happen. Can we stop with this nonsense, with this everlasting tug-of-war mimicking the characteristics of a schoolyard battle? And I can’t wait to hear these conversations at VMworld.

In reality there aren’t that many technologies that completely displaced a prevailing technology. We all remember the birth of the CD and the message of revolutionising music carriers. And in a large way it did, yet there are still many people who prefer to listen to vinyl and experience the subtle sounds of the medium, giving it more warmth and character. The only solution I can think of that displaced the dominant technology was the video disc (DVD & Blu-ray), rendering video tape (VHS/Betamax) completely obsolete. There isn’t anybody (well, let’s only use the subset of sane people) who prefers a good old VHS tape over a Blu-ray disc. The dialog of “Nah, let’s leave the Blu-ray for what it is and pop in the VHS tape, cause I like to have that blocky, grainy experience” will not happen very often, I expect. So in reality most technologies coexist.

Fast forward to today. Docker’s popularity put Linux containers on the map for the majority of the IT population. A lot of people are talking about it and see the merits of leveraging a container instead of using a virtual machine. To me the choice seems to stem from the layer at which you present and manage your services. If your application is designed to provide high availability and scalability, then a container may be the best fit. If your application isn’t, then place it in a virtual machine and leverage the services provided by the virtual infrastructure. Sure, there are many other requirements and constraints to incorporate in your decision tree, but I believe the service availability argument should be one of the first steps.

Now the next step is: where do you want to run your container environment? If you are a VMware shop, are you going to invest time and money to expand your IT services with containers, or are you going to leverage an online PaaS provider? Introducing an app-centric solution into an organization that has years of experience in managing infrastructure-centric platforms might require a shift of perspective.

Just my two cents.

An Amazing Year for PernixData



I know it might seem odd to read a “year end wrap-up” in August, but PernixData just closed its first fiscal year and we wanted to share our excitement. To say this was an incredible year for the company would be an understatement.

In the last twelve months, we launched our groundbreaking FVP software, won thirteen industry awards, and enabled approximately 200 companies in 20 countries to build decoupled storage architectures that cost effectively solve VM performance challenges. In fact, PernixData set a record for first year revenue booked in the enterprise infrastructure software space. Yes, you heard that correctly. We sold 40% more than any other software infrastructure company in their first year of shipping!

I am also pleased to say that PernixData just closed an oversubscribed round of funding, raising $35 million from Menlo Ventures, Kleiner Perkins Caufield & Byers, and Lightspeed Ventures, with individual investments from Marc Benioff, Steve Luczo, Jim Davidson, John Thompson, Mark Leslie, and Lane Bess. It is a huge validation that the most respected people and institutions in the industry are willing to invest in the future that PernixData is building.

We are well on our way to becoming the de-facto standard for storage acceleration software. To that end, I wanted to thank each and every one of you for contributing to our extraordinary success. I look forward to sharing many more wonderful milestones with you in the coming year.
Sincerely -
Poojan Kumar

Interesting Facts

  • Revolutionary Product – PernixData first began shipping FVP software in August of 2013. In a short period of time, the product established itself as the premier platform for storage acceleration with unique features like write acceleration, clustering, and installation within the hypervisor. In April 2014, FVP also became the first acceleration solution to run on both flash and RAM, and to support any file, block or direct attached storage. Today, FVP is estimated to be accelerating approximately 120,000 VMs worldwide.
  • Broad Range of Customers – PernixData has sold FVP software to approximately 200 companies in 20 countries, ranging from three to 300+ node deployments. Customers range from small businesses to large enterprises and global service providers, including well-known names like Tata Steel, Virtustream, Sega, Toyota and more. More customer examples can be found in our resource center. Extremely high customer satisfaction resulted in strong follow-on business for PernixData in the first year, with existing customers contributing approximately 30% of total bookings. On average, follow-on orders were twice the size of the original and came in less than three months after the original purchase.
  • The Most Industry Awards – PernixData was distinguished this past year with more industry awards (thirteen) than any other enterprise software company. Specific accolades include two Best New Technology awards at VMworld 2013, Forbes Most Promising Company, Gartner 2014 Cool vendor in Storage, and Infoworld Product of the Year. A full list of awards can be viewed here.
  • World-class Partners – PernixData has signed almost 250 resellers worldwide in approximately 50 countries. In addition, the company has a global resale agreement in place with Dell. On the technology side, the company introduced the PernixDrive program to accelerate the adoption of decoupled software, which has led to various partnerships with leading flash vendors like HGST, Intel, Kingston, Micron, Toshiba, and more.

Infographic

Stop wasting your Storage Controller CPU cycles



Typically when dealing with storage performance problems, the first questions asked are: what type of disks? What speed? What protocol? However, your problem might be at the first port of call of your storage array: the storage controller!

When reviewing the storage controller configurations of the most popular storage arrays, one thing stood out to me: the absence of CPU specs. The storage controllers of a storage array are just plain simple servers, equipped with a bunch of I/O ports that establish communication with the back-end disks and provide a front-end interface to communicate with the attached hosts. The storage controllers run proprietary software providing data services and specific storage features. And providing data services and running that software requires CPU power! After digging some more, I discovered that most storage controllers are equipped with two CPUs ranging from quad core to eight core. Sure, there are some exceptions, but let’s stick to the most common configurations. This means that the typical enterprise storage array is equipped with 16 to 32 cores in total, as it comes with two storage controllers. 16 to 32 cores, that’s it! What are these storage controller CPUs used for? Today’s storage controller activities and responsibilities include:

  • Setting up and maintaining data paths.
  • Mirror writes for write-back cache between the storage controllers for redundancy and data availability.
  • Data movement and data integrity.
  • Maintaining RAID levels and calculating & writing parity data.
  • Data services such as snapshots and replication.
  • Internal data saving services such as deduplication and compression.
  • Executing multi-tiering algorithms and promoting and demoting data to the appropriate tier level.
  • Running integrated management software providing management and monitoring functionality of the array.

Historically, arrays were designed to provide centralised data storage to a handful of servers. I/O performance was not a pain point, as many arrays easily delivered the requests each single server could make. Then virtualisation hit the storage array. Many average I/O consumers were grouped together on a single server, making that server, as George Crump would call it, a fire-breathing I/O demon. Mobility of virtual machines required increased connectivity, such that it was virtually impossible (no pun intended) to manually balance I/O load across the available storage controller I/O ports. The need for performance increased, resulting in larger numbers of disks managed by the storage controller, of different types and different speeds.

Virtualization-first policies pushed all types of servers and their I/O patterns onto the storage array, introducing the need for new methods of software-defined economics (did someone coin that term?). It became obvious that not every virtual machine requires the fastest resource 24/7, sparking interest in multi-tiered solutions. Multi-tiering requires smart algorithms promoting and demoting data when it makes sense, providing the best performance to the workload when required while offering the best level of economics to the organisation. Snapshotting, dedup and other internal data saving services raised the need for CPU cycles even more. With the increase in I/O demand and the introduction of new data services, it’s not uncommon for a virtualised datacenter to have over-utilised storage controllers.

Rethink current performance architecture
Server-side acceleration platforms increase the performance of virtual machines by leveraging faster resources (flash & memory) that are in closer proximity to the application than the storage array datastore. By keeping the data in the server layer, storage acceleration platforms such as PernixData FVP provide additional benefits to the storage area network and the storage array.

Impact of read acceleration on data movement
Hypervisors are seen as I/O blenders, sending the stream of random I/O of many virtual machines to the I/O ports of the storage controllers. These reads and writes must be processed: writes are committed to disk, and data is retrieved from disk to satisfy the read requests. All these operations consume CPU cycles. When accelerating writes, subsequent reads of that data are serviced from the flash device closest to the application. Typically data is read multiple times, decreasing latency for the application, but also unloading – relieving – the architecture from servicing that load. FVP provides metrics that show how many I/Os are saved from the datastore by servicing the data from flash. The screenshot below was taken after 6 weeks of accelerating database workloads. More info about this architecture can be found here.

[Screenshot: FVP usage metrics showing 8 billion I/Os saved from the datastore]

The storage array does not have to service those 8 billion I/Os, but not only that: 520.63 TB did not traverse the storage area network or occupy the I/O ports of the storage controllers. That means other workloads, perhaps virtualised workloads that haven’t been accelerated yet or external workloads using the same array, will not be affected by that I/O anymore. Less I/O hits the inbound I/O queues on the storage controllers, allowing other I/O to flow more freely into the storage controller; less data has to be retrieved from disk; and less I/O goes upstream from disk to I/O ports to begin its journey back from the storage controller all the way up to the application. This saves copious amounts of CPU cycles, allowing data services and other internal processes to take advantage of the available CPU cycles and increasing the responsiveness of the array.

The screenshot was provided by one of my favourite customers, and we are running a cool contest to see which applications were accelerated and how many I/Os other customers have saved.

https://twitter.com/PernixData/status/490234659436253184

Impact of Write acceleration on storage controller write cache (mirroring)

Almost all current storage arrays contain cache structures to speed up both reads and writes. Speeding up writes provides benefits to both the application and the array itself. Writing to NVRAM, where the write cache typically resides, is much faster than writing to (RAID-configured) disk structures, allowing for faster write acknowledgements. Once the acknowledgement is provided to the application, the array can “leisurely” structure the writes in the most optimal way to commit the data to the back-end disks.

To avoid the storage controller becoming a single point of failure, redundancy is necessary to avoid data loss. Some vendors use journaling and consistency points for redundancy purposes; most vendors mirror writes between the cache areas of both controllers. Mirrored write cache requires coordination between the controllers to ensure data coherency. Typically, messaging via the backplane between controllers is used to ensure correctness. Mirroring data and messaging requires CPU cycles on both controllers.

Unfortunately, even with these NVRAM structures, write problems persist even today. No matter the size or speed of the NVRAM, it’s the back-end disks’ capability to process writes that is being overwhelmed. Increasing cache sizes at the controller layer just delays the point at which write performance problems begin. Typically this occurs when there is a spike of write I/O. Remember, most ESX environments generate a constant flow of I/O; adding a spike of I/O is usually adding insult to injury for the already strained storage controller. Some controllers reserve a static portion for mirrored writes, forcing the controller to flush the data to disk when that portion begins to fill up. As the I/O keeps pouring in, the write cache has to wait to complete the incoming I/O until the current write data is committed to disk, resulting in high latency for the application. Storage controller CPUs can be overwhelmed as the incoming I/O has to be mirrored between cache structures and coherency has to be guaranteed, wasting precious CPU cycles on (a lot of) messaging between controllers instead of using them for other data services and features.

[Image: FVP absorbing writes, datastore write performance graph]

FVP in write back mode acknowledges the I/O once the data is written to the flash resources in the FVP cluster. FVP does not replace the datastore; therefore writes still have to be written to the storage array. The process of writing data to the array becomes transparent, as the application has already received the acknowledgement from FVP. This allows FVP to shape write patterns in a way that is more suitable for the array to process. Typically FVP writes the data as fast as possible, but when the array is heavily utilised FVP time-releases the I/Os. This results in a more uniform I/O pattern (the datastore write in the performance graph above). By flattening the spike, i.e. writing the same amount of I/O over a longer period of time, the storage controller can handle the incoming stream much better, avoiding forced cache flushes and CPU bottlenecks as a result.
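The write-shaping idea can be sketched as follows. This is a conceptual illustration, not FVP’s actual algorithm; the class, the rate value and the block names are all invented. Writes are acknowledged from the fast tier immediately and queued, and the queue is drained toward the array at a steady rate, so a burst reaches the storage controller as a flat, uniform stream.

    # Conceptual write-back destaging sketch; not FVP's implementation.
    from collections import deque

    class WriteBackDestager:
        def __init__(self, destage_rate_per_interval=2):
            self.queue = deque()
            self.rate = destage_rate_per_interval

        def write(self, block):
            self.queue.append(block)  # lands on flash/RAM; the VM gets its acknowledgement here
            return "ack"

        def destage(self, send_to_array):
            """Drain at most `rate` blocks per interval, flattening write bursts."""
            budget = self.rate
            while self.queue and budget > 0:
                send_to_array(self.queue.popleft())
                budget -= 1

    destager = WriteBackDestager(destage_rate_per_interval=2)
    for i in range(6):                       # a burst of six writes, all acknowledged instantly
        destager.write(f"block-{i}")
    for _ in range(3):                       # but the array only ever sees two per interval
        destager.destage(lambda block: print("to array:", block))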

FVP allows you to accelerate your workload; the acceleration of reads and writes reduces the amount of I/O hitting the array and smooths the workload pattern. Customers who implemented FVP to accelerate their workloads experience significant changes in storage controller utilisation, benefitting the external and non-accelerated workloads in the mix.

PernixData FVP Hit Rate Explained



I assume most of you know that PernixData FVP provides a clustered solution to accelerate read and write I/O. In light of this, I have received several questions about what the “Hit Rate” signifies in our UI. Since we commit every write to server-side flash, you are obviously going to have a 100% write hit rate. This is one reason why I refrain from calling our software a write caching solution!

However, the hit rate graph in PernixData FVP, as seen below, references only the read hit rate. In other words, every time we can reference a block of data on the server-side flash device it is deemed a hit. If a read request cannot be acknowledged from the local flash device, then it will need to be retrieved from the storage array. If a block needs to be retrieved from storage, it will not be registered in the hit rate graph. We do, however, copy that request into flash, so the next time that block of data is requested it will be seen as a hit.
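As a simple illustration of the arithmetic (the numbers are made up, and this is not the exact formula used by the UI): only reads served from the server-side device count as hits, and writes do not appear in this graph at all.

    # Read hit rate illustration with invented numbers.
    def read_hit_rate(reads_from_flash, reads_from_array):
        total_reads = reads_from_flash + reads_from_array
        return 100.0 * reads_from_flash / total_reads if total_reads else 0.0

    print(read_hit_rate(reads_from_flash=7500, reads_from_array=2500))  # 75.0 (%)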

Keep in mind that a low hit rate doesn’t necessarily mean that you are not getting a performance increase. For example, if you have a workload in “Write Back” mode and you have a low hit rate, this could mean that the workload has a heavy write I/O profile. So, even though you may have a low hit rate, all writes are still being accelerated because all the writes are served from the local flash device.