Virtual machines versus containers: who will win?



Ah, round X in the battle over which technology will prevail and when the displacement will happen. Can we stop with this nonsense, this everlasting tug-of-war mimicking the characteristics of a schoolyard brawl? And I can’t wait to hear these conversations at VMworld.

In reality there aren’t that many technologies that completely displaced a prevailing technology. We all remember the birth of the CD and the promise of revolutionising music carriers. In a large way it did, yet there are still many people who prefer to listen to vinyl and experience the subtle sounds of the medium, giving it more warmth and character. The only example I can think of where a dominant technology was fully displaced is the video disc (DVD and Blu-ray) rendering video tape (VHS/Betamax) completely obsolete. There isn’t anybody (well, let’s restrict ourselves to the subset of sane people) who prefers a good old VHS tape over a Blu-ray disc. The dialogue of “Nah, let’s leave the Blu-ray for what it is and pop in the VHS tape, cause I like to have that blocky, grainy experience” will not happen very often, I expect. So in reality most technologies coexist.

Fast forward to today. Docker’s popularity put Linux containers on the map for the majority of the IT population. A lot of people are talking about them and see the merits of leveraging a container instead of a virtual machine. To me the choice stems from the layer at which you present and manage your services. If your application is designed to provide its own high availability and scalability, then a container may be the best fit. If it isn’t, place it in a virtual machine and leverage the services provided by the virtual infrastructure. Sure, there are many other requirements and constraints to incorporate into your decision tree, but I believe the service availability argument should be one of the first steps.

Now the next step is: where do you want to run your container environment? If you are a VMware shop, are you going to invest time and money to expand your IT services with containers, or are you going to leverage an online PaaS provider? Introducing an app-centric solution into an organization that has years of experience managing infrastructure-centric platforms might require a shift of perspective.

Just my two cents.

An Amazing Year for PernixData



I know it might seem odd to read a “year end wrap-up” in August, but PernixData just closed its first fiscal year and we wanted to share our excitement. To say this was an incredible year for the company would be an understatement.

In the last twelve months, we launched our groundbreaking FVP software, won thirteen industry awards, and enabled approximately 200 companies in 20 countries to build decoupled storage architectures that cost-effectively solve VM performance challenges. In fact, PernixData set a record for first-year revenue booked in the enterprise infrastructure software space. Yes, you heard that correctly. We sold 40% more than any other infrastructure software company in its first year of shipping!

I am also pleased to say that PernixData just closed an oversubscribed round of funding, raising $35 million from Menlo Ventures, Kleiner Perkins Caufield & Byers, and Lightspeed Ventures, with individual investments from Marc Benioff, Steve Luczo, Jim Davidson, John Thompson, Mark Leslie, and Lane Bess. It is a huge validation that the most respected people and institutions in the industry are willing to invest in the future that PernixData is building.

We are well on our way to becoming the de-facto standard for storage acceleration software. To that end, I wanted to thank each and every one of you for contributing to our extraordinary success. I look forward to sharing many more wonderful milestones with you in the coming year.
Sincerely -
Poojan Kumar

Interesting Facts

  • Revolutionary Product – PernixData first began shipping FVP software in August of 2013. In a short period of time, the product established itself as the premier platform for storage acceleration with unique features like write acceleration, clustering, and installation within the hypervisor. In April 2014, FVP also became the first acceleration solution to run on both flash and RAM, and to support any file, block or direct attached storage. Today, FVP is estimated to be accelerating approximately 120,000 VMs worldwide.
  • Broad Range of Customers – PernixData has sold FVP software to approximately 200 companies in 20 countries, with deployments ranging from three to 300+ nodes. Customers range from small businesses to large enterprises and global service providers, including well-known names like Tata Steel, Bank of the West, Virtustream, Sega, Toyota and more. More customer examples can be found in our resource center. Extremely high customer satisfaction resulted in strong follow-on business for PernixData in the first year, with existing customers contributing approximately 30% of total bookings. On average, follow-on orders were twice the size of the original and came in less than three months after the original purchase.
  • The Most Industry Awards – PernixData was distinguished this past year with more industry awards (thirteen) than any other enterprise software company. Specific accolades include two Best New Technology awards at VMworld 2013, Forbes Most Promising Company, Gartner 2014 Cool vendor in Storage, and Infoworld Product of the Year. A full list of awards can be viewed here.
  • World-class Partners – PernixData has signed almost 250 resellers worldwide in approximately 50 countries. In addition, the company has a global resale agreement in place with Dell. On the technology side, the company introduced the PernixDrive program to accelerate the adoption of decoupled software, which has led to various partnerships with leading flash vendors like HGST, Intel, Kingston, Micron, Toshiba, and more.

Infographic

Stop wasting your Storage Controller CPU cycles



Typically, when dealing with storage performance problems, the first questions asked are: what type of disks? What speed? What protocol? Yet your problem might sit at the first port of call of your storage array: the storage controller!

When reviewing the storage controller configurations of the most popular storage arrays, one thing stood out to me: the absence of CPU specs. The storage controllers of an array are just plain servers, equipped with a bunch of I/O ports that establish communication with the back-end disks and provide a front-end interface to communicate with the attached hosts. The storage controllers run proprietary software providing data services and specific storage features, and providing those data services requires CPU power! After digging some more, I discovered that most storage controllers are equipped with two CPUs, ranging from quad core to eight core. Sure, there are some exceptions, but let’s stick to the most common configurations. This means that the typical enterprise storage array, which comes with two storage controllers, is equipped with 16 to 32 cores in total. 16 to 32 cores, that’s it! What are these storage controller CPUs used for? Today’s storage controller activities and responsibilities are listed below, followed by a rough sketch of the cycle budget they share:

  • Setting up and maintaining data paths.
  • Mirror writes for write-back cache between the storage controllers for redundancy and data availability.
  • Data movement and data integrity.
  • Maintaining RAID levels and calculating & writing parity data.
  • Data services such as snapshots and replication.
  • Internal data saving services such as deduplication and compression.
  • Executing multi-tiering algorithms, promoting and demoting data to the appropriate tier.
  • Running integrated management software providing management and monitoring functionality of the array.
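
To put that core count in perspective, here is a back-of-the-envelope cycle budget. The clock speed and the aggregate IOPS figure are assumptions chosen purely for illustration; the 32 cores come from the dual-controller configuration described above, and every item in the list has to be paid for out of this budget.

```python
# Rough cycle budget for a dual-controller array (illustrative numbers only).
cores_total = 32          # two controllers with two eight-core CPUs each
clock_hz = 2.5e9          # assumed average clock speed per core
front_end_iops = 100_000  # assumed aggregate load from all attached hosts

cycles_per_io = cores_total * clock_hz / front_end_iops
print(f"~{cycles_per_io:,.0f} CPU cycles available per I/O for RAID parity, "
      "cache mirroring, dedupe, snapshots, tiering and management combined")
```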

Historically, arrays were designed to provide centralised data storage to a handful of servers. I/O performance was not a pain point, as most arrays easily delivered the requests each individual server could make. Then virtualisation hit the storage array. Many average I/O consumers were grouped together on a single server, turning that server into, as George Crump would call it, a fire-breathing I/O demon. Mobility of virtual machines required increased connectivity, such that it was virtually impossible (no pun intended) to manually balance I/O load across the available storage controller I/O ports. The need for performance increased, resulting in a larger number of disks managed by the storage controller, different types of disks and different speeds.

Virtualization-first policies pushed all types of servers and their I/O patterns onto the storage array, introducing the need for new methods of software-defined economics (did someone coin that term yet?). It became obvious that not every virtual machine requires the fastest resource 24/7, sparking interest in multi-tiered solutions. Multi-tiering requires smart algorithms that promote and demote data when it makes sense, providing the best performance to the workload when required while offering the best level of economics to the organisation. Snapshotting, dedupe and other internal data-saving services raised the demand for CPU cycles even more. With the increase in I/O demand and the introduction of new data services, it’s not uncommon for a virtualised datacenter to have over-utilised storage controllers.

Rethink current performance architecture
Server-side acceleration platforms increase the performance of virtual machines by leveraging faster resources (flash and memory) that are in closer proximity to the application than the storage array datastore. By keeping the data in the server layer, storage acceleration platforms such as PernixData FVP provide additional benefits to the storage area network and the storage array.

Impact of read acceleration on data movement
Hypervisors act as I/O blenders, sending a stream of random I/O from many virtual machines to the I/O ports of the storage controllers. These reads and writes must be processed: writes are committed to disk, and data is retrieved from disk to satisfy the read requests. All of these operations consume CPU cycles. When data is accelerated, subsequent reads of that data are serviced from the flash device closest to the application. Typically data is read multiple times, decreasing latency for the application but also unloading – relieving – the architecture from servicing that load. FVP provides metrics that show how many I/Os are saved from the datastore by servicing the data from flash. The screenshot below was taken after six weeks of accelerating database workloads. More info about this architecture can be found here.

[Figure: FVP UI showing roughly 8 billion I/Os saved from the datastore and 520.63 TB of bandwidth saved]

The storage array does not have to service those 8 billion I/Os, and not only that: 520.63 TB did not traverse the storage area network and occupy the I/O ports of the storage controllers. That means other workloads – perhaps virtualised workloads that haven’t been accelerated yet, or external workloads using the same array – are no longer affected by that I/O. Less I/O hits the inbound queues of the storage controllers, allowing other I/O to flow more freely into the controller; less data has to be retrieved from disk, and less I/O travels upstream from disk to I/O ports to begin its journey back from the storage controller all the way up to the application. That saves copious amounts of CPU cycles, allowing data services and other internal processes to take advantage of the available cycles and increasing the responsiveness of the array.
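
As a back-of-the-envelope check, those hero numbers can be broken down into an average I/O size and a round-the-clock offload rate. The six-week window and the totals are taken from the screenshot described above; the rest is simple division, assuming decimal terabytes.

```python
# Breaking down the quoted hero numbers (approximate figures from the post).
ios_saved = 8e9                       # ~8 billion I/Os kept off the array
bytes_saved = 520.63 * 1e12           # 520.63 TB, assuming decimal terabytes
weeks = 6                             # measurement window from the screenshot

avg_io_size_kb = bytes_saved / ios_saved / 1024
avg_iops_offloaded = ios_saved / (weeks * 7 * 24 * 3600)

print(f"average I/O size    : ~{avg_io_size_kb:.0f} KB")               # ~64 KB
print(f"average offload rate: ~{avg_iops_offloaded:,.0f} IOPS, around the clock")
```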

The screenshot was provided by one of my favourite customers, and we are running a cool contest to see which applications other customers have accelerated and how many I/Os they have saved.

https://twitter.com/PernixData/status/490234659436253184

Impact of Write acceleration on storage controller write cache (mirroring)

Almost all current storage arrays contain cache structures to speed up both reads and writes. Speeding up writes benefits both the application and the array itself. Writing to NVRAM, where the write cache typically resides, is much faster than writing to (RAID-configured) disk structures, allowing for faster write acknowledgements. Once the acknowledgement is provided to the application, the array can “leisurely” structure the writes in the most optimal way to commit the data to the back-end disks.

To prevent a storage controller from becoming a single point of failure, redundancy is necessary to avoid data loss. Some vendors use journaling and consistency points for redundancy; most vendors mirror writes between the cache areas of both controllers. A mirrored write cache requires coordination between the controllers to ensure data coherency, typically via messaging across the backplane. Mirroring data and messaging consume CPU cycles on both controllers.

Unfortunately, even with these NVRAM structures, write problems persist today. No matter the size or speed of the NVRAM, it’s the back-end disks’ capability to process writes that gets overwhelmed. Increasing cache sizes at the controller layer just delays the point at which write performance problems begin, which typically happens during a spike of write I/O. Remember, most ESX environments generate a constant flow of I/O; adding a spike of I/O on top is usually adding insult to injury for the already strained storage controller. Some controllers reserve a static portion of cache for mirrored writes, forcing the controller to flush data to disk when that portion begins to fill up. As the I/O keeps pouring in, the write cache has to hold off completing incoming I/O until the current write data is committed to disk, resulting in high latency for the application. Storage controller CPUs can be overwhelmed as the incoming I/O has to be mirrored between cache structures and coherency has to be guaranteed, wasting precious CPU cycles on (a lot of) messaging between controllers instead of using them for other data services and features.
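
A minimal sketch of that latency cliff: a toy write cache is drained at an assumed back-end rate while a sustained burst pours in, and once the cache fills, every additional write has to wait on a flush. The cache size, drain rate and burst shape are all invented for illustration; the point is the cliff, not the exact values.

```python
# Toy model of an array write cache in front of slower back-end disks.
cache_capacity = 2000      # outstanding writes the cache will hold (assumed)
drain_per_tick = 300       # writes the back-end disks complete per tick (assumed)
burst = [800] * 20         # incoming writes per tick: more than the disks can drain

cached = 0
for tick, incoming in enumerate(burst):
    cached = max(0, cached - drain_per_tick)        # disks flush in the background
    accepted = min(incoming, cache_capacity - cached)
    stalled = incoming - accepted                   # these writes wait on a flush
    cached += accepted
    state = "fast ack from NVRAM" if stalled == 0 else f"{stalled} writes stalled on disk"
    print(f"tick {tick:2d}: {state}")
```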

[Figure: FVP performance graph showing write spikes being absorbed before reaching the datastore]

FVP in write-back mode acknowledges the I/O once the data is written to the flash resources in the FVP cluster. FVP does not replace the datastore, therefore writes still have to be written to the storage array. The process of writing data to the array becomes transparent, as the application has already received the acknowledgement from FVP. This allows FVP to shape write patterns in a way that is more suitable for the array to process. Typically FVP writes the data as fast as possible, but when the array is heavily utilised FVP time-releases the I/Os, resulting in a more uniform I/O pattern (the datastore writes in the performance graph above). By flattening the spike, i.e. writing the same amount of I/O over a longer period of time, the storage controller can handle the incoming stream much better, avoiding forced cache flushes and the resulting CPU bottlenecks.
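
To make the “time-release” idea concrete, here is a minimal sketch in which a burst is acknowledged from flash immediately while destaging to the array is capped at a rate the array is assumed to handle comfortably. The burst shape and the 2,500 IOPS limit are invented numbers, not anything measured from FVP.

```python
# Writes land on flash instantly; destaging to the array is rate-limited.
incoming = [1000] * 10 + [5000] * 5 + [1000] * 15   # write IOPS per second, with a spike
array_limit = 2500                                  # assumed comfortable array rate

backlog = 0
destaged = []
for iops in incoming:
    backlog += iops                      # acknowledged from flash immediately
    sent = min(backlog, array_limit)     # drain no faster than the array likes
    destaged.append(sent)
    backlog -= sent

print(f"peak sent to the array: {max(destaged)} IOPS (incoming peak: {max(incoming)} IOPS)")
print(f"backlog fully destaged: {backlog == 0}")
```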

FVP allows you to accelerate your workloads; accelerating both reads and writes reduces the amount of I/O hitting the array and reshapes the workload pattern. Customers who implemented FVP experience significant drops in storage controller utilisation, benefitting the external and non-accelerated workloads in the mix.

PernixData FVP Hit Rate Explained



I assume most of you know that PernixData FVP provides a clustered solution to accelerate read and write I/O. In light of this, I have received several questions about what the “Hit Rate” in our UI signifies. Since we commit every write to server-side flash, you would obviously have a 100% hit rate for writes, which is one reason why I refrain from calling our software a write caching solution!

However, the hit rate graph in PernixData FVP, as seen below, references only the read hit rate. In other words, every time we can serve a block of data from the server-side flash device, it is counted as a hit. If a read request cannot be satisfied from the local flash device, it needs to be retrieved from the storage array, and it is not registered as a hit in the graph. We do, however, copy that data into flash, so the next time that block is requested it counts as a hit.

Keep in mind that a low hit rate doesn’t necessarily mean you are not getting a performance increase. For example, if you have a workload in Write Back mode and a low hit rate, this could simply mean the workload has a heavy write I/O profile. So even with a low hit rate, all writes are still being accelerated, because every write is acknowledged from the local flash device.
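
A minimal sketch of that accounting, using an invented LRU cache as a stand-in (FVP’s actual caching policy is not public): only reads feed the hit rate, and a read miss populates flash so that a repeat read of the same block counts as a hit, while writes always land on flash but never appear in the metric.

```python
from collections import OrderedDict

class ToyReadCache:
    """Toy read-hit-rate accounting; LRU eviction and capacity are assumptions."""
    def __init__(self, capacity_blocks=4):
        self.cache = OrderedDict()
        self.capacity = capacity_blocks
        self.read_hits = self.read_total = 0

    def read(self, block):
        self.read_total += 1
        if block in self.cache:
            self.read_hits += 1          # served from server-side flash
            self.cache.move_to_end(block)
        else:
            self._insert(block)          # fetched from the array, then cached

    def write(self, block):
        self._insert(block)              # write-back: always on flash, not in the hit rate

    def _insert(self, block):
        self.cache[block] = True
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

    @property
    def hit_rate(self):
        return self.read_hits / self.read_total if self.read_total else 0.0

c = ToyReadCache()
for blk in [1, 2, 1, 3, 1, 2]:           # second and later reads of a block hit
    c.read(blk)
print(f"read hit rate: {c.hit_rate:.0%}")   # 3 of 6 reads were hits -> 50%
```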

What grade of flash to pick for a POC or test environment?



Do I need to buy a specific grade of SSD for my test environment, or can I buy the cheapest SSDs? Do I need enterprise-grade SSDs for my POC? They last longer, but why should I bother for a POC? Do we go for consumer-grade or enterprise-grade flash devices? All valid questions that typically arise after a presentation about PernixData FVP, and I can imagine Duncan and Cormac receive the same ones when talking about VSAN.

Enterprise flash devices are known for their higher endurance, their data protection features and their increased speed compared to consumer-grade flash devices. And although these features are very nice to have, they aren’t the most important ones when testing flash performance.

The most interesting features of enterprise flash devices are wear levelling (to reduce hot spots), spare capacity, write amplification avoidance, garbage collection efficiency and wear-out prediction management. These lead to I/O consistency, and I/O consistency is the Holy Grail for test, POC and production workloads.

Spare capacity
One of the main differentiators of enterprise-grade disks is spare capacity. The controller and the disk use this spare capacity to reduce write amplification. Write amplification occurs when the drive runs out of pages to write data to. In order to write data, the page needs to be in an erased state, meaning that if (stale) data is present in that page, the drive needs to erase it first before writing (fresh) data. The challenge with flash is that the controller can only erase per block, a collection of pages. It may happen that the block contains pages that still hold valid data, which means that data needs to be written somewhere else before the controller can erase the block of pages. That sequence is called write amplification, and it is something you want to keep to a minimum.

To solve this, flash vendors over-provision the device with flash cells; the more technically accurate term is “reduced LBA access”. For example, the Intel DC S3700 flash disk series comes standard with 25-30% more flash capacity. This capacity is assigned to the controller, which uses it to manage background operations such as garbage collection, NAND disturb rules and erase blocks. Now the interesting part is how the controller handles these management operations. Enterprise controllers contain far more advanced algorithms to reduce the wear of blocks by reducing the movement of data, understanding which data is valid and which is stale (TRIM), and remapping logical to physical LBAs quickly and efficiently after moving valid data and erasing the stale data. Please read this article to learn more about write amplification.

Consumer grade flash
Consumer-grade flash devices fall short in these areas. Most of them have TRIM support, but how advanced is that algorithm? Most of them can move data around, but how fast and intelligent is the controller? The biggest question, however, is how many spare pages it has to reduce write amplification. In worst-case scenarios, which usually occur when running tests, the disk is saturated while data keeps pouring in. Typically a consumer-grade drive has about 7% spare capacity, and it will not use all of that space for data movement. Due to the limited space available, the drive allocates new blocks from its spare area first, eventually using up its spare capacity and ending up doing read-modify-write operations. At that point the controller and the device are fully occupied with household chores instead of servicing the infrastructure. It’s almost as if the disk is playing a sliding puzzle.

[Image: sliding puzzle]
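
The “sliding puzzle” effect can be imitated with a toy flash translation layer. The sketch below simulates random overwrites with log-structured writes and greedy garbage collection, then reports the write amplification factor (NAND page writes divided by host page writes) for roughly the 7% spare area of a typical consumer drive versus a 25% spare area in the enterprise range. The block counts, page counts and policies are invented for illustration and are far simpler than real firmware.

```python
import random

def simulate_write_amplification(total_blocks=128, pages_per_block=64,
                                 spare_fraction=0.07, host_writes=100_000):
    """Toy FTL: random single-page overwrites, greedy garbage collection."""
    total_pages = total_blocks * pages_per_block
    user_pages = int(total_pages * (1 - spare_fraction))   # user-addressable LBAs

    lba_loc = {}                                   # LBA -> (block, page)
    owner = {}                                     # (block, page) -> LBA
    valid = [set() for _ in range(total_blocks)]   # valid pages per block
    free = list(range(1, total_blocks))            # erased blocks
    open_blk, next_pg = 0, 0
    nand_writes = 0

    def program(lba):
        nonlocal open_blk, next_pg, nand_writes
        if lba in lba_loc:                         # invalidate the previous copy
            b, p = lba_loc[lba]
            valid[b].discard(p)
        valid[open_blk].add(next_pg)
        lba_loc[lba] = (open_blk, next_pg)
        owner[(open_blk, next_pg)] = lba
        nand_writes += 1
        next_pg += 1
        if next_pg == pages_per_block:             # open block is full
            open_blk, next_pg = free.pop(), 0

    for _ in range(host_writes):
        while len(free) < 2:                       # greedy garbage collection
            victim = min((b for b in range(total_blocks)
                          if b != open_blk and b not in free),
                         key=lambda b: len(valid[b]))
            for p in list(valid[victim]):          # relocate still-valid pages
                program(owner[(victim, p)])
            valid[victim].clear()
            free.append(victim)                    # victim erased and reusable
        program(random.randrange(user_pages))      # random host overwrite

    return nand_writes / host_writes               # write amplification factor

for spare in (0.07, 0.25):                         # ~consumer vs ~enterprise spare area
    wa = simulate_write_amplification(spare_fraction=spare)
    print(f"spare area {spare:.0%}: write amplification ~ {wa:.1f}")
```

With less spare area the controller spends far more page writes shuffling valid data around, which is exactly the background work that makes performance erratic under sustained load.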

Anandtech.com performed similar tests and witnessed similar behaviour; they published their results in the article “Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs”, an excellent read and highly recommended. In their test they used the default spare capacity and ran a number of tests on one of the best consumer-grade SSDs, the Samsung 840 PRO. With a single block size (which is an anomaly compared to real-life workload characteristics), the results are all over the place.

[Figure: Samsung 840 PRO IOPS consistency with default spare capacity]

Seeing a scatter plot with results ranging between 200 and 100,000 IOPS is not a good base platform for understanding and evaluating a new software platform.

The moment they reduced the user-addressable space (reformatting the drive to use less space), performance went up and became far more stable, with almost every result in the 25,000 to 30,000 IOPS range.

[Figure: Samsung 840 PRO IOPS consistency with 25% spare capacity]

Please note that both VSAN and FVP manage the flash device at their own level; you cannot format the disk to create additional spare capacity.

Latency tests show exactly the same behaviour. I’ve tested some enterprise and consumer-grade disks, and the results were interesting to say the least. The consumer-grade drive’s performance charts were not as pretty: the virtual machine running the read test was the only workload hitting the drive, and yet the drive had trouble providing steady response times.

[Figure: consumer-grade SSD latency during the read test]

I swapped the consumer-grade drive for an enterprise disk and ran the same test again; this time the latency was consistent, providing predictable application response times.

[Figure: enterprise SSD latency during the same read test]

Why you want to use enterprise devices
When testing and evaluating new software, or even a new architecture, the last thing you want to do is start an investigation into why performance is so erratic. Is it the software? Is it the disk, the test pattern, or is the application acting weird? You need a stable, consistent and predictable hardware layer to act as the foundation for the new architecture. You need an environment that allows you to baseline the performance of the device, so you can understand the characteristics of the workload, the performance of the software and the overall benefit of this new platform in your architecture.

Enterprise flash devices provide these abilities, and when doing a price comparison between enterprise and consumer grade, the difference is not that extreme. In Europe you can get a 100 GB Intel DC S3700 for 200 euros, and Amazon is offering the 200 GB model for under 500 US dollars. 100 to 200 GB is more than enough for testing purposes.

A picture is worth a thousand words



I love seeing FVP in action with customers. There is no better way to tell – or show – the benefits of our storage acceleration software than through real live examples.

Here is a great FVP screenshot I just came across today from one customer (running a write intensive workload).

In the graph, you see the purple curve which is the latency that the SAN (datastore) is experiencing. There are two things worth highlighting:

(1) Latency is very high – about 18 ms on average

(2) Latency is all over the place. In other words, a user running an application against this SAN will have unpredictable performance.

FVP in Action Lowering Latency


The blue curve is what the VM is experiencing. Initially this blue curve tracks the purple curve, which means the VM latency is moving in accordance with SAN latency – unfortunate, given the issues pointed out above. But check out the end of the graph, where the blue line begins to trend away from the purple curve. This is when Write Back acceleration is enabled in the PernixData FVP solution. Suddenly, we see the FVP magic:

(1) VM latency is way lower – about 1 ms, which is an 18x improvement over the SAN alone

(2) Latency is very predictable, not choppy like it was before. This means predictable application performance, which users love.

This, my friends, is the benefit of a server-side storage intelligence platform with read and write acceleration. This is the value of FVP – all in one simple chart.

Flash in SAN – Panacea or Placebo?



There is no doubt that storage is at a crossroads. It is exciting to see so many new technologies emerging to ensure optimal performance in virtual data centers. And if the investment community is any indication, flash is at the heart of it all.

In many respects, flash is a great answer to the commonly seen storage I/O problem – especially when dealing with low latency applications like SQL and VDI.  But there is a lot of confusion as to the best place to implement flash – in the server, in the array, or in a converged version of the two.  

Storage Switzerland decided to help add some clarity to this issue. They created a new paper, “Choosing the Right Flash Storage Architecture”, that helps you navigate the waters a bit, figuring out whether decoupled or hyperconverged is right for you, and/or whether you need flash in your SAN to solve your application woes.

Regarding this latter topic, we also added our thoughts in the paper “Flash in SAN – Panacea or Placebo?”.

Both are good reads, so enjoy… 

 

Collateral Benefits of an Acceleration Platform



Yesterday I visited a customer to review their experience implementing FVP. They loved the fast response times and the incredible performance that server-side flash brings to the table. Placing flash resources in the host, as close to the application as possible, allows you to speed up the workloads you select; reducing the distance between the application and the storage device lowers latency, and the speed of the flash device delivers great performance. But what is really interesting is the “collateral benefit” that the FVP architecture provides to the rest of the infrastructure.

During the conversation the customer dropped their hero numbers on me. Hero numbers are the historical data points presented by the UI, such as I/Os saved from the datastore and bandwidth saved. We like to call these hero numbers because they indicate the impact on the environment, and they sure were impressive.

In one week’s time FVP accelerated 1.2 billion I/Os in their environment (I/Os saved from the datastore).

[Figure: FVP UI showing 1.2 billion I/Os saved from the datastore in one week]

Please note that these are business workloads, not Iometer tests: I/Os generated by Oracle and MS SQL databases. Three hours later I received a new screenshot; 24 million more I/Os had been saved during that time, indicating an average of 8 million I/Os offloaded per hour.

[Figure: the same FVP UI three hours later, showing 24 million additional I/Os saved]

That is 8 million I/Os per hour served by server flash instead of hitting the array. In total, almost 60 TB of data did not traverse the storage area network, allowing other workloads – virtual machines or physical servers connected to the same array or SAN – to roam freely through the storage network. This reduction of I/O results in lower CPU utilization on the storage controllers, freeing up resources for non-accelerated workloads.

60 TB is the amount of read I/O that FVP kept off the storage area network and the array. When accelerating both reads and writes, we still send the write data to the array, as FVP is not a persistent storage layer (i.e. it does not provide datastore capabilities). When a virtual machine is in write-back mode, FVP tries to destage (write uncommitted data to the storage system) as fast as possible. If the storage system is busy, FVP destages uncommitted data at a rate the primary storage is comfortable receiving. The risk of data loss is averted by storing multiple replicas on other hosts in the FVP cluster, allowing FVP to destage in a more uniform write pattern. Being able to time-release I/Os permits FVP to absorb workload spikes and convert them into a write pattern more aligned with the performance capabilities of the entire storage area network.

I captured a spiky OLTP workload to show this phenomenon. The workload generated 8,800 IOPS (the green line). The flash device absorbed these writes and completed the I/O instantly, allowing the user to keep generating results. The fact that the application exhibits a spiky workload pattern does not mean FVP has to mimic that behavior: the data is stored safely on multiple non-volatile devices, therefore the 8,800 IOPS are sent to the array at a rate that does not overwhelm it. The purple line indicates the number of IOPS sent to the array; the highest value in this example is 3,800 IOPS, 5,000 less than the spike produced by the application.

[Figure: FVP absorbing the OLTP write spike before destaging to the array]

This behavior reduces the continuous stress on the storage area network and the array. It allows customers to get more mileage out of their arrays, as the array can now focus primarily on providing capacity and data services. Once enough data points have accumulated over time, they can be used as input for your next array configuration. This generally results in a design with fewer spindles, bringing advantages such as lower cost, a smaller physical footprint and a reduced thermal signature.

Being able to accelerate both read and write operations goes beyond improving a specific workload; it generates an overall improvement for the entire datacenter architecture.

Data acceleration, more than just a pretty flash device



Sometimes I get the question whether it would make sense to place a flash appliance on the network and use this medium to accelerate data: a pool of flash that serves multiple workloads and can be added to the infrastructure without disrupting anything. Justin Warren, a Storage Field Day delegate, recently came to the same conclusion. In my opinion this construction leads to an inferior point solution that does not allow for full leverage of resources and loses a lot of possibilities to grow into a more evolved architecture that provides performance where it’s needed, when it’s needed. Let’s take a closer look at what role software plays in accelerating data and why you need software to do this at scale.

Accelerating data is more than adding a faster medium to solve your problem. Just adding a raw acceleration medium only pushes out the moment you hit your next performance ceiling. Virtual datacenters are extremely dynamic. Virtualization isn’t about consolidating workloads onto a smaller number of servers any more; it’s rapidly moving towards aligning IT with business strategies. Virtualization allows companies to respond to new demand on the fly, rapidly deploying environments that cater to the wishes of the customer while staying in control of distributing the limited resources that are available.

And in order to do this properly, one needs full control over the resources. To manage and utilize resources as efficiently as possible, you need to control the stack of resources with the same set of controls, the same granularity and preferably within a single pane of glass. Adding resources that require a different set of controls and a different method of management and distribution reduces efficiency and usually increases complexity. Minimizing management time AND reducing human touch points is of the essence. With two distinct systems – one inside the hypervisor kernel and one outside the hypervisor – chances are that they cannot be integrated into a single policy-based management process, meaning manual labor needs to take place, which impacts the overall lead time of deployments and the agility of the services offered. Think availability of human resources, think level of expertise, think permissions and access across multiple systems. Automation and policy-based management can help you avoid all these uncertainties and dependencies and control automation in a more orchestrated fashion. More and more signals from within the industry support an overall openness of APIs and frameworks, but unfortunately the industry is not there yet.

Control, integration and automation rely on a very important element: identity, and in our case VM identity. You cannot distribute resources properly if you don’t know who is requesting the resource. You need to understand who that entity is, what its entitlement to the resource is, and what its relative priority is amongst other workloads. When a workload’s I/O exits the ESXi host, it is usually stripped of all identity and becomes just a random stream of demand and utilization. Many have tried to solve this by carving up resources and dedicating them directly to a higher-level entity; for example, disk groups assigned to a particular cluster, or a VM placed on a separate datastore so it has all the resources available between the host and the datastore at its disposal. In reality this works for a short time, hogs resources, and creates a static entity in an architecture that excels when algorithms are allowed to distribute resources dynamically. In short, it does not scale and typically prohibits a more mature method of IT service delivery.

Therefore it’s key to keep the intelligence as close to the application as possible and harvest the true power of software. Retaining the identity of workloads allows you to distribute resources whenever they are needed, with the correct priority and availability. By using VM identity you can apply your IT services through a set of policies, for example RPO and resource availability, just by selecting the correct availability profile. This is the true power of software: software can utilize the available resources in the most efficient way. I’ve seen it, for example, with FVP F-squared, where the performance of the flash device increased by presenting the workload of the VMs to the flash resource in a better, more intelligent way. Better hardware performance by leveraging VM identity, control of resources and analytics, all done within the same domain of control.

You can find the power of software in other industries as well. If you ever have the chance to talk to a software engineer from a MotoGP racing team, ask him what he can do with software in his controlled environment. By understanding the workload for a particular application (the track), they can control the suspension system, throttle control and engine behavior based on the position of the bike on the track, setting up the bike in the most optimal way for the upcoming corner. And it’s not just any corner: they know exactly which corner is coming and what impact it has on the bike, and they adjust accordingly. Whether they are allowed to use this in a race is a different debate, but it demonstrates the true power of software, workload analytics and identity in a controlled system.

That type of analytics and resource distribution is exactly what you want for your applications, and the best way to get it is to retain VM identity. Use analytics, distributed resource management and advanced QoS to align the availability of high-performance resources with workload demand. Do it in a way that requires a minimal number of clicks to configure and manage, and it is my belief that the only place to do this is within the hypervisor kernel: inside the kernel, where multiple schedulers operate in harmony and understand, retain and respect VM identity while sitting on top of the resources and as close to the workload as possible.

Adding acceleration resources outside the kernel will not give you this ability, and you have to wonder what you actually solve with that particular model. vSphere DRS maintenance mode allows workloads to be migrated seamlessly, transparently and non-disruptively to other hosts in the cluster, without impacting the workload in any form or manner, giving you the ability to install acceleration resources without impacting your IT service level. And if you exercise proper IT hygiene, it is recommended (dare I say best practice) to put a host in maintenance mode before connecting any device to it anyway, resulting in the same host and workload migration behavior.

A new era of server side storage intelligence



Today we are announcing several new and exciting features that establish FVP as the industry’s premier enterprise-class platform for server side storage intelligence.

With these new features, FVP is taking an even bigger role in datacenter design by being media agnostic, workload agnostic, topology aware, and tightly integrated with existing data services.

Satyam Vaghani will be introducing these features (with demos) in his Storage Field Day presentation today at 1 PM PST. (This link will also host recorded versions of his presentation after the event.) In addition, I will publish a collection of articles in the coming weeks covering these features in more detail.

In the meantime, below is a quick preview:

FVP Clustering™ using RAM (i.e. Distributed Fault Tolerant Memory)

This is perhaps the most exciting feature being announced today.  FVP can now turn RAM into a fault tolerant acceleration tier!  For the first time ever, volatile memory can now be used as part of an enterprise class storage architecture, delivering mind-boggling performance in the form of extremely low latency that is predictable and persistent.

The beauty of DFTM is that it leverages the acceleration medium that is connected to the fastest bus inside a computer and is the closest resource to the CPU. No hardware or software reconfiguration is needed. FVP integrates directly into the kernel, aggregating memory into a pool of acceleration resources that provides dynamic resizing without reboots or impact on VM operations. Best of all, there is no need to run and manage a virtual appliance.

Want to increase the pool of resources? Just add more memory.  Want to reduce assigned memory?  FVP lets you do it on the fly.

In reality, you already had the resources available in your servers to get the best storage performance.  However, up until now you didn’t have the software to exploit this raw power.  FVP changes all that, revolutionizing the way you look at your compute layer. With DFTM, you can leverage the full potential of your current datacenter configuration.

Storage protocol agnostic

FVP now supports whatever shared storage is connected to your vSphere environment. This can be block storage (iSCSI, FC, FCoE), local storage, or file (NFS).

NFS, in particular, is a much anticipated feature that will not disappoint. FVP connects to NFS datastores with the same transparency you are already accustomed to with block storage. With fully VMkernel-integrated functionality, you can attach an NFS datastore without modifying or rebooting virtual machines, changing your hosts, or reconfiguring your storage medium (e.g. IP addresses, VLANs, mount points, etc.).

Network compression

I am often asked how the underlying network impacts the performance of the PernixData solution. Well, PernixData has integrated compression functionality into FVP software to make network performance a non-issue. FVP monitors the workload of the virtual machine in real time, compressing network traffic adaptively. More specifically, it looks at the I/O size and uses an advanced cost-benefit analysis algorithm to determine whether compressing data will reduce replication traffic while ensuring low CPU overhead. With FVP’s compression capabilities, the write data being replicated between hosts can be vastly reduced, allowing more data to be sent over the same network connection. In addition, the IOPS achieved when replicating data remain consistent with the performance achieved when replication is not enabled (see the screenshots below).

Write Back + 0 replicas (local host only)

Write Back + 1 replica over 1 Gbps Ethernet (uncompressed)

Write Back + 1 replica over 1 Gbps Ethernet (compressed)
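
The cost-benefit idea can be sketched in a few lines: compress a replica only if the payload is large enough and a quick trial compression saves enough bytes within a CPU budget. The thresholds and the use of zlib here are invented for illustration; FVP’s actual algorithm is not public.

```python
import os, time, zlib

def should_compress(payload: bytes, min_size=16_384, min_ratio=1.3, cpu_budget_us=500):
    """Toy cost/benefit check for replication traffic; thresholds are assumptions."""
    if len(payload) < min_size:                    # small I/Os: overhead dominates
        return False, payload
    start = time.perf_counter()
    candidate = zlib.compress(payload, 1)          # fast, low-CPU compression level
    spent_us = (time.perf_counter() - start) * 1e6
    ratio = len(payload) / len(candidate)
    if ratio >= min_ratio and spent_us <= cpu_budget_us:
        return True, candidate                     # worth sending compressed
    return False, payload                          # send as-is

# Compressible database-style pages benefit; already-random data does not.
for name, block in (("zero-filled 64K write", b"\x00" * 65_536),
                    ("random 64K write", os.urandom(65_536))):
    compressed, data = should_compress(block)
    print(f"{name}: compress={compressed}, bytes on the wire={len(data)}")
```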

Topology aware FVP via User Defined Replica groups

FVP now allows users to define replica groups, providing a topology aware replication design for accurate fault tolerance. By choosing where replica data is stored, FVP’s fault tolerance capabilities can be aligned with assigned failure domains in your virtual datacenter design.  Replica groups can be used to indicate the boundary of a blade enclosure or different physical sites, for example, allowing you to protect against failure scenarios. Another great use case is to configure FVP to keep the data inside a failure domain (for instance a blade enclosure) to keep the latency as low as possible.

Following FVP’s operational simplicity model, once you assign hosts to a replica group, FVP will automatically select an appropriate replica host to receive write data. When the replica host or the network connectivity fails, FVP automatically assigns another host from the same replica group to become the new destination replica host.
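
As a minimal illustration of that selection and failover logic (host names, group names and the random choice are all made up; FVP’s real placement logic is not public):

```python
import random

# Hypothetical topology: replica groups mirror failure domains such as blade enclosures.
replica_groups = {
    "enclosure-A": ["esx01", "esx02", "esx03"],
    "enclosure-B": ["esx04", "esx05", "esx06"],
}

def pick_replica_host(source_host, group, failed=frozenset()):
    """Pick a peer in the same replica group to receive the write replica,
    skipping the source host itself and any hosts marked as failed."""
    candidates = [h for h in replica_groups[group]
                  if h != source_host and h not in failed]
    if not candidates:
        raise RuntimeError(f"no healthy replica host left in group {group}")
    return random.choice(candidates)

# Normal operation: esx01 replicates within its own enclosure.
peer = pick_replica_host("esx01", "enclosure-A")
# If that peer fails, another host from the same replica group takes over.
fallback = pick_replica_host("esx01", "enclosure-A", failed={peer})
print(f"replica host: {peer}, failover replica host: {fallback}")
```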

I am very excited about these new features, and look forward to explaining them in more detail in the coming weeks.  In the meantime, don’t forget to check out Satyam’s presentation and demos here.