Interesting DataCenter Acceleration Post: http://www.datacenteracceleration.com/author.asp?section_id=2412&doc_id=252221
Storage virtualization appears to be a good way to give key apps greater performance.
The good thing about the computer industry is that every so often, the stars align and the opportunity arises for an explosion of new products in a particular sector. Over the past decade, for instance, all sorts of new data-storage schemes have come into being, each with its own special function or strength...
...The result for many IT operations has been a landscape of disparate storage solutions, not all of which are on such good speaking terms with each other. Can you say, proprietary stacks? Heterogeneity?
And right now, a whole new tier of storage is coming into play: flash-based solid-state drives (SSDs), which add still more complexity and variety to the mix.
Fortunately, there's a good solution at hand, a way to effectively pave over many of the differences between disparate storage products and create what amounts to a unified architecture. It's virtualization, the same idea that has been playing out so well in the server world, making it possible to decouple applications and their software stacks from underlying hardware.
Or, put another way, virtualization can effectively hide and insulate applications from many if not all complexities, peculiarities, and incompatibilities by using software to abstract a set of resources and create the illusion of a single resource. That can bring down operational costs.
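To make that "illusion of a single resource" concrete, here's a minimal sketch (my own toy example, not any vendor's implementation) of a virtualization layer that concatenates two dissimilar backends into one logical block address space; the class and device names are invented for illustration.

```python
# Toy sketch: a thin virtualization layer that presents two disparate
# storage backends to applications as a single logical volume.

class Backend:
    """A hypothetical storage device addressed by local block number."""
    def __init__(self, name, num_blocks):
        self.name = name
        self.blocks = [None] * num_blocks

class VirtualVolume:
    """Presents several backends as one contiguous logical address space."""
    def __init__(self, backends):
        self.backends = backends

    def _locate(self, lba):
        # Map a logical block address onto (backend, local block number).
        for be in self.backends:
            if lba < len(be.blocks):
                return be, lba
            lba -= len(be.blocks)
        raise IndexError("logical block out of range")

    def write(self, lba, data):
        be, local = self._locate(lba)
        be.blocks[local] = data

    def read(self, lba):
        be, local = self._locate(lba)
        return be.blocks[local]

# The application sees one 300-block volume, not a 100-block SSD and a
# 200-block array from different vendors.
vol = VirtualVolume([Backend("fast-ssd", 100), Backend("sata-array", 200)])
vol.write(150, b"payload")   # lands on the second backend transparently
assert vol.read(150) == b"payload"
```

The application never learns which device actually holds block 150, which is exactly the insulation the paragraph above describes.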
One of the many companies pushing the idea of cross-vendor storage virtualization has been DataCore Software, which sells to midsized and large enterprises running Windows Server. And lately, it has been promoting storage virtualization as a way to boost performance, too, especially of apps that have been virtualized on the server.
Company COO Steve Houck explains the problem: Once a bunch of apps have been virtualized and made to share a physical host under control of a hypervisor, their varying I/O patterns may easily conflict with each other. One app's fairly random traffic spikes can easily disrupt another's more regular I/Os, for instance. Indeed, it's widely accepted that this I/O problem is one of the most difficult hurdles facing newcomers to server virtualization.
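This "noisy neighbor" effect is easy to demonstrate with a back-of-the-envelope simulation (entirely my own invention; the workload shapes and service times are made up for the demo): a steady app and a bursty app share one physical disk queue, and the bigger the neighbor's bursts, the longer the steady app waits.

```python
# Toy simulation of two virtualized apps contending for one disk queue.

def simulate(burst_size):
    """Serve a steady app and a bursty app from one FIFO disk queue;
    return the steady app's average wait, in ticks."""
    queue = []                       # the shared physical-disk queue
    waits = []
    for t in range(100):
        queue.append(("steady", t))  # one regular request per tick
        if t % 10 == 0:              # occasional burst from the neighbor
            queue += [("bursty", t)] * burst_size
        served = queue.pop(0)        # the disk serves one request per tick
        if served[0] == "steady":
            waits.append(t - served[1])
    return sum(waits) / len(waits)

# Alone, the steady app never waits; behind a bursty neighbor, it queues up.
assert simulate(burst_size=0) == 0.0
assert simulate(burst_size=5) > simulate(burst_size=0)
```

Crude as it is, the model captures the complaint Houck describes: the steady workload's latency is degraded by traffic it has no control over.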
Once storage resources themselves are virtualized, however, the movement and caching of data can be better managed and adjusted on the fly to meet the changing needs of specific apps and maintain or even boost their performance, Houck tells us. In fact, a storage hypervisor like DataCore's can work hand-in-hand with the server hypervisor -- supplied by VMware, for instance -- to resize caches in its own RAM, watch I/O patterns with an eye to identifying most-used data for caching closer to or even inside the server, and quickly redirect I/Os away from defective storage devices.
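Two of those ideas -- caching the most-used blocks in RAM, and redirecting I/O away from a failed device to its mirror -- can be sketched in a few lines. This is a hedged toy model with invented names, not DataCore's actual implementation:

```python
# Toy storage layer: hot-block RAM caching plus mirror failover.

from collections import Counter

class MirroredStore:
    def __init__(self, cache_slots=2):
        self.primary = {}            # block -> data on device A
        self.mirror = {}             # synchronous copy on device B
        self.primary_failed = False
        self.cache = {}              # in-RAM cache of hot blocks
        self.cache_slots = cache_slots
        self.hits = Counter()        # per-block access counts

    def write(self, block, data):
        self.primary[block] = data   # mirror every write to both devices
        self.mirror[block] = data
        self.cache.pop(block, None)  # don't let the cache go stale

    def read(self, block):
        self.hits[block] += 1
        if block in self.cache:      # served from RAM, no disk I/O at all
            return self.cache[block]
        store = self.mirror if self.primary_failed else self.primary
        data = store[block]
        # Keep the hottest blocks resident, evicting the coldest.
        if len(self.cache) < self.cache_slots:
            self.cache[block] = data
        else:
            coldest = min(self.cache, key=lambda b: self.hits[b])
            if self.hits[block] > self.hits[coldest]:
                del self.cache[coldest]
                self.cache[block] = data
        return data

s = MirroredStore()
s.write(1, "hot"); s.write(2, "cold")
for _ in range(3):
    s.read(1)                        # block 1 becomes cache-resident
s.primary_failed = True              # device A dies...
assert s.read(2) == "cold"           # ...reads transparently hit the mirror
```

Because the app only ever calls `read` and `write`, both the caching decision and the failover are invisible to it -- the same decoupling argument made for server virtualization above.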
Typically, DataCore's hypervisor runs on a pair of Windows-based servers situated between the enterprise's compute servers and its physical storage devices. These servers manage their own in-memory data caches while also overseeing data mirroring, backups, and other operational chores.
According to Enterprise Strategy Group, an IT consulting outfit, benchmark tests (performed on behalf of DataCore) showed Microsoft SQL Server and Exchange workloads enjoying a nearly fivefold improvement in performance when storage was managed by DataCore's SANSymphony-V product.
Numbers like that -- and I imagine DataCore's not alone in achieving such performance gains through storage virtualization -- are hard to ignore. This clearly is a technology that any enterprise should investigate if it's truly interested in accelerating the performance of critical applications.