One of the magical things about virtualization is that it’s really a kind of invisibility cloak. Each virtualization layer hides the details of those beneath it. The result is much more efficient access to lower-level resources. Server virtualization has demonstrated this powerfully. Applications don’t need to know about CPU, memory, and other server details to enjoy access to the resources they need.
Unfortunately, this invisibility tends to get a bit patchy when you move down into the storage infrastructure underneath all those virtualized servers, especially when considering performance management. In theory, storage virtualization ought to be able to hide the details of media, protocols, and paths involved in managing the performance of a virtualized storage infrastructure. In reality the machinery still tends to clank away in plain sight.
The problem is not storage virtualization per se, which can boost storage performance in a number of ways. The problem is a balkanized storage infrastructure, where virtualization is supplied by hardware controllers associated with each “chunk” of storage (e.g., array). This means that the top storage virtualization layer is human: the hard-pressed IT personnel who have to make it all work together.
Many IT departments accept the devil’s bargain of vendor lock-in to try to avoid this. But even if you commit your storage fortunes to a single vendor, the pace of innovation guarantees the presence of end-of-life devices that don’t support the latest performance management features. And the expense of this approach puts it beyond the reach of most companies, which can’t afford a forklift upgrade to a single-vendor storage infrastructure and have to deal with the real-world mix of storage devices that results from keeping up with innovation and competitive pressures.
That’s why many companies are turning to storage hypervisors, which, like server hypervisors, are not tied to a particular vendor’s hardware. A storage hypervisor like DataCore’s SANsymphony-V throws the invisibility cloak over the details of all of your storage assets, from the latest high-performance SAN to SATA disks orphaned by the consolidation of a virtualization initiative. Instead of trying to match a bunch of disparate storage devices to the needs of different applications, you can combine devices with similar performance into easily provisioned and managed virtual storage pools that hide all the unnecessary details. And, since you’re not tied to a single vendor, you can look for the best deals in storage, and keep using old storage longer.
SANsymphony-V helps you boost storage performance in three ways: through caching, tiering, and path management.
Caching. SANsymphony-V can use up to a terabyte of RAM on the server that hosts it as a cache for all the storage assets it virtualizes. Advanced write-coalescing and read pre-fetching algorithms deliver significantly faster I/O response: up to 10 times faster than the average 200-300 microsecond response time delivered by typical high-end cached storage arrays, and orders of magnitude faster than the 6,000-8,000 microseconds it takes to read from and write to the physical disks themselves.
SANsymphony-V also uses its cache to compensate for the widely different traffic levels and peak loads found in virtualized server environments. It smooths out traffic surges and better balances workloads so that applications and users can work more efficiently. In general, you’ll see a 2X or better improvement in the performance of the underlying storage from a storage hypervisor such as DataCore SANsymphony-V.
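To make the caching idea concrete, here is a minimal sketch in Python of the two techniques described above: coalescing adjacent writes before they are flushed to disk, and pre-fetching the blocks that follow a read on the assumption of sequential access. The class and method names (`BlockCache`, `backend.read_block`, `backend.write_blocks`) are hypothetical illustrations, not DataCore’s implementation.

```python
from collections import OrderedDict

class BlockCache:
    """Illustrative RAM cache: coalesces writes and pre-fetches reads.
    (Hypothetical sketch; not DataCore's actual algorithm.)"""

    def __init__(self, backend, prefetch_depth=8):
        self.backend = backend            # object exposing read_block/write_blocks
        self.dirty = {}                   # block number -> pending write data
        self.read_cache = OrderedDict()   # recently read blocks kept in RAM
        self.prefetch_depth = prefetch_depth

    def write(self, block_no, data):
        # Write-coalescing: absorb the write in RAM; a later flush sends
        # contiguous runs of dirty blocks to disk in one operation.
        self.dirty[block_no] = data

    def flush(self):
        # Group dirty blocks into contiguous runs so the backend sees a few
        # large sequential writes instead of many small random ones.
        for run in self._contiguous_runs(sorted(self.dirty)):
            self.backend.write_blocks(run[0], [self.dirty[b] for b in run])
        self.dirty.clear()

    def read(self, block_no):
        if block_no in self.dirty:
            return self.dirty[block_no]   # hit on a not-yet-flushed write
        if block_no not in self.read_cache:
            # Read pre-fetch: assume sequential access and pull in the
            # following blocks so the next reads are served from RAM.
            for b in range(block_no, block_no + self.prefetch_depth):
                self.read_cache[b] = self.backend.read_block(b)
        return self.read_cache[block_no]

    @staticmethod
    def _contiguous_runs(blocks):
        run = []
        for b in blocks:
            if run and b != run[-1] + 1:
                yield run
                run = []
            run.append(b)
        if run:
            yield run
```

The point of the sketch is simply that RAM absorbs the bursty, small, random traffic and hands the physical devices the larger, sequential operations they handle best.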
Tiering. You can also improve performance with data tiering. SANsymphony-V discovers all your storage assets, manages them as a common pool of storage, and continually monitors their performance. The auto-tiering technology migrates the most frequently used data, which generally needs higher performance, onto the fastest devices. Likewise, less frequently used data typically gets demoted to higher-capacity but lower-performance devices.
Auto-tiering uses tiering profiles that dictate both the initial allocation and subsequent migration dynamics. A user can go with the standard set of default profiles or can create custom profiles for specific application access patterns. However you do it, you get the performance and capacity utilization benefits of tiering from all your devices, regardless of manufacturer.
As new generations of devices appear, such as solid-state disks (SSDs), flash memory, and very-large-capacity disks, these faster or higher-capacity devices can simply be added to the available pool of storage and assigned a tier. As devices age, they can be reset to a lower tier, often extending their useful life. Many users want to deploy SSD technology to gain performance, but given its high cost and limited write endurance, auto-tiering and caching architectures are needed to use it efficiently. DataCore’s storage hypervisor can absorb a good deal of the write traffic, extending the useful life of SSDs, and with auto-tiering only the data that actually needs the high-speed SSD tier is directed there. With the storage hypervisor in place, the system self-tunes and optimizes the use of all the storage devices. Customers have reported savings upwards of 20% from device-independent tiering alone; combined with the other benefits of a storage hypervisor, savings of 60% or more are being achieved.
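As a rough illustration of the auto-tiering idea, the sketch below promotes or demotes extents based on how often they were accessed in the last monitoring interval. The `Extent` structure, the threshold numbers, and the function names are assumptions made for the example; real tiering profiles are considerably richer.

```python
from dataclasses import dataclass

# Hypothetical tiering profile: access counts per monitoring interval above
# which an extent belongs in a given tier. Tier 0 = fastest (e.g. SSD).
TIER_THRESHOLDS = [1000, 100, 0]   # tier 0, tier 1, tier 2

@dataclass
class Extent:
    extent_id: int
    tier: int
    recent_accesses: int   # gathered by continuous performance monitoring

def target_tier(extent, thresholds=TIER_THRESHOLDS):
    """Pick the highest (fastest) tier whose activity threshold the extent meets."""
    for tier, threshold in enumerate(thresholds):
        if extent.recent_accesses >= threshold:
            return tier
    return len(thresholds) - 1

def rebalance(extents):
    """Yield (extent, new_tier) for every extent that should migrate."""
    for ext in extents:
        new_tier = target_tier(ext)
        if new_tier != ext.tier:
            yield ext, new_tier   # hot data promoted, cold data demoted

# Example: a busy extent on a slow tier is promoted; an idle one is demoted.
hot = Extent(extent_id=42, tier=2, recent_accesses=5000)
cold = Extent(extent_id=7, tier=0, recent_accesses=3)
for extent, new_tier in rebalance([hot, cold]):
    print(f"extent {extent.extent_id}: tier {extent.tier} -> tier {new_tier}")
```

Because the decision is driven by observed activity rather than by which vendor built the device, any disk added to the pool, new or old, simply becomes another tier candidate.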
Path management. Finally, SANsymphony-V also greatly reduces the complexity of path management. The software auto-discovers the connections between storage devices and the server(s) it’s running on, then monitors queue depth to detect congestion and routes I/O in a balanced way across all available paths to the storage in a given virtual pool. Plenty of reporting and monitoring data is available, but for the most part, once it’s set up you can let it run itself and get a level of performance across disparate, multi-vendor devices that would otherwise take considerable time and expertise to achieve.
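The balancing logic can be pictured roughly as follows: track the outstanding I/Os (queue depth) on each discovered path and send the next request down the least-loaded one. The `Path` and `PathBalancer` names and the example numbers are illustrative assumptions, not the product’s actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    queue_depth: int = 0    # outstanding I/Os currently in flight on this path

@dataclass
class PathBalancer:
    """Illustrative least-queue-depth path selection (hypothetical sketch)."""
    paths: list = field(default_factory=list)

    def pick(self):
        # Route the next I/O down the path with the fewest outstanding
        # requests, which naturally steers traffic away from congestion.
        return min(self.paths, key=lambda p: p.queue_depth)

    def submit(self, io):
        path = self.pick()
        path.queue_depth += 1   # issue the I/O on the chosen path
        return path

    def complete(self, path):
        path.queue_depth -= 1   # the I/O finished; the path is less busy

# Example: two paths to the same pool; traffic flows to the less busy one.
balancer = PathBalancer([Path("fc-port-1", queue_depth=12),
                         Path("fc-port-2", queue_depth=3)])
print(balancer.submit("read block 10").name)   # -> fc-port-2
```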
If you would like an in-depth look at how a storage hypervisor can boost storage performance, be sure to check out the third of Jon Toigo’s Storage Virtualization for Rock Stars white papers: Storage in the Fast Lane—Achieving “Off-the-Charts” Performance Management.
Next time I’ll look at how a storage hypervisor boosts storage data security management.