Saturday, 27 August 2011

What the Heck Is a “Storage Hypervisor?”

Our friends at DataCore issued a press release yesterday positioning the new release (v8.1) of SANsymphony-V as a “storage hypervisor.” On the surface, that may just sound like nice marketing spin, but the more I thought about it, the more sense it made, because it highlights one of the major differences between DataCore’s products and most other SAN products out there.

To understand what I mean, let’s think for a moment about what a “hypervisor” is in the server virtualization world. Whether you’re talking about vSphere, Hyper-V, or XenServer, you’re talking about software that provides an abstraction layer between hardware resources and operating system instances. An individual VM doesn’t know – or care – whether it’s running on an HP server, a Dell, an IBM, or a “white box.” It doesn’t care whether it’s running on an Intel or an AMD processor. You can move a VM from one host to another without worrying about changes in the underlying hardware, BIOS, drivers, etc. (I’m not talking about “live migration” here – that’s a little different.) The hypervisor presents the VM with a consistent execution platform that hides the underlying complexity of the hardware.

So, back to DataCore. Remember that SANsymphony-V is a software application that runs on top of Windows Server 2008 R2. In most cases, people buy a couple of servers loaded with local storage, install 2008 R2 and SANsymphony-V on them, and turn that local storage into full-featured iSCSI SAN nodes. (We typically run them in pairs so that we can synchronously mirror the data across the two nodes; if one node completely fails, the data is still accessible.) But that’s not all we can do.
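To make the mirroring idea concrete, here’s a minimal Python sketch – purely illustrative, and nothing like DataCore’s actual implementation – of what synchronous mirroring means: a write is acknowledged only after both nodes hold the data, so either node can serve reads if the other fails.

```python
# Illustrative sketch (NOT DataCore code): synchronous mirroring
# acknowledges a write only after BOTH nodes have stored it, so either
# node can serve the data if its partner fails.

class MirroredPair:
    def __init__(self):
        self.node_a = {}  # block address -> data on node A
        self.node_b = {}  # block address -> data on node B

    def write(self, block, data):
        # Persist to both nodes before acknowledging the write.
        self.node_a[block] = data
        self.node_b[block] = data
        return "ack"  # caller sees success only once both copies exist

    def read(self, block):
        # Either node can satisfy the read; fall back if one is down.
        if block in self.node_a:
            return self.node_a[block]
        return self.node_b[block]

pair = MirroredPair()
pair.write(0, b"payload")
del pair.node_a[0]                 # simulate losing node A entirely
assert pair.read(0) == b"payload"  # data survives on node B
```

The essential point is the ordering: the acknowledgment comes after both copies are durable, which is what distinguishes synchronous mirroring from asynchronous replication.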

Because it’s running on a 2008 R2 platform, it can aggregate and present any kind of storage the underlying server OS can access at the block level. Got a Fibre Channel SAN that you want to throw into the mix? Great! Put Fibre Channel host bus adapters (HBAs) in your DataCore nodes, present that storage to the servers that SANsymphony-V is running on, and now you can manage the Fibre Channel storage right along with the local storage in your DataCore nodes. Got some other iSCSI SAN that you’d like to leverage? No problem. Just make sure you’ve got a couple of extra NICs in the DataCore nodes (or install iSCSI HBAs if you want even better performance), present that iSCSI storage to the DataCore nodes, and you can manage it as well. You can even create a storage pool that crosses resource boundaries! And now, with the new auto-tiering functionality of SANsymphony-V v8.1, you can let DataCore automatically migrate the most frequently accessed data to the highest-performing storage subsystems.
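As an illustration of the tiering idea – a hypothetical sketch, not SANsymphony-V’s actual algorithm – a frequency-based tiering pass might rank blocks by access count and promote the hottest ones to whatever fast tier (SSD, high-end SAN) has room:

```python
# Illustrative sketch (hypothetical, not SANsymphony-V code): one
# auto-tiering pass promotes the most frequently accessed blocks to
# the fastest tier and demotes the rest.

def retier(access_counts, fast_capacity):
    """Return (fast_tier, slow_tier) block sets, given per-block access
    counts and how many blocks the fast tier can hold."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    fast = set(ranked[:fast_capacity])   # hottest blocks -> SSD / fast SAN
    slow = set(ranked[fast_capacity:])   # everything else -> SATA tier
    return fast, slow

counts = {"a": 120, "b": 5, "c": 87, "d": 2}  # accesses per block
fast, slow = retier(counts, fast_capacity=2)
assert fast == {"a", "c"}
assert slow == {"b", "d"}
```

A real product would of course migrate data incrementally and hysteretically rather than re-sorting everything each pass, but the ranking-and-promotion idea is the heart of it.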

Or how about this: You just bought a brand new storage system from Vendor A to replace the system from Vendor B that you’ve been using for the past few years. You’d really like to move Vendor B’s system to your disaster-recovery site, but Vendor A’s product doesn’t know how to replicate data to Vendor B’s product. If you front-end both vendors’ products with DataCore nodes, the DataCore nodes can handle the asynchronous replication to your DR site. Alternatively, maybe you bought Vendor A’s system because it offered higher performance than Vendor B’s system. Instead of using Vendor B’s product for DR, you can present both systems to SANsymphony-V and leverage its auto-tiering feature to automatically ensure that the data that needs the highest performance gets migrated to Vendor A’s platform.

So, on the back end, you can have disparate SAN products (iSCSI, Fibre Channel, or both) and local storage (including “JBOD” expansion shelves), with a mixture of SSD, SAS, and SATA drives. The SANsymphony-V software masks all of that complexity and presents a consistent resource – in the form of iSCSI virtual volumes – to the systems that need to consume storage, e.g., physical or virtual servers.
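A rough sketch of that pooling abstraction, with hypothetical names: consumers ask the pool for a volume of a given size, and never learn which backends supply the underlying extents.

```python
# Illustrative sketch (hypothetical names, not a real product API): a
# pool hides which backend holds each extent; consumers see only a
# uniform virtual volume.

class StoragePool:
    def __init__(self, backends):
        # backends: name -> free capacity in GB. FC SAN, local disk,
        # another iSCSI SAN -- the consumer never sees any of this.
        self.backends = backends

    def allocate(self, size_gb):
        """Carve a virtual volume from whichever backends have room."""
        allocation = {}
        remaining = size_gb
        for name, free in self.backends.items():
            take = min(free, remaining)
            if take:
                allocation[name] = take
                self.backends[name] -= take
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        return allocation  # one volume may span several backends

pool = StoragePool({"fc_san": 100, "local_sas": 50, "iscsi_b": 200})
vol = pool.allocate(120)          # spans the FC SAN and local disk
assert sum(vol.values()) == 120
```

Note how the 120 GB volume silently spans two different backends; that is exactly the “pool that crosses resource boundaries” described above.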

That really is analogous to what a traditional hypervisor does in the server virtualization world. So it is not unreasonable at all to call SANsymphony-V a “storage hypervisor.” In fact, it’s pretty darned clever positioning, and I take my hat off to the person who crafted the campaign.

DataCore Announces SANsymphony-V Storage Hypervisor

Virtual Strategy Magazine: DataCore Introduces SANsymphony-V Storage Hypervisor

“The sprawl and multiplication of storage systems and the rising number of specialty devices are now the norm. The intelligence of a storage hypervisor provides a strategic advantage to managing the many as one,” said George Teixeira, president and CEO of DataCore Software. “SANsymphony-V enables users to take control of how their storage infrastructure evolves versus being subject to the dictates of tactical point-in-time decisions.”

“DataCore’s storage hypervisor significantly improves system performance and extends the life of our existing storage investments. The infrastructure-wide features fill in for missing device level functionality while efficiently automating and simplifying administration,” said Dave Patel, chief operating officer, CompuTech City. “From the standpoint of future hardware expansion and refreshes, DataCore gives us a powerful bargaining chip to shop for the best value without being locked in by choices we made in the past.”

“Server hypervisors from the likes of VMware and Microsoft have changed the game for users in their selection and exploitation of computing platforms by making them simultaneously more flexible and practically interchangeable. DataCore’s SANsymphony-V storage hypervisor brings those same advantages to storage infrastructures by leveraging the full potential of – and often adding extra capabilities to – the collective strength of existing storage devices, without relying on some unlikely level of co-operation among the storage vendors,” said Mark Peters, senior analyst, Enterprise Strategy Group.

DataCore Adds Storage Hypervisor to Speed Storage Virtualization

DataCore Software, which specializes in storage virtualization and iSCSI SAN software, has launched the DataCore Storage Hypervisor. The move builds upon DataCore’s auto-tiering offering, according to President, CEO and Co-Founder George Teixeira. DataCore says channel partners are embracing the company’s software to (1) head off storage-related obstacles that stall virtualization deals, (2) speed adoption and standardization of virtual infrastructure and (3) increase share of wallet across storage and virtualization.

In this latest move, DataCore is “bringing out more openness in auto-tiering and its ability to integrate with other features,” said Teixeira. He said the DataCore Storage Hypervisor’s greatest feature is that it can “play across many different platforms,” because today’s data centers require specialized equipment to meet the different nuances of various customer workloads.

“From an MSP standpoint, they have two kinds of storage cases,” added DataCore Software Director of Product and Channel Marketing Augie Gonzalez. “They either buy the same type of storage and have multiples of it, or they have different storage levels because they support a lot of different systems. People are throwing purpose-built storage solutions at the problem.” Gonzalez claims the DataCore Storage Hypervisor can support both scenarios.

Among the challenges facing DataCore: Making sure the company’s storage virtualization strategy stands apart from all the different storage and virtualization claims floating around the IT industry. “The term [storage virtualization] has become so diluted. We are trying to make it clearer,” Teixeira said. Both Teixeira and Gonzalez said DataCore’s definition is more in line with Microsoft, Citrix Systems and VMware, meaning a rich set of features across many platforms.

DataCore has made multiple moves in recent months. In late July 2011, the company was promoting a virtualized storage solution for its SMB and MSP clients. A week later DataCore launched an auto-tiering solution for MSPs, a technology normally reserved for high-end storage needs.

DataCore Introduces SANsymphony-V Storage Hypervisor

DataCore Software, the industry’s premier provider of storage virtualization software, announced today major enhancements to its centrally-managed SANsymphony™-V solution. Combined, these new features elevate SANsymphony-V to the role of storage hypervisor, placing customers in the unique position to fully leverage their existing storage assets, including direct-attached storage (DAS), storage area networks (SAN) and solid state disks (SSD), and negotiate the best deals among competing storage manufacturers without concern for long-standing hardware vendor lock-ins.

More Coverage on SANsymphony-V Storage Hypervisor announcement:

IT Briefing: DataCore introduces SANsymphony-V storage hypervisor

Computer Technology Review: DataCore announces SANsymphony-V storage hypervisor

eChannelLine: DataCore introduces SANsymphony-V storage hypervisor

TechnologyShore: DataCore Adds Storage Hypervisor to Speed Storage Virtualization

Data Storage Connection: DataCore announces Storage Hypervisor

Wednesday, 24 August 2011

Scottish Agricultural College selects DataCore SANsymphony-V and Microsoft for Unified Storage; Adds Performance and HA to NAS Clustered File Sharing

Virtualization World 365 - Top Story: Scottish Agricultural College selects DataCore SANsymphony-V and Microsoft for Unified Storage

Scotland’s leading agricultural college announces it has settled on SANsymphony-V to boost performance and safeguard the clustered NAS file sharing capabilities built into its Microsoft Windows Server 2008 R2 platform.

Scottish Agricultural College selects DataCore

DataCore Software says that its SANsymphony™-V software has been deployed at the Scottish Agricultural College (SAC) to fully utilize the clustered file shares and network-attached storage (NAS) capabilities included within its Microsoft Windows Server 2008 R2 platform.

The next generation of DataCore’s software, SANsymphony-V, was recommended and configured by a leading Scottish solution provider, Tecnica Ltd. This DataCore authorized partner designed the system to manage both NAS file sharing and SAN or direct-attached storage (DAS). The system now spans three nodes: one within SAC’s main Edinburgh campus and the other two some 8 miles away within the College’s research and data center campus. Though operational for less than a month, the integrated combination of DataCore and Microsoft has already decreased the College’s reliance on costly hardware solutions and added a new level of fault tolerance and data protection to its clustered NFS/CIFS file sharing.

The key driver for change came in autumn 2010, when the College’s existing environment reached end of life. The College decided to move from a complex and costly Sun infrastructure to a Microsoft Windows based platform, while consolidating storage and eliminating bottlenecks and disruptions. It commenced the search for alternatives, and Tecnica Ltd responded with a DataCore software based solution that significantly lowered costs and ongoing capital outlays, stressing the importance of device independence through a software based solution:

“The DataCore unified storage software layer provides the performance and functionality we require to effectively manage our current and future data requirements and has removed complexity, cost and support skills exposure from our storage infrastructure” - Ronnie McIntyre, IS Infrastructure & Operations Manager, SAC

“Some of the features that attracted us to DataCore’s SANsymphony-V were the transparency and ease of management which it offered along with the flexibility of design configuration to allow us to build a solution that would match our performance and resilience requirements. As with many organizations we required an agile solution to be able to meet the demands of the business whether generated through strategic / legislative requirements or market opportunities.” - Peter Gowler, Infrastructure Systems Architect, SAC

Unified Storage with No Costly Specialty NAS Hardware Required:

Keith Joseph, Regional Manager, DataCore Software, commented, “What the SAC IT team, working closely with Tecnica have achieved here is what data center managers have long been seeking – a simple, cost effective way to easily manage their disk resources and address potential single points of failures in their vital NFS/CIFS files shares. Previously, SAC’s choice would have been restricted to either picking from a few specialty devices that provided the needed redundancy at a steep price, or take the risk that a storage-related disruption would cripple their entire infrastructure.”

DataCore’s design at SAC employs dual copies of the SANsymphony-V software, providing high-availability mirroring and performance acceleration layered beneath the clustered file shares (NAS) integral to Windows Server 2008 Enterprise. SAC can now employ and reutilize any standard storage device for disk space, ranging from standard hard drives to existing external disk arrays from the popular storage systems vendors. With the solution, the file share cluster remains split across two machines, while the DataCore software runs on two separate servers dedicated to managing the block storage virtualization.

Friday, 19 August 2011

New DataCore White Paper: Automated Storage Tiering

Like they say in the housing market, the top three factors that determine price are location, location, and location. The same can be said for storage real estate. Poor choices in data placement will cost you dearly. Unlike home-buying habits, user access patterns change very frequently. Content that was driving interest one day becomes old news the next. There’s no way you could possibly take the time to shuffle information around to keep up with users’ fickle ways. That’s where insider knowledge and a little automation come in handy.

Read on to learn more about how to save space and money.

AWO Workers' Welfare Association has Implemented a Flexible and Scalable IT Infrastructure with Virtualization Software from DataCore Software and Citrix

“Conventional SAN hardware without DataCore storage virtualization would not have offered us the necessary scalability and flexibility,” states Karsten Frommolt, the IT manager at AWO Düsseldorf. “With DataCore, we were able to resolve our roadblocks in regards to performance bottlenecks, easily providing capacity and reducing costs. We needed to cost-effectively build a redundant and therefore highly available storage environment within our budget constraints. DataCore Software convinced us with the flexibility and scalability of its software solution. It also gave us independence from hardware manufacturers so we have greater control going forward.”

AWO Düsseldorf Case Study

Wednesday, 10 August 2011

DataCore Software Customers Showcase Why Software-Based Solutions Can Lower Total Cost of Storage and Empower the Full Business Value of Virtualization

DataCore Software today showcased customer examples and announced the availability of a new white paper titled “Achieving the Full Business Value of Virtualization with a Scalable Software-Based Virtualization Solution” authored by International Data Corporation (IDC), the premier global provider of market intelligence. The paper details storage efficiency increases due to heterogeneous storage virtualization. It highlights key IDC recommendations for improving utilization and productivity by consolidating assets, automating provisioning, leveraging storage tiers and combining server and storage virtualization to significantly reduce capital and operational expenses associated with facilities and infrastructure.

The paper, combined with DataCore’s growing base of customer testimonials, highlights how the company’s virtualization software solutions overcome budget, performance and business continuity challenges impacting the transition and broader adoption of virtualization. In particular it uncovers how to make “better use of ‘in place’ storage assets while also ensuring that IT organizations can fully achieve a return on their investments.”

Compelling Economic Benefits

“A key business value of DataCore’s SANsymphony-V technology is its device-independent approach: the software removes hardware dependencies that can limit customers’ purchasing power and flexibility by locking them into specific vendor technologies and price points,” said Peter Dobler, assistant vice president, Corporate MIS Dept., Northeast Health. “DataCore brings a rich set of features, including auto-tiering and thin provisioning which can significantly impact cost savings. These features work across many flavors and brands; therefore, they can spread the benefits across the entire infrastructure and not just to a single device. They extend the useful life of past storage investments, and provide more competitive choices when shopping for new equipment resulting in greater ROI, lower total cost of ownership and reduced business risk.”

As noted in the white paper:
  • The DataCore SANsymphony-V solution allows organizations to leverage the amazing processing power of today's multicore x86 server platforms to virtualize installed storage systems while also tuning them to meet specific needs for performance, capacity, availability and cost-effectiveness.

  • The DataCore productivity and cost-saving software solutions:
    • Improve utilization of existing storage assets (in many cases doubling useful capacity), thereby delaying the need to buy new storage systems;
    • Enable more modular and seamless storage expansion, thereby reducing the need to pre-buy capacity that may not be needed for several years; 
    • Optimize and automate storage provisioning, tiering storage resources for optimal performance and capacity, and managing data protection and auto-recovery across installed heterogeneous storage systems, thereby reducing management complexity; and 
    • Enable seamless cross-system data movement, enabling easy and intelligent migration or replication of data between storage classes.
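The utilization gains in the first bullet are largely a thin-provisioning story. As a toy sketch – hypothetical, not DataCore code – the host sees a large virtual volume, but physical space is consumed only for blocks actually written, which is how typical usage rates can climb so dramatically:

```python
# Illustrative sketch (hypothetical): a thin-provisioned volume
# advertises a large capacity but consumes physical space only for
# blocks that have actually been written.

class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # capacity the host sees
        self.written = {}                     # block -> data actually stored

    def write(self, block, data):
        if not 0 <= block < self.virtual_blocks:
            raise IndexError("beyond advertised capacity")
        self.written[block] = data            # commit space only on write

    def physical_usage(self):
        return len(self.written)              # only written blocks cost space

vol = ThinVolume(virtual_blocks=1_000_000)  # host sees a huge volume
vol.write(0, b"x")
vol.write(7, b"y")
assert vol.physical_usage() == 2            # but only 2 blocks are backed
```

The gap between advertised and committed capacity is what lets administrators stop pre-buying disk years ahead of need.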

“We help organizations virtualize in half the time and half the expense without wasting either on the need to replace or write off their storage investments,” said Linda Haury, vice president of marketing at DataCore Software. “DataCore customers routinely report 200 percent or more gains in performance, a doubling in administrative productivity, no downtime due to storage failures, and cost savings of 60 percent or more. This is accomplished through greater ‘open market’ purchasing power, better use of storage via auto-tiering and from greatly improving efficiencies of storage capacity utilization and provisioning from 30 percent typical usage rates to as much as 90 percent. Bottom line: DataCore can remove inefficiencies and fully maximize your IT investments.”

A few recent examples of customer gains realized through DataCore include:

  • AcXess: This cloud services provider enables on-demand sales demos, proof-of-concept labs and training sessions in virtualized environments for thousands of employees, partners and customers worldwide. Through the combined use of DataCore and Microsoft Hyper-V, the organization has doubled its number of client users year-over-year and grown its business 300 percent. AcXess also reports a number of economic benefits from the solution, including a savings of $5 million in hardware costs.
  • Hemer Pulmonary Clinic: A Center for Pneumology and Thoracic Surgery, the German clinic implemented DataCore SANsymphony-V and realized a $425,000 cost savings. More than this, it has seen an increase in performance and greater efficiency in supporting over 50 VMware virtualized database and application servers all while adding a new level of high availability to its hospital operations.
  • ISCO Industries: This worldwide manufacturing and piping distributor has saved at least $100,000 by re-purposing existing equipment and prior investments using DataCore and VMware. For the first time, ISCO’s UNIX system, regarded as its “bread and butter,” is fully redundant and synchronized in real-time to a disaster recovery site. In the past, when ISCO lost data, the main accounting and sales system took 16-24 hours to recover – a process which now takes only minutes.
  • Scottsdale Community College: Based on technology from DataCore and Citrix, the College created an end-to-end virtualization solution that now provides access to more than 220 applications, serving 11,000 students and 1,000 employees. It is now saving $250,000 annually that would have been spent on hardware, accompanied by a marked improvement in storage performance.

To read the full IDC white paper sponsored by DataCore, please visit “Achieving the Full Business Value of Virtualization with a Scalable Software-Based Virtualization Solution.”

Monday, 8 August 2011

Virtual-Strategy Magazine: Q&A with Augie Gonzalez of DataCore Software

VSM: What are the biggest surprises organizations today are encountering in their server and desktop virtualization projects?

AG: From where I’m standing, lack of attention to storage-related matters generates the most turmoil and angst. You see this especially as customers struggle to migrate their line of business applications to virtualized environments. The storage provisioning, maintenance, and tuning practices that they mastered in segregated server configurations largely break down. This collapse ripples across to business continuity procedures and data protection techniques, upsetting even their hardware refresh and purchasing rhythm.

This is understandably so. Versatile virtual machines and virtual desktops lure concentration away from the mundane, seemingly stiff realities of physical storage devices. But today, we can harness a close relative of server virtualization, storage virtualization software, for an operationally and economically attractive solution – if we plan ahead.

VSM: What are the consequences of neglecting storage-related problems?

AG: There’s no turning your back on these issues. Unanticipated storage costs, availability concerns, and performance bottlenecks are the most critical factors bringing server consolidation and desktop virtualization projects to a standstill.

In fact, in DataCore’s recent survey, “The State of Virtualization”, we found that a majority of medium- and large-enterprise IT organizations mistakenly overlook storage when implementing a virtualized operating environment. The data revealed that nearly half of the respondents (43 percent) had not anticipated the impact that storage would have on their server and desktop virtualization costs or had not started a virtualization project because the storage-related costs “seem too high.”

This has moved the storage issue – its high cost, inadequate performance, inflexibility, and vendor lock-in – to the front burner in virtualization discussions.

VSM: How does the rise of “Big Data” compound the concern for data center managers? How can they better deal with this issue?

AG: Whereas server and desktop virtualization help consolidate and concentrate storage in common pools, Big Data has the opposite effect. It's widespread – “a lot here, a lot more there” – and there are too many places to keep track of. Add to that location problem the different brands and generations of storage devices that house the data, and you can see the predicament.

Some pundits suggest standardizing on specific hardware storage blocks to retain some sanity. But they ignore the insanely fast rate at which hardware becomes obsolete – the very thing that leads to healthy storage diversity in the first place – not to mention the outrageous cost of a forklift upgrade of everything already in use.

My advice is: don’t even try to solve the problem by confining yourself to one piece of gear; that approach doesn’t stand a chance. Instead, steal a play from the desktop virtualization handbook. Accept and encourage equipment diversity while taking measures to manage it uniformly. In the world of disks, you do this by layering a common control plane across central and scattered storage assets using storage virtualization software.

The software lets you take advantage of the many nuances that differentiate one disk array from another, as well as the extra safeguard that separate sites afford you, without burying you in minutiae.

VSM: What additional benefits does device-independent storage virtualization bring to virtualized IT environments and how does it enable those who have approached it properly?

AG: There’s just too much to cover here. Think about how hypervisors liberate you from the constraints of physical server cabinets and you can quickly grasp the beneficial operational and financial impact of storage virtualization. The storage enclosures no longer define you, nor limit your choice or mobility. When one box or one site is out of service, another one seamlessly takes over for it. Achieving cost-effective, highly available, fault-tolerant disks that traverse hardware boundaries would be at the top of the list.

With regard to economics, enhanced capital asset utilization, increased efficiency, and lower operational costs always get the most praise from DataCore customers. The benefits touch nearly every part of their IT infrastructure. As a result, organizations rapidly expand the scope of their storage virtualization program following the initial rollout to encompass conventional physical servers as well.

VSM: Given the urgency, how does DataCore expedite the successful transition to a fully virtualized storage infrastructure?

AG: First and foremost, DataCore leverages storage and server assets already on the data center floor to get you going right away. There is no waiting to “rip and replace” equipment before you get started. It’s like moving right into the house remodeling project without going through the arduous and dusty demolition phase. None of that uneasy feeling that you may be way over your head.

Instead, the conversion occurs in self-paced stages, with small incremental investments, rather than large upfront capital expenditures. I’d say it’s a very pragmatic cutover matched to your priorities and windows of opportunity. Really, nobody wants to take a big leap and you don’t have to with DataCore.

Wednesday, 3 August 2011

Enterprise Systems Journal: DataCore - The Switzerland of Storage Virtualization Solutions Providers?

Storage managers are often pressured to buy storage from a single vendor. That, in turn, leads to being pinned down to a particular vendor’s virtualization management solution.

I hate having my options limited. I want to pick best-of-breed hardware and solutions. Mix-and-match is my motto. When it comes to storage virtualization software, DataCore seems to share my values. Its new SANsymphony-V dynamically relocates workloads across pools of any type of storage equipment -- including SSDs -- and from any vendor. Because it sits high enough up on the interface ladder, DataCore’s director of product marketing Augie Gonzalez told me, any new storage device you add can work instantly with SANsymphony -- no updates needed.

“We apply the same device-independent approach to auto-tiering as we do with all our high-value services, including thin provisioning, caching, synchronous mirroring, asynchronous replication, snapshots and CDP. Let’s just say it arms you with a lot more bargaining muscle when the next disk hardware purchase comes around.”

DataCore’s President and CEO, George Teixeira, told me recently that his company’s product neutrality is what gives IT the greatest flexibility to move seldom-used sections of files to a slower tier of storage (read: cheaper disk) and most-in-demand or most-important sections to more expensive (read: faster) storage devices. Although the product’s chief benefit is that it handles all the messy details in the background automatically, there are rules (“policies”) you can define to override its decisions (for example, you can exclude some workloads from auto-tiering). Furthermore, if you no longer have enough capacity in a tier to meet your requirements, it will tell you so.

The approach has two advantages -- both economic -- and both explained succinctly in their press release:

“This expanded choice gives customers the opportunity to shop for the best value at each tier from competing sources without having to discard what they purchased last year.”

Keeping legacy equipment alive is critical -- so is shopping for the best deal on new equipment.

Hardware-based approaches restrict tiers “to premium-priced trays within a single storage enclosure or frame,” DataCore says. Its infrastructure-wide software “spans multiple storage systems from potentially different suppliers.”

Flexibility is paramount. Tiers can be made up of high-capacity but lower-priced SATA drives from Company A (or Company A and B and C) or consist of multiple SAS midrange disk systems from the same or different vendors. You can mix and match capacity, technology, and manufacturers in the same tier -- it all appears as a pool from your control panel. SSDs, which cost up to 10 times more than conventional drives, are likewise supported in a tier.

The company says that more than 50 percent of stored data turns to “inactive” status within just 60 days of its creation. If you’re using high-priced storage for it, you’re wasting precious IT budget dollars.

In its press release, the company says its customers “have reported up to 60 percent cost savings with SANsymphony-V alone, and now with the ability to automate and dynamically optimize tiered storage capacity, incremental savings of 20 percent or more are now possible. The final result is an extremely cost-effective, self-tuning system.” Not to mention one that doesn’t pin you down.