Thursday, 29 December 2011

Premier Auto Insurance Company and International Law Firm Turn to DataCore Software

Direct Line, a leading Car Insurance Company, achieves Business Continuity & Disaster Recovery

Achieved with DataCore's SANsymphony-V Storage Hypervisor.

http://www.it-director.com/technology/storage/news_release.php?rel=28889

DataCore Software announced that premier German insurance company Direct Line Versicherung AG has implemented its business continuity and disaster recovery architecture on a storage infrastructure built on the DataCore SANsymphony™-V storage hypervisor. The high-speed, highly-available synchronous mirroring functionality of the DataCore solution enables Direct Line to operate a backup and disaster recovery data centre located 40 kilometres away in order to minimise downtime and to ensure speedy recovery in case of a failure at the primary data centre. The combination of a storage hypervisor with server hypervisors provides a flexible, high-performance and fail-safe IT infrastructure that virtualises storage hardware with SANsymphony-V and server hardware with VMware vSphere.
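Synchronous mirroring, in schematic form, means a write is not acknowledged to the application until both sites hold the data, which is why the remote copy is already current at failover time. A toy Python sketch of that contract (our own illustration, not DataCore's code; the class and method names are invented):

```python
class SyncMirror:
    """Toy synchronous mirror: a write succeeds only after BOTH the
    primary and the remote copy have stored it."""

    def __init__(self, primary, remote):
        self.primary = primary   # dict-like block store at the main site
        self.remote = remote     # dict-like block store at the DR site

    def write(self, lba, data):
        self.primary[lba] = data
        self.remote[lba] = data  # in real life this blocks until the
                                 # remote site confirms receipt
        return "ack"             # the application sees success only now

    def read_after_failover(self, lba):
        # If the primary site is lost, the remote copy is already current,
        # so recovery needs no replay or catch-up step.
        return self.remote.get(lba)
```

The key property is that there is no window in which the application believes a write is safe while only one site has it.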

“DataCore SANsymphony-V provides Direct Line with a cost-effective and efficient business continuity solution supporting high availability and disaster recovery across multiple sites, without neglecting performance or data protection. The DataCore solution is a strategic component in our daily business,” says Heiko Teichmann, managing director at IT service provider Teserco and project manager on behalf of Direct Line Versicherung.

...Thanks to the hardware independence and efficient caching algorithms of the DataCore solution, Direct Line can not only continue to use its existing, cost-effective HP P2000 disk subsystems, but also remove the functional and performance limitations that would otherwise have ruled out their future use. Around 75 terabytes of data are now managed at the main data centre in Teltow and at the remote data centre in Berlin, which meets the requirements of the highest data protection category (Tier 4).

Stikeman Elliott LLP implements DataCore Software’s SANsymphony-V Storage Hypervisor

http://www.law.com/jsp/lawtechnologynews/PubArticleLTN.jsp?id=1202535402580&Stikeman_Elliot_Adopts_DataCores_SANsymphonyV&slreturn=1

DataCore Software announced that Stikeman Elliott LLP, one of Canada’s leading business law firms, with offices in Montreal, Toronto, Ottawa, Calgary and Vancouver as well as London, New York and Sydney, has deployed DataCore’s SANsymphony-V storage hypervisor at its Montreal and Toronto hubs – eliminating the need for traditional tape backups and providing much greater business continuity through high availability (HA) and improved disaster recovery (DR).

Stikeman Elliott’s IT team has also made great use of SANsymphony-V’s automated-tiering capability – including utilizing the cloud as yet another data storage tier.

One of the things that Marco Magini, network and systems administrator for Montreal-based Stikeman Elliott LLP, worries about most where storage is concerned is downtime. It may come as a shock at one of the largest corporate law firms in the world, but the cliché “time is money” is no joke, said Magini.

“Although we started out looking for a solution to a backup problem, we ended up with far more,” says Magini. “We started slowly, but as we became more familiar with the intelligence and stability of the DataCore storage hypervisor, we put more and more mission-critical systems on top of it and now have almost all of our systems behind it. We have every confidence that SANsymphony-V can handle anything we give it.”

Saturday, 24 December 2011

DataCore Software Celebrates Holiday Season By Offering Free Storage Hypervisor To Microsoft Certified And System Center Professionals

The free NFR license keys for SANsymphony-V are available for the new year, and the offer expires on January 31, 2012. The licenses may be used for non-production purposes only, and the software can be installed on both physical servers and Hyper-V virtual machines as a virtual storage appliance.

To receive a free license key, please sign up at: http://pages.datacore.com/nfr-for-experts.html. Proof of certification as a Microsoft Certified Architect (MCA), Microsoft Certified Master (MCM), Microsoft Certified IT Professional (MCITP), Microsoft Certified Professional Developer (MCPD) or Microsoft Certified Technology Specialist (MCTS) is required.

DataCore Software offers free Storage Hypervisor

http://www.it-director.com/technology/storage/news_release.php?rel=28967&ref=fd_ita_meta

DataCore Software announced that, in the spirit of the holiday season, it will offer free license keys for its SANsymphony™-V storage hypervisor to Microsoft Certified and System Center professionals. The not-for-resale (NFR) license keys, which may be used for non-production purposes such as course development, proofs of concept, training, lab testing and demonstrations, are intended to support virtualization consultants, instructors and architects involved in managing and optimizing storage within private cloud, virtual server and VDI deployments. DataCore is making it easy for these certified professionals to benefit directly and see for themselves how this innovative technology – the storage hypervisor – can redefine storage management and efficiency.

To receive a free license key, please sign up at: http://pages.datacore.com/nfr-for-experts.html.

Friday, 23 December 2011

Storage Strategy Magazine 2012 Predictions: DataCore Software

http://www.storage-strategy.com/2011/12/15/2012-prediction-datacore-software

...The time has come for the storage crisis to be resolved, the last bastion of hardware dependency removed, and the final piece of the virtualization puzzle to fall into place.

It’s also time for all to become familiar with a component fast gaining traction and proving itself in the field: the storage hypervisor. This technology is unique in its ability to provide an architecture that manages, optimizes and spans all the different price points and performance levels of storage. The storage hypervisor allows hardware interchangeability. It provides important advanced features such as automated tiering, which relocates disk blocks among pools of different storage devices - even in the cloud - keeping demanding workloads operating at peak speeds. In this way, applications requiring speed and business-critical data protection can get what they need, while less critical, infrequently accessed data blocks gravitate towards lower-cost disks or are transparently pushed to the cloud for “pay as you go” storage.

ESG’s Peters states in the Market Report: “The concept of a storage hypervisor is not just semantics. It is not just another way to market something that already exists or to ride the wave of a currently trendy IT term.” He then goes on to make his main point about the next step forward: “Organizations have now experienced a good taste of the benefits of server virtualization with its hypervisor-based architecture and, in many cases, the results have been truly impressive: dramatic savings in both CAPEX and OPEX, vastly improved flexibility and mobility, faster provisioning of resources and ultimately of services delivered to the business, and advances in data protection.

“The storage hypervisor is a natural next step and it can provide a similar leap forward.”

2012: Virtual Storage Gets Some Love

Mark Peters, senior analyst with Enterprise Strategy Group (ESG), recently authored a Market Report which I believe addresses the industry focus for the year ahead. In The Relevance and Value of a “Storage Hypervisor,” Peters notes: “…buying and deploying servers is a pretty easy process, while buying and deploying storage is not. It’s a mismatch of virtual capabilities on the server side and primarily physical capabilities on the storage side. Storage can be a ball and chain keeping IT shops in the 20th century instead of accommodating the 21st century.”

Tuesday, 13 December 2011

Storage Hypervisors Boost Performance without Breaking the Budget

One of the magical things about virtualization is that it’s really a kind of invisibility cloak. Each virtualization layer hides the details of those beneath it. The result is much more efficient access to lower level resources. Server virtualization has demonstrated this powerfully. Applications don’t need to know about CPU, memory, and other server details to enjoy access to the resources they need.

Unfortunately, this invisibility tends to get a bit patchy when you move down into the storage infrastructure underneath all those virtualized servers, especially when considering performance management. In theory, storage virtualization ought to be able to hide the details of media, protocols, and paths involved in managing the performance of a virtualized storage infrastructure. In reality the machinery still tends to clank away in plain sight.

The problem is not storage virtualization per se, which can boost storage performance in a number of ways. The problem is a balkanized storage infrastructure, where virtualization is supplied by hardware controllers associated with each “chunk” of storage (e.g., array). This means that the top storage virtualization layer is human: the hard-pressed IT personnel who have to make it all work together.

Many IT departments accept the devil’s bargain of vendor lock-in to try to avoid this. But even if you commit your storage fortunes to a single vendor, the pace of innovation guarantees the presence of end-of-life devices that don’t support the latest performance management features. And the expense of this approach puts it beyond the reach of most companies, who can’t afford a forklift upgrade to a single-vendor storage infrastructure and have to deal with the real-world mix of storage devices that result from keeping up with innovation and competitive pressures.

That’s why many companies are turning to storage hypervisors, which, like server hypervisors, are not tied to a particular vendor’s hardware. A storage hypervisor like DataCore’s SANsymphony-V throws the invisibility cloak over the details of all of your storage assets, from the latest high-performance SAN to SATA disks orphaned by the consolidation of a virtualization initiative. Instead of trying to match a bunch of disparate storage devices to the needs of different applications, you can combine devices with similar performance into easily provisioned and managed virtual storage pools that hide all the unnecessary details. And, since you’re not tied to a single vendor, you can look for the best deals in storage, and keep using old storage longer.

SANsymphony-V helps you boost storage performance in three ways: through caching, tiering, and path management.

Caching. SANsymphony-V can use up to a terabyte of RAM on the server that hosts it as a cache for all the storage assets it virtualizes. Advanced write-coalescing and read pre-fetching algorithms deliver significantly faster IO response: up to 10 times faster than the average 200-300 microsecond time delivered by typical high-end cached storage arrays, and orders of magnitude faster than the 6000-8000 microseconds it takes to read and write to the physical disks themselves.

SANsymphony-V also uses its cache to compensate for the widely different traffic levels and peak loads found in virtualized server environments. It smooths out traffic surges and better balances workloads so that applications and users can work more efficiently. In general, you’ll see a 2X or better improvement in the performance of the underlying storage from a storage hypervisor such as DataCore SANsymphony-V.
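To make the caching ideas concrete, here is a toy Python sketch (our own illustration, not DataCore's implementation; the class name, block addressing and capacity figures are invented). Repeated writes to the same block are coalesced in RAM so only the final version reaches disk, and a read miss pre-fetches a few sequential neighbours:

```python
from collections import OrderedDict

class BlockCache:
    """Toy RAM block cache: coalesces writes, pre-fetches sequential reads."""

    def __init__(self, backend, capacity=1024, prefetch=4):
        self.backend = backend          # dict-like physical store (lba -> data)
        self.capacity = capacity        # max blocks held in the cache
        self.prefetch = prefetch        # blocks to read ahead on a miss
        self.cache = OrderedDict()      # LRU order: oldest entry first
        self.dirty = {}                 # pending (coalesced) writes

    def write(self, lba, data):
        # Repeated writes to the same block overwrite the pending copy,
        # so only the latest version is ever flushed to the physical disk.
        self.dirty[lba] = data
        self._put(lba, data)

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)         # cache hit, refresh LRU
            return self.cache[lba]
        # Miss: fetch the block plus a few sequential neighbours.
        for i in range(lba, lba + self.prefetch):
            if i in self.backend:
                self._put(i, self.backend[i])
        return self.cache.get(lba)

    def flush(self):
        # One pass writes every coalesced block back to the backend.
        self.backend.update(self.dirty)
        self.dirty.clear()

    def _put(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least-recently used
```

Three writes to the same block cost one physical write at flush time, and a sequential read warms the cache for the blocks likely to be requested next - the two effects behind the response-time gains described above.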

Tiering. You can also improve performance with data tiering. SANsymphony-V discovers all your storage assets, manages them as a common pool of storage and continually monitors their performance. The auto-tiering technology migrates the most frequently-used data—which generally needs higher performance—onto the fastest devices. Likewise, less frequently-used data typically gets demoted to higher capacity but lower-performance devices.

Auto-tiering uses tiering profiles that dictate both the initial allocation and subsequent migration dynamics. A user can go with the standard set of default profiles or can create custom profiles for specific application access patterns. However you do it, you get the performance and capacity utilization benefits of tiering from all your devices, regardless of manufacturer.

As new generations of devices appear, such as solid state disks (SSDs), flash memory and very large capacity disks, these faster or larger devices can simply be added to the available pool of storage and assigned a tier. And as devices age, they can be reset to a lower tier, often extending their useful life. Many users are interested in deploying SSD technology to gain performance; however, given its high cost and limited write life-cycle, there is a clear need for auto-tiering and caching architectures to maximize its efficiency. DataCore’s storage hypervisor can absorb a good deal of the write traffic, extending the useful life of SSDs, and with auto-tiering only the data that actually needs the high-speed SSD tier is directed there. With the storage hypervisor in place, the system self-tunes and optimizes the use of all the storage devices. Customers have reported upwards of 20% savings from device-independent tiering alone, and in combination with the other benefits of a storage hypervisor, savings of 60% or more are being achieved.
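The core of frequency-based tiering can be illustrated with a short Python sketch (a deliberate simplification of ours, not DataCore's algorithm; real auto-tiering also weighs recency, tiering profiles and migration cost, not just raw hit counts):

```python
from collections import Counter

class AutoTier:
    """Toy auto-tiering: the hottest blocks live on the fast (SSD) tier,
    everything else is demoted to the capacity tier."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # blocks the fast tier can hold
        self.hits = Counter()                # access frequency per block
        self.fast = set()                    # blocks currently on fast tier

    def access(self, block):
        self.hits[block] += 1                # record every I/O

    def rebalance(self):
        # Promote the most-accessed blocks up to the fast tier's capacity;
        # blocks that fell out of the hot set are demoted.
        hottest = {b for b, _ in self.hits.most_common(self.fast_capacity)}
        promoted = hottest - self.fast
        demoted = self.fast - hottest
        self.fast = hottest
        return promoted, demoted
```

Adding a new device class is just a matter of raising (or lowering) `fast_capacity` and letting the next rebalance redistribute blocks - which mirrors the "add it to the pool and assign a tier" behaviour described above.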

Path management. Finally, SANsymphony-V also greatly reduces the complexity of path management. The software auto-discovers the connections between storage devices and the server(s) it’s running on, and then monitors queue depth to detect congestion and route I/O in a balanced way across all possible routes to the storage in a given virtual pool. There’s a lot of reporting and monitoring data available, but for the most part, once set up, you can just let it run itself and get a level of performance across disparate devices from different vendors that would otherwise take a lot of time and knowledge to get right.
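The queue-depth heuristic is simple enough to sketch in a few lines of Python (again an illustration of the idea, not DataCore's routing logic; path names are invented):

```python
class PathBalancer:
    """Toy multipath router: send each I/O down the least-busy path."""

    def __init__(self, paths):
        # Outstanding I/O count (queue depth) per path.
        self.depth = {p: 0 for p in paths}

    def submit(self, io):
        # The shallowest queue wins, so congestion on one path
        # automatically steers traffic to the others.
        path = min(self.depth, key=self.depth.get)
        self.depth[path] += 1
        return path

    def complete(self, path):
        self.depth[path] -= 1               # I/O finished, queue drains
```

Because the decision is made per I/O from live queue depths, a slow or congested path simply stops being chosen until it drains, with no manual re-cabling or re-zoning.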

If you would like an in-depth look at how a storage hypervisor can boost storage performance, be sure to check out the third of Jon Toigo’s Storage Virtualization for Rock Stars white papers: Storage in the Fast Lane—Achieving “Off-the-Charts” Performance Management.

Next time I’ll look at how a storage hypervisor boosts storage data security management.

Tuesday, 6 December 2011

New Market Report on The Relevance and Value of a “Storage Hypervisor:” Virtualized Management for More Than Just Servers

http://www.enterprisestrategygroup.com/2011/10/the-relevance-and-value-of-a-%e2%80%9cstorage-hypervisor%e2%80%9d-virtualized-management-for-more-than-just-servers/

A few vendors have already begun thinking in these terms; there are the industry giants - IBM with SAN Volume Controller, EMC with VPLEX, HDS with its Universal Storage Platform-V - as well as some smaller software-only players such as DataCore. As the capabilities of these various approaches not only expand but become better known and understood, the end-user opportunity for improving efficiency and simplifying operations is simply monumental...

Saturday, 19 November 2011

Solve HDD Shortages with DataCore SANsymphony-V Storage Hypervisor Software

Amid the catastrophic flooding in Thailand lie hard drive manufacturing plants that supply about 40% of the world’s hard drives, and analysts expect an impact on supply well into 2012. Prices are rising, supply is falling, and you are stuck buying more storage in the midst of the crisis because you can’t afford to run out of space.

But what if there was a different way?


Save Money by Saving Space

The DataCore storage hypervisor enables you to maximize utilization and minimize capacity consumption. It thinly provisions capacity from the virtual storage pool to hosts only as needed, so you no longer strand capacity by pre-allocating space to applications that may never use it.

New disk shelves and disk arrays can be added to the virtual storage pool without disrupting applications or users, even in the middle of peak workloads. You won’t need to worry about backwards compatibility with your other storage devices. DataCore overcomes those differences, allowing you to mix and match different models and even different brands within the same virtual pool.

This unique capability allows you to shop around for the best value and defer purchases until you really need the capacity.

Interested in reducing costs by getting the most utilization out of your storage? Contact us now.

Tuesday, 15 November 2011

DataCore Storage Virtualization Software Lowers Cost of Ownership and Accelerates Performance at HARTING Technology Group

http://www.dabcc.com/channel.aspx?id=208

DataCore Software, the industry’s premier provider of storage virtualization software, announced today that HARTING Technology Group has deployed its SANsymphony storage virtualization software to realize greater cost efficiency, high-performance and a high-availability enterprise storage environment. A complete case study on the storage challenges that HARTING overcame with DataCore Software is available here: http://www.datacore.com/Testimonials/Harting-Technology-Group.aspx.

HARTING runs on DataCore’s SANsymphony and hardware from Hitachi. The storage solution supports the delivery of business critical applications, such as SAP, MS Exchange, as well as CAD and product lifecycle management software.

HARTING Technology Group is a large global manufacturing and services company specializing in electrical, electronic and optical connection, transmission and networking. It produces technology products and solutions for industries including high-speed rail, automotive and renewables such as wind. With over 3,300 employees in 36 countries relying on access to the company’s data at all hours of the day and night, a high-performance storage environment is paramount.

The company chose a joint solution proposed by solution provider ISO Datentechnik, which included Hitachi storage hardware and DataCore’s SANsymphony storage virtualization software. By moving from a less flexible legacy hardware infrastructure to a cost-effective midrange storage system and managing its entire storage environment with a DataCore-powered virtualized SAN, HARTING was able to overcome three key challenges:

  • Reduce the overall costs of storage and provide greater options and flexibility for adding storage systems in the future.
  • Improve reliability through the addition of DataCore’s high availability for critical business systems.
  • Substantially increase enterprise application performance.

A Compelling Combination: High Performance and Low Cost
"Our expectations of the combination of HDS hardware and DataCore software have been exceeded,” said Rudolf Laxa, operations and data center team leader at HARTING Technology Group. “The new HDS midrange systems and the DataCore virtual storage layer have allowed us to lower costs and achieve a significant increase in fail-safety and performance. The excellent interaction between DataCore software and VMware is another reason why we are more than satisfied with the current solution."

According to Laxa, there was initial hesitation to move business-critical SAP applications to the virtualized storage environment – as it represented a significant break from HARTING’s past practices. However, examples of success with similar moves with other DataCore customers and the opportunity to significantly enhance current capabilities ultimately prevailed. Laxa continued, "The benefits of central administration finally provided the impetus for implementation -- a decision we have not had a reason to regret so far.”

In particular, HARTING credits DataCore’s SANsymphony software for an unprecedented level of performance and business agility, especially when combined with the company’s existing VMware-based server virtualization deployment throughout its data center.

"The technical capabilities of DataCore virtualized storage appealed to us almost immediately; it creates high availability, gives us independence from the hardware and makes flexible migration scenarios possible,” said Laxa. “The software has proven to be a meaningful extension of our VMware environment and guarantees the highest levels of availability we require from our storage solution."

Saturday, 12 November 2011

Arizona State University Selects DataCore to Manage Data Storage Growth

Ensures higher performance and availability

http://www.datamation.com/storage/managing-data-storage-growth-buyers-guide-1.html

Vincent Boragina, manager of system administration, W. P. Carey School of Business IT, Arizona State University, aimed to reach 100% server virtualization. Performance from IT assets was imperative. Advances in server virtualization over the years, alongside desktop virtualization, led the school to tackle its high-end storage I/O needs for SQL databases and file servers (initially kept off the server virtualization layer while the products matured). But when they started to virtualize these platforms, they faced a higher degree of latency. The need for I/O had grown.

Boragina explains, “The issues with virtualization rest not so much with storage capacity as with the speed and low latency required to get the data on and off the disk. What is key are the controllers and the fiber connectivity, etc., that run the disk, which impact the IOPS (Input/Output Operations Per Second) and the latency of that disk. This is where complexity rises, as it is harder to measure latency. Performance was my key criterion.”

The school implemented DataCore’s SANsymphony-V with XIO storage, where XIO was the disk subsystem and DataCore was the hypervisor for the storage and the storage I/O controllers. As a result, the school achieved a 50% reduction in latency and a 25-30% increase in overall I/O. With its redundancy and I/O requirements met, the school was able to virtualize any platform.

Importantly, one need not overhaul the existing storage stack to address issues like high performance, added George Teixeira, CEO at DataCore. DataCore’s SANsymphony-V storage hypervisor, for instance, uses existing storage assets to boost performance through adaptive caching; its auto-tiering enables optimal use of SSDs/flash, and its high availability supports business continuity. “This precludes the investment of purchasing additional IT assets and premature hardware obsolescence,” says Teixeira.

Business continuity was an added benefit for the school, as it came built into the DataCore solution. Another effect of the implementation: speedier backups thanks to faster I/O.

Thursday, 10 November 2011

Storage Hypervisor Delivers Just-in-Time Capacity Management

Just-in-time (JIT) production practices, which view inventory not as an asset but a cost, have accelerated the delivery and reduced the cost of products in a wide range of industries. But perhaps the biggest benefit to the companies that adopted them was the exposure of widespread manufacturing inefficiencies that were holding them back. Without the cushion of a large inventory, every little mechanical or personnel hiccup in the assembly line had an immediate effect on output.

Virtualization technology is playing a similar role for IT, and nowhere is this more visible than in storage. Server virtualization has been incredibly successful in reducing the processor “inventory” needed to provide agile response to business demands for more and better application performance. Average processor utilization often zooms from the 10% range to 60-70% in successful implementations. But this success exposed serious storage capacity management inefficiencies.

As Jon Toigo of the Data Management Institute points out in the first of his Storage Virtualization for Rock Stars white papers, Hitting the Perfect Chord in Storage Efficiency, between 33 and 70 cents of every IT dollar expended goes for storage, and the TCO of storage is estimated to be as much as 5 to 8 times the cost of acquisition on an annualized basis. However, as illustrated in that paper, on average only 30% of that expenditure is actually used for working data. This isn’t due to carelessness on the part of IT managers. They are doing the same sort of thing manufacturers did before JIT: in this case using large storage inventories to compensate for inefficiencies in storage capacity management that make it impossible to provision storage as fast as they can provision virtual servers.

This is a major factor driving the adoption of storage virtualization, which can abstract storage resources into a single virtual pool to make capacity management far more efficient. (It can do the same for performance management and data protection management, as well—I’ll look at them in future posts.) I say “can” because, given the diverse storage infrastructures that are the reality for most organizations, full exploitation of the benefits of storage virtualization requires the use of a storage hypervisor. This is a portable software program, running on interchangeable servers, that virtualizes all your disk storage resources—SAN, server direct-attached storage (DAS) and even those disks orphaned by the transition to server virtualization—not just the disks controlled by the firmware within a proprietary SAN or disk storage system.

With a storage hypervisor such as DataCore’s SANsymphony-V, the storage capacity management inefficiencies exposed by server virtualization are truly a thing of the past. Rather than laboriously matching up individual storage sources with applications, and likely over-provisioning them just to be sure of having enough, you can draw on a single virtual pool of storage for just the right amount. Thin provisioning permits allocating an amount of storage to an application or end user that is far larger than the actual physical storage behind it, and then provisioning real capacity only as needed based on actual usage patterns. Auto-tiering largely automates the task of matching the right storage resource to applications based on performance level needs.
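Thin provisioning is easy to picture in code. The toy Python class below (our illustration, not DataCore's implementation; the block granularity and names are invented) promises large virtual disks but consumes a physical block only on the first write to each address:

```python
class ThinPool:
    """Toy thin-provisioned pool: promise big virtual disks, allocate
    physical blocks only when data is actually written."""

    def __init__(self, physical_blocks):
        self.free = physical_blocks          # real blocks left in the pool
        self.volumes = {}                    # name -> {"size", "mapped"}

    def create(self, name, virtual_size):
        # Creation consumes no physical space at all.
        self.volumes[name] = {"size": virtual_size, "mapped": set()}

    def write(self, name, lba):
        vol = self.volumes[name]
        if lba >= vol["size"]:
            raise ValueError("write past end of virtual disk")
        if lba not in vol["mapped"]:         # first touch: allocate a block
            if self.free == 0:
                raise RuntimeError("pool exhausted - add disks to the pool")
            self.free -= 1
            vol["mapped"].add(lba)
        # rewrites of an already-mapped block cost nothing extra

    def allocated(self, name):
        return len(self.volumes[name]["mapped"])
```

Note that the pool can happily promise far more virtual capacity than it physically owns; real products pair this with monitoring so administrators add disks before `free` reaches zero.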

The result is true just-in-time storage: capacity when, where, and how it’s needed. And, because the capacity management capabilities reside in the storage hypervisor, not the underlying devices, they’re available for existing and future storage purchases, regardless of vendor. You can choose the storage brands and models needed to match your specific price/performance requirements, and when new features and capabilities are added to the hypervisor, they’re available to every storage device.

For more information on how a storage hypervisor can enable effective capacity management from an architectural, operational, and financial standpoint, check out Jon’s second Storage Virtualization for Rock Stars white paper: Capacity Management’s Sweet Notes – Dynamic Storage Pooling, Thin-Provisioning & More.

Next time I’ll look at how a storage hypervisor boosts storage performance management.

Friday, 4 November 2011

ESG reports 59% Have not Deployed Virtualization on Tier-1 Windows Apps

Respondents to a recent ESG survey indicated that increasing the use of virtualization was their number one IT priority over the last two years and will continue to be the top priority for the next 12-18 months. While server virtualization penetration continues to gain momentum, IT organizations still have numerous hurdles to overcome in order to deploy it more widely and move closer to a 100% virtualized data center. ESG found that 59% have yet to employ virtualization where it will provide the most benefit: their mission-critical tier-1 applications. These tier-1 application workloads include Microsoft Exchange 2010, Microsoft SQL Server 2008 R2, and SharePoint 2010. For IT organizations supporting large numbers of users, hesitation to implement virtualization stems from the perception that it adds performance overhead and unpredictable scalability and availability to the tier-1, multi-user, business-critical applications relied upon by the majority of their users.

http://www.enterpriseittools.com/sites/default/files/ESG%20-%20Hyper-V%20R2%20SP1%20Application%20Workload%20Performance%20-%20March%202011.pdf


DataCore STAR HA Solution Adds Resiliency and Performance to Microsoft Hyper-V Environments
http://www.it-analysis.com/technology/storage/news_release.php?rel=28058

The DataCore STAR HA (high availability) solution is primarily aimed at the large installed base of Microsoft servers running lines of business applications as well as Exchange, SQL Server and SharePoint, eager for better data protection and performance.

Many of these IT organisations realise they must move their data from internal server disks to a shared storage area network (SAN) to meet growth needs, improve uptime, and enhance productivity. However, some have concluded that this migration could add more risk, disruption, and cost than they can currently afford, so they seek a solution that minimizes these obstacles. They need a simple way to enhance the performance and resiliency of their application servers, while providing easy access and a transition path to the compelling advantages of a shared SAN.

The DataCore STAR HA solution is tailored for these IT organisations seeking the benefits of a SAN, while allowing their data to remain safely stored on their servers.

The new business continuity solution enhances the resiliency and performance of Windows server farms, enabling those systems to stay up longer, their applications to run faster, and recovery from unexpected situations to take less time. It is most appealing to customers wishing to keep their data distributed across their server disk drives while gaining many of the centralised and advanced services made possible by a SAN.

STAR Topology Leverages Central Server for Faster Recovery from Failures
Rather than migrate server data to an external SAN, the DataCore STAR HA software automatically mirrors the data drives on each Windows server to a central DataCore STAR server for safekeeping. In addition, the Windows host-resident software speeds up applications by caching repeated disk requests locally in high-speed server memory.

If an application server in the farm goes down, another server can resume its operations by retrieving a current copy of its data drive from the central DataCore STAR server. The DataCore STAR server also offloads replication requests across the farm. It can take centralised snapshots of the data drives and remotely replicate all critical data to a remote disaster recovery site. The solution has the added benefit of automatically redirecting requests to the central DataCore STAR server when a local server data drive is inaccessible.
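The STAR topology's two behaviours - mirror every write to the central server, and fall back to it when a local drive is lost - can be sketched schematically in Python (our own illustration of the description above, not DataCore's software; the names are invented):

```python
class StarMirror:
    """Toy STAR topology: writes land on the local drive and are
    mirrored to a central server; reads fall back to the mirror."""

    def __init__(self):
        self.central = {}                    # (host, lba) -> data

    def write(self, host, local_disk, lba, data):
        local_disk[lba] = data               # local copy for speed
        self.central[(host, lba)] = data     # mirrored copy for safekeeping

    def read(self, host, local_disk, lba):
        if local_disk is not None and lba in local_disk:
            return local_disk[lba]           # normal case: serve locally
        # Local drive failed or lost the block: redirect transparently
        # to the central STAR server.
        return self.central.get((host, lba))
```

Because the central server always holds a current copy keyed by host, any surviving server in the farm can resume a failed peer's workload simply by reading that peer's blocks from the STAR server.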

Thursday, 3 November 2011

DataCore Software Offers SAN Alternative for Windows Server Farms; Targets Microsoft Servers Running Exchange, SQL Server and SharePoint

http://www.thewhir.com/web-hosting-news/110111_DataCore_Software_Offers_SAN_Alternative_for_Windows_Server_Farms

DataCore Software introduces a new business continuity solution for Microsoft Windows servers running Exchange, SQL Server and SharePoint that need better data protection and performance.


The DataCore STAR high availability solution is primarily aimed at the large installed base of Microsoft servers running lines of business applications as well as Exchange, SQL Server and SharePoint, looking for better data protection and performance.

The DataCore STAR HA solution is designed for IT organizations that are seeking the benefits of a SAN, while allowing their data to remain safely stored on their servers.

DataCore also offers alternative high-availability, high-performance packages for architecting solutions using redundant two-node configurations and scale-out grid designs.

The new business continuity solution improves the resiliency and performance of Windows server farms, enabling those systems to stay up longer, their applications to run faster, and recovery from unexpected situations to take less time.

It is ideal for customers that want to keep their data distributed across their server disk drives, while accessing many of the centralized and advanced services made possible by a SAN.

Instead of migrating server data to an external SAN, the DataCore STAR HA software automatically mirrors the data drives on each Windows server to a central DataCore STAR server for safekeeping.

In addition, the Windows host-resident software speeds up applications by caching repeated disk requests locally in high-speed server memory.

If an application server in the farm goes down, another server can resume its operations by retrieving a current copy of its data drive from the central DataCore STAR server.

The DataCore STAR server also offloads replication requests across the farm. It can take centralized snapshots of the data drives and remotely replicate all critical data to a remote disaster recovery site, as well as automatically redirect requests to the central DataCore STAR server when a local server data drive is inaccessible.

Physical machines hosting standalone applications under Windows Server 2008 R2 as well as systems hosting multiple virtual machines under Microsoft Hyper-V can take advantage of the data protection and performance enhancement benefits.

The DataCore STAR HA software is compatible with Windows server applications including Exchange, SQL Server, SharePoint, and line of business applications.

SAN-Averse Customers Can Take Advantage of Business & Financial Benefits

http://vmblog.com/archive/2011/11/01/datacore-software-introduces-a-new-business-continuity-solution-for-windows-server-farms.aspx
“While many IT managers are eager to exploit the benefits of centralized and shared data that SANs deliver, they also often worry that migrating to a central storage system could prove to be risky and disruptive,” said Mark Peters, senior analyst, Enterprise Strategy Group. “The STAR HA software from DataCore provides a solution to this conundrum by combining the comfort and familiarity of distributed data with the attractive attributes of a central SAN in an efficient manner that is both unobtrusive and cost-effective.”

Friday, 21 October 2011

Storage Hypervisors and the Technology of Freedom: No Automation without Virtualization

Two political movements, the Tea Party and Occupy Wall Street (OWS), have been in the headlines lately. Although they come from very different corners of the political ring, they seem to share a perception that there’s a group of people, however defined, that have more than their fair share of power and influence.

They also have in common a debt to what the brilliant communications scholar Ithiel de Sola Pool called “the technologies of freedom.” When he wrote the seminal book by that name in 1983, that phrase referred to the printing press, radio, television, and the telephone, but he clearly foresaw the transformation of these technologies by the computer, a process he called “convergence.” Both the Tea Party and OWS (not to mention the Arab Spring and similar movements) would have been impossible without the new technologies of freedom that resulted from this convergence: email, blogs, Twitter, Facebook, YouTube, and the like, which enable those who can’t afford a printing press or a broadcast station to make their voices heard with a volume unlikely or impossible with earlier technologies.

What’s interesting to me is that the computing industry has its own “technology of freedom,” which has proved just as corrosive of vendor control as applications like Twitter have of political control. I’m talking about virtualization, a technique that pervades computing from top to bottom, forever abstracting function away from hardware into software. The abstraction furnished by communications protocols gave us the Internet, and helped blow up telecom monopolies in the process. The abstraction of processors gave us server and desktop virtualization, making computing power a commodity and undermining the power of hardware vendors.

Now storage virtualization is eroding the power and influence of storage vendors. The 99%—the IT professionals down in the trenches suffering with the complications of managing a balkanized storage infrastructure—are fed up. They’re ready to throw their growing collection of storage devices into the harbor—any harbor. They’ve seen the promise of storage virtualization: better capacity management, better performance management, and better data protection management. But they don’t see why they shouldn’t be able to get these benefits across all their storage assets. After all, hypervisors like VMware, Hyper-V, and Xen work with any server hardware.

That’s why the “big iron” vendors haven’t been able to keep virtualization safely locked up in their disk arrays. The genie is out of the bottle with the rise of hardware-independent storage hypervisors like DataCore’s SANsymphony-V. A storage hypervisor unifies all your storage assets—from SANs to NAS boxes to SCSI disks orphaned by server virtualization consolidation—into an easily managed, high-performance virtual storage pool. By automating many of the tasks involved in efficient storage management, a storage hypervisor frees IT for more strategic operations and transforms storage from a business cost into a business advantage.

I’ll be looking at each of the management advantages a storage hypervisor delivers in posts to come, and outlining the business benefits they deliver: risk reduction, improved productivity, and cost containment.

For a head start on these topics, and more, please plan to attend a complimentary webcast on October 27, 2011 titled “The Storage Hypervisor: Taking Storage to the Next Level.” This webcast starts at 2:00 PM Eastern Time. Get to know DataCore – and take your storage to the next level. Register today.

Wednesday, 19 October 2011

Jack Spencer Saves Big Bucks with DataCore SANsymphony-V; CIO of American Society of Health-System Pharmacists

http://virtualizationreview.com/blogs/the-hoard-facts/2011/10/jack-spencer-datacore-san-symphony-v.aspx

Spencer's IBM storage was failing, but DataCore came to the rescue and saved him some money in the process.

Jack Spencer is VP of operations and CIO of the non-profit American Society of Health-System Pharmacists. Last fall, his IBM FAStT 700 storage server, which had been in place for about five years and been upgraded numerous times, started to fail during the registration period for one of the organization's national meetings, which produce a large portion of its yearly operating budget.

"We had a lot of pain with IBM trying to get something done with that, and of course it turned into more of a sales call than any kind of a support thing, so it came to the point where we knew there had to be a better way," Spencer said.

Fortunately, he was able to stabilize the FAStT 700—which had a failing controller and several bad drives that were being masked by the controller's symptoms—so that it could complete the meeting's registration duties. He then immediately went to work crafting a new system, implementing a SANsymphony-V storage virtualization software system that used HP EVA disk arrays. DataCore refers to SANsymphony as a "storage hypervisor."

Even though his new storage configuration was up and running, Spencer didn't want to throw out the FAStT 700 because it was still usable, and DataCore told him he could put that plus some Dell MD series disk arrays behind SANsymphony-V.

"DataCore gave me a solution that allowed me to keep all those resources, and not get them out of the architecture, but still use them, maybe not in production, but in a test environment," he said. "So now we don't see the new system as an individual piece of equipment that has storage--we just see storage, and I'll tell you, the training curve for my LAN crew who manages that has decreased by 80 percent at least."

...ROI-wise, Spencer said he saved between $50,000 and $100,000 alone by being able to keep using his legacy storage systems. He claimed to have saved another $20,000 to $30,000 by slashing his training requirements.


Does he go along with the "storage hypervisor" characterization of SANsymphony-V?

"Oh absolutely. I can go out and look at whatever storage I want and just throw it in behind," he stated. "It's pretty straight-forward, pretty simple, and pretty elegant."

Tuesday, 18 October 2011

Stikeman Elliott Streamlined its Data with DataCore

ComputerWorld and IT World Canada: http://www.itworldcanada.com/news/stikeman-elliott-streamlined-its-data-with-datacore/144114

Using DataCore’s SANsymphony-V, corporate law firm Stikeman Elliott was able to re-purpose valuable storage devices and add new units into the chain without risking downtime.

One of the things that Marco Magini, network and systems administrator for Montreal-based Stikeman Elliott LLP, worries about most where storage is concerned is downtime. At one of the largest corporate law firms in the world, the cliché “time is money” is no joke, Magini said.

“In terms of business continuity or flow, here, time is money, so data has to be available 24/7. I know everywhere it’s the same, but here, downtime costs a lot. When you do the math for the lawyers’ fee and the legal assistant and everything, we need to keep the data up and running all the time. No disruption at all,” he said.


After a few band-aid fixes and constant shopping for storage upgrades, Magini was introduced to Ft. Lauderdale, Fla.-based DataCore Software by TH Consultants.

He’d looked at other products before, but the fact that DataCore was a software implementation with a real emphasis on interchangeability really sold him on it. He and the IT team found themselves asking, “why can’t we pull or stretch the actual hardware that we have (instead of constantly upgrading)? (That’s) why we love the DataCore product; it’s not manufacturer binded. You can throw any type of arrays or storage at it.”

George Teixeira, CEO of DataCore, said that this is one of the key principles his company is known for. “We started off pioneering a lot of the key things that are, today, taken for granted in the whole storage/virtualization space; things like thin provisioning and so forth. The difference is we’ve always done it in software and it’s software that allows hardware interchangeability.”

Teixeira said that DataCore is well positioned because storage isn’t something you can mess around with. You need to utilize whatever you have to keep budgets and downtime from spinning out of control. “As time goes on, people are using different kinds of storage depending on what they need and the cost for these things differ dramatically and the performance differs dramatically. One of the big advantages of DataCore is it’s software that works across all of these,” he said.

He also said it uses “auto-tiering” to intelligently move data between devices by cost and availability. “What (auto-tiering) does is allow you to have a mix of any kind of these purpose-built devices that you already have in your sight or add new ones and we will move the data to where it makes the most economic and pricepoint for your needs.”

Besides this unique take on interchangeability, Magini said his satisfaction in the transition to DataCore’s SANsymphony-V hypervisor came from the added benefits he hadn’t considered before implementation. “This is a little jewel. Not only does it answer our backup issue, but we can move forward with our disaster recovery and especially high availability.”

Friday, 14 October 2011

My Kingdom for a Storage Hypervisor!... The Latest Woes of Research in Motion (RIM)

The old saying “for want of a nail a kingdom was lost” comes to mind while watching the latest woes of Research in Motion (RIM) and their consumer-focused Blackberry Internet Services (BIS), which apparently went down twice this week. Consumers in EMEA and South America were particularly hard hit, and the outages spread to North America. Early reports indicate that the first outage, at least, may have originated in the company’s data center in Slough, England—network carriers are denying any responsibility.

These outages come less than a month after the company’s stock hit a five-year low, due to investor worries about competition from Apple and Android phones and other concerns. Since consumers aren’t subject to the same lock-in pressures as corporations who have adopted the Blackberry Enterprise Server, this outage could well prompt more consumers to switch, especially with the introduction of the iPhone 4s.

Obviously, until RIM comes clean on the cause of these outages, we can only speculate about how they could have been prevented. But if preliminary reports of them being server-related are true, our customers’ experience suggests that a storage hypervisor might have helped. Storage is the foundation of server operations, so high-availability storage is a must-have for utterly-reliable operation. A storage hypervisor such as DataCore SANsymphony-V can create an easily-provisioned virtual pool of storage that is replicated both synchronously, for immediate, no-interruption failover, and, if desired, asynchronously, for disaster recovery. No change is required to the application servers, making a storage hypervisor an easy and cost-effective way to prevent outages.

For instance, the Ports of Auckland, through which pass 13% of New Zealand’s GDP and almost 40% of its container trade, depends on SANsymphony-V to support cargo handling and logistics services on a 24x7, 365-day-a-year basis. As Lead Systems Engineer Craig Beetlestone notes, DataCore let the Ports of Auckland “design out the panic” so that while the port may run 24x7, IT doesn’t have to. “We can suffer a complete site failure and have the systems carry on running without any manual intervention, which is a huge benefit for us,” he says. “Our return to operation is literally only minutes.”

How many customers could RIM have retained if the recent outages had been only minutes? And, would nervous investors even have noticed? The costs of these outages will be nearly incalculable, but one thing seems sure: if a storage hypervisor could have prevented them, it would have paid for itself the moment it was turned on.

Tuesday, 11 October 2011

DataCore: Storage Virtualization for the Rest of Us

http://juku.it/en/articles/datacore-storage-virtualization-for-the-rest-of-us.html

I’ve been playing around with many storage platforms over the last ten years, but I admittedly never had first-hand experience with DataCore before now.

DataCore is a private company based in sunny Ft. Lauderdale, FL. It has a worldwide presence, 13 years in the competitive storage industry, and more than 6,000 customers.

Their sole product is called SANsymphony-V, a software-based storage virtualization solution that runs on physical or virtual Windows Server 2008 R2 machines. The product can virtualize whatever storage is connected to it (either direct-attached or connected via SAN) and then export it over iSCSI, FC and FCoE.

SANsymphony-V’s approach to HA is an interesting one: it maintains synchronous copies between nodes to guarantee high availability. With this approach you can separate the nodes by as much as 100 km and still access them as a single entity (a scenario that vaguely resembles NetApp’s MetroCluster)...

SANsymphony-V really struck my interest so I decided to give it a spin in our Juku lab.


During my tests I put SANsymphony-V under stress in two different scenarios: physical (with async replication to virtual) and fully virtual (on the vSphere 5 beta). I used both direct-attached storage (local disks on Dell servers) and SAN-attached storage (a trusty HDS AMS500 connected via FC to a Brocade fabric) as the backend, with VMware ESXi 5 servers as clients.

The tests were performed with IOmeter inside virtual machines, and performance was consistent across all of them. The aggressive caching that DataCore provides (up to 1TB per node can be used for caching) pushed IOPS up to 4x in the tests performed (comparing SANsymphony-V against the native backend storage). That metric is definitely interesting if your workload is cache friendly, especially because DataCore uses standard server RAM as the caching platform, which is usually cheaper than purpose-built cache modules for storage arrays.
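The kind of speedup the reviewer measured is plausible for cache-friendly workloads because repeated reads never touch the backend at all. A minimal sketch of the mechanism, assuming a simple LRU policy (all names invented; DataCore’s actual cache is far more sophisticated):

```python
import time
from collections import OrderedDict


class RamReadCache:
    """Sketch: an LRU read cache in server RAM in front of a slow block
    store, the basic mechanism behind cache-friendly IOPS gains."""

    def __init__(self, backend_read, capacity=1024):
        self.backend_read = backend_read
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)     # slow path: actual disk access
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data


def slow_disk(block_id):
    time.sleep(0.001)                          # simulate disk latency
    return b"data-" + str(block_id).encode()


cache = RamReadCache(slow_disk, capacity=10)
for _ in range(100):
    cache.read(7)                              # repeated reads of one block
assert cache.misses == 1 and cache.hits == 99  # only the first read hit the disk
```

A write-heavy or purely random workload would see far less benefit, which is why the reviewer’s caveat about cache-friendly workloads matters.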

...The product is solid. During my tests I had no issues whatsoever with the DataCore software: no install shenanigans, no strange quirks and no unexpected behaviors. Everything went as smoothly as it could, and performance was always consistent, even when the DataCore nodes were virtualized under VMware.

...Even if I’m not too fond of Windows as a storage platform (although EMC proved me wrong), I must admit that if the storage layer is solid you can achieve great results, as is the case with DataCore. And even though I’d really like to see the product bundled with a stripped-down version of Windows Server (to maximize physical resource utilization), if you’re in the market for a good software-based storage virtualization solution, DataCore is definitely something to put on your shortlist.

Monday, 10 October 2011

Cloud Storage Partnership: DataCore and TwinStrata

http://www.mspmentor.net/2011/10/05/cloud-storage-patnership-datacore-and-twinstrata/

DataCore Software and TwinStrata are connecting the dots between storage virtualization and cloud storage. Specifically, DataCore’s storage virtualization software will now come bundled with a free TwinStrata CloudArray virtual appliance. How do managed service providers potentially benefit? Here are some thoughts.

The new solution allows enterprises to optimize their on-site data and leverage a pay-as-you-go storage center in a cloud environment, the companies say. DataCore and TwinStrata have been strategic partners for a number of years, according to DataCore Director of Product Marketing Augie Gonzalez. And according to DataCore President, CEO and Co-Founder George Teixeira, the partnership with TwinStrata is really the secondary news here.

“This announcement is not so much about TwinStrata. The bigger picture is that DataCore has been focused on managing and tiering diverse devices,” explained Teixeira. “We are allowing the consumer to have hardware interchangeability. And on a higher level, our architecture lets you get the most out of each level and get better performance out of any device.”

According to Teixeira, the DataCore-TwinStrata solution will “feel” like you’re using on-premise iSCSI disks, but the storage will actually reside with a cloud provider of the customer’s choice. That means less-critical tiered storage, data backups and archives can grow on pay-as-you-go tiers, with savings in space, power, cooling and, ultimately, money, the companies claim.

“Our auto-tiering function lets you promote or demote the storage or data to whatever makes sense economically or whatever meets storage needs,” Teixeira continued. “The one thing that was missing was a low cost storage from a cloud provider that you can get pretty simply today as a pay-as-you-go storage model. So we’ve added a function.”

According to Gonzalez, DataCore reviewed more than 100 different cloud solutions on the market before deciding on TwinStrata’s CloudArray. “They have an outstanding engineering and support staff,” Gonzalez said. “MSPs are trying to offer a complete on-premise and off-premise cloud model. To do that today, they would have to go to a fragmented approach. We are bringing to market a uniform way to do this.”

Saturday, 8 October 2011

The Boss Just Asked Me – “Why aren't we using Cloud Storage?”

The US government (the big boss) has enacted a ‘Cloud First’ policy, which is easy to state but has proven more difficult to do. Likewise, CIOs (IT bosses) and businesses of all types are dealing with the challenge of how to effectively use ‘pay-as-you-go’ Cloud capabilities to save money and improve productivity.

One especially critical pain point in IT is the growth of data storage and its cost (upfront capital expenditures, space, power, cooling, maintenance, etc.). IT managers who have too much work and too little time are being forced to look for new ways to get the most out of their on-premise storage assets while exploring ways to integrate new Cloud models. The bosses want more...

What to do? Gartner’s Gene Ruth, research director and senior storage analyst, recently published a set of key findings and recommendations on Cloud storage in a report titled “Use Heterogeneous Storage Virtualization as a Bridge to the Cloud.” The opening page of the report makes an interesting statement: “Moving a storage infrastructure into a cloud paradigm can be a struggle for data center operators. The use of a heterogeneous storage virtualization solution provides a transition path that preserves investments and adds cloud operating semantics."

Storage virtualization and bridges all sound good, but the real question is: how can we do it practically?

The simple answer: it has to be easy, non-disruptive to operations and cost-effective. DataCore just announced it has extended its storage hypervisor tiering to Cloud storage, and of course we believe the combination of storage virtualization and a Cloud gateway makes for a practical and compelling answer. The solution empowers users to manage, offload and augment on-premise storage environments with space- and power-saving storage located in the cloud.

For more background on the issue, the article in Cloud Computing by Nicos Vekiarides, a real Cloud expert, is a good read, outlining how storage virtualization software, smart management to tier storage, and a seamless Cloud gateway can let companies gain the advantages of Cloud storage. You can read the full article here –

Cloud Computing: Combining Storage Virtualization with Unprecedented Cloud Storage Flexibility


The business case for storage virtualization software isn’t new. System administrators have always sought best-of-breed data storage from a choice of storage vendors and a centralized management framework. However, the business case for storage virtualization is currently even more profound for several key reasons.
  1. Now, with a variety of data storage devices available today, ranging from high-performance SSD and Flash cards to low-cost SATA arrays, a seamless way to automatically tier data across the diversity of storage devices and purpose-built arrays can maximize utilization and cost-efficiency.
  2. Moreover, hardware interchangeability and auto-tiering empower consumers with greater cost controls and buying power.
  3. In addition, there are numerous benefits to an enterprise-class software stack that includes data replication and disaster recovery in a footprint that persists across storage system upgrades – with no need to ever change data management interfaces and policies.

Read the full article.

Thursday, 6 October 2011

More Coverage on DataCore Extends Storage Hypervisor Tiering to Cloud Storage

Combining Storage Virtualization with Unprecedented Cloud Storage Flexibility
http://cloudcomputing.sys-con.com/node/2007782


The business case for storage virtualization software isn’t new. System administrators have always sought best of breed data storage from a choice of storage vendors and a centralized management framework. With a variety of data storage devices available today, ranging from high-performance SSD and Flash cards to low-cost SATA arrays, a seamless way to automatically tier data across the diversity of storage devices and purpose-built arrays can maximize utilization and cost-efficiency. Hardware interchangeability and auto-tiering empower consumers with greater cost controls and buying power. In addition, there are numerous benefits to an enterprise-class software stack that includes data replication and disaster recovery in a footprint that persists across storage system upgrades, with no need to ever change data management interfaces and policies.

DataCore SANsymphony-V is offered by the pioneer and established leader in the storage virtualization segment and is the industry’s first storage hypervisor to provide all of the above benefits and much more. This week, DataCore’s storage virtualization software added the flexibility of cloud storage to its list of features with the announcement that every copy of SANsymphony-V now bundles a TwinStrata CloudArray virtual appliance. The combination enables companies to achieve true ‘open market’ buying power across their portfolio of storage investments; TwinStrata extends the cost-saving value proposition to a broad and impressive selection of public and private cloud storage providers. Read More...


InfoStor: DataCore Extends SANsymphony-V to the Cloud
http://www.infostor.com/backup-and_recovery/cloud-storage/datacore-extends-sansymphony-v-to-the-cloud.html

DataCore Software announced Tuesday that it has enhanced its SANsymphony-V storage hypervisor to enable users to automatically store less-important data in the cloud, leaving more space on local storage infrastructure for high-priority data.

The SANsymphony-V storage hypervisor provides a comprehensive set of storage control and monitoring functions that form a transparent, virtual layer across consolidated disk pools.

It manages and optimizes different levels of storage, including memory, solid-state drives (SSD), and hard-disk drives (HDD) so that the most appropriate data is stored in the most appropriate tiers -- and now SANsymphony-V also works with cloud storage tiers, according to DataCore.

"Basically, DataCore's SANsymphony-V storage hypervisor has a function called auto-tiering which can set and prioritize data storage tiers ... "The problem, until now, [has been the lack of] the ease in doing this and storage costs ... The cloud is an ideal option because of its low storage costs"...


Enterprise Systems: DataCore Extends Storage Hypervisor Tiering to Cloud Storage

http://esj.com/articles/2011/10/04/datacore-storage-hypervisor.aspx

Architecture manages, optimizes tiers of storage; provides cloud gateway for backup, archive, and low-cost, off-premise tiered cloud storage.

DataCore Software has extended its storage hypervisor architecture with functionality that empowers enterprise customers to optimize the utilization and performance of their on-site assets and take full advantage of low cost, virtually unlimited “pay-as-you-go” storage in the cloud. The DataCore SANsymphony-V storage hypervisor now integrates seamlessly with the CloudArray cloud storage gateway from TwinStrata, a cloud-based data storage provider, allowing a simple, transparent, and cost-effective way to manage, offload, and augment on-premise storage environments with space- and power-saving storage located in the cloud.


Computer Technology Review: DataCore extends storage hypervisor tiering to provide enterprises with cloud storage
http://www.wwpi.com/index.php?option=com_content&view=article&id=13750:datacore-extends-storage-hypervisor-tiering-to-provide-enterprises-with-cloud-storage&catid=320:breaking-news&Itemid=2701739


WebHost: Storage Virtualization Provider DataCore Adds TwinStrata CloudArray to Software Package

http://www.thewhir.com/web-hosting-news/100411_Storage_Virtualization_Provider_DataCore_Adds_TwinStrata_CloudArray_to_Software_Package

“SANsymphony-V was designed to help customers get the maximum performance, availability and utilization from the storage devices in their data centers,” Carlos Carreras, VP of alliances at DataCore said in a statement. “By bundling the TwinStrata CloudArray appliance with our SANsymphony-V, we give our customers even more flexibility and greater cost control by enabling them to include cloud storage in their virtual storage architecture.”

According to the press release, the bundle includes a 1TB TwinStrata CloudArray virtual appliance (a $2,995 value), data compression/deduplication, at-rest/in-flight encryption, snapshotting, disaster recovery, bandwidth optimization and scheduling, and a choice of cloud service providers.

The package is also upgradable to handle petabytes of data and includes 30 days of free cloud storage.


SNSeurope: TwinStrata and DataCore deliver industry-first storage virtualization to incorporate Cloud storage

DataCore, the premier provider of storage virtualization software, is bundling a free, one-terabyte TwinStrata CloudArray virtual appliance with the DataCore SANsymphony™-V storage hypervisor software to provide a fast and secure way for organizations to scale their storage infrastructure more cost effectively using cloud storage.

http://www.snseurope.com/news_full.php?id=19700


“TwinStrata and DataCore are making it possible for users to easily include cloud storage as a transparent extension of their existing data center storage environment,” said Terri McClure, senior analyst at Enterprise Strategy Group. “This gives companies greater control over storage performance and helps control costs by making it easy to tier their storage across the absolute widest range of resources, from high-performance SSD arrays and legacy disk arrays, to scalable, cost-effective cloud storage.”


ChannelEMEA: DataCore provides seamless access to cost effective Cloud Storage


DataCore’s SANsymphony-V storage hypervisor is unique in its ability to provide an architecture that manages, optimises and spans all the different price-points and performance levels of electronic and mechanical storage, including electronic memory, SSDs, disk devices and cloud storage tiers. Amongst its comprehensive features is advanced automated tiering of disks. This option automatically relocates disk blocks among pools of different storage devices, keeping demanding workloads operating at peak speeds. Less critical and infrequently accessed disk blocks naturally gravitate towards lower cost disks.
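The auto-tiering behaviour described above (demanding blocks promoted to fast storage, infrequently accessed blocks demoted to lower-cost capacity) can be sketched roughly as follows; the tier names, capacity limit and frequency-based policy are invented for illustration:

```python
from collections import Counter


class AutoTieringPool:
    """Sketch: relocate blocks between a small fast (SSD) tier and a
    large cheap (capacity) tier based on access frequency, as an
    auto-tiering engine might."""

    def __init__(self, ssd_capacity=2):
        self.ssd, self.hdd = {}, {}
        self.hits = Counter()
        self.ssd_capacity = ssd_capacity

    def read(self, block_id):
        self.hits[block_id] += 1
        return self.ssd.get(block_id, self.hdd.get(block_id))

    def write(self, block_id, data):
        self.hits[block_id] += 1
        (self.ssd if block_id in self.ssd else self.hdd)[block_id] = data

    def rebalance(self):
        # Promote the hottest blocks to the SSD tier; demote the rest.
        hottest = {b for b, _ in self.hits.most_common(self.ssd_capacity)}
        for tier_from, tier_to, keep in ((self.hdd, self.ssd, True),
                                         (self.ssd, self.hdd, False)):
            for b in [b for b in tier_from if (b in hottest) == keep]:
                tier_to[b] = tier_from.pop(b)


pool = AutoTieringPool(ssd_capacity=1)
pool.write("cold", b"a")
pool.write("hot", b"b")
for _ in range(5):
    pool.read("hot")                # "hot" accumulates far more accesses
pool.rebalance()
assert "hot" in pool.ssd and "cold" in pool.hdd
```

A production engine works on sub-volume block extents, rebalances continuously rather than on demand, and weighs cost and performance targets per tier, but the promote/demote cycle is the same idea.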

With the transparent integration of CloudArray, DataCore customers use what they believe are on-premise iSCSI disks whose contents are located remotely on a cloud storage provider of their choice. This allows less critical tiered storage, backups and archive data to be cost-effectively placed in cloud storage that can grow almost limitlessly and with pay-as-you-go tiers, while offering additional savings in space, power and cooling.

Additional features include:

Seamless integration between CloudArray and SANsymphony-V in minutes;
Easily downloadable virtual software appliance;
Data deduplication and compression;
Data backup and recovery – on-site, off-site or in the cloud;
Secure AES 256-bit encryption for both at-rest and in-flight data protection;
Disk caching to boost performance and enhance speed-matching to off-site cloud storage;
Bandwidth optimisation and scheduling controls;
Snapshotting;
Choice of cloud storage providers, including such leaders as Amazon Web Services, AT&T Synaptic Storage, Windstream Hosted Solutions, PEER 1 Hosting, Nirvanix and OpenStack.


Blog Posts:

Combining Storage Virtualization with Unprecedented Cloud Storage Flexibility

DataCore Extends Storage Hypervisor Tiering to Provide Enterprises With Seamless Access to Cost Effective Cloud Storage


DataCore adds support for cloud as storage tier

DataCore Extends Storage Hypervisor Tiering to Provide Enterprises With Seamless Access to Cost Effective Cloud Storage

http://www.enterprisecommunicate.com/datacore-extends-storage-hypervisor-tiering-to-provide-enterprises-with-seamless-access-to-cost-effective-cloud-storage/

Offers Comprehensive Architecture to Manage and Optimize All Tiers of Storage. Partnership with TwinStrata Integrates Cloud Gateway for Backup, Archive and Low Cost Off-Premise Tiered Cloud Storage.


DataCore Software today announced it has extended its storage hypervisor architecture with functionality that empowers enterprise customers to optimize the utilization and performance of their on-site assets and take full advantage of low cost, virtually unlimited ‘pay as you go’ storage in the cloud. The DataCore™ SANsymphony™-V storage hypervisor now integrates seamlessly with the CloudArray® cloud storage gateway from TwinStrata, the leading innovator of cloud-based data storage, allowing a simple, transparent and cost-effective way to manage, offload and augment on-premise storage environments with space and power saving storage located in the cloud.

“Moving a storage infrastructure into a cloud paradigm can be a struggle for data center operators. The use of a heterogeneous storage virtualization solution provides a transition path that preserves investments and adds cloud operating semantics,” said Gene Ruth, research director and senior storage analyst for Gartner. “This particular solution combines storage virtualization, tiering of diverse devices and a bridge capability, allowing organizations to choose and offload less critical storage, backup and archive data onto cloud storage providers.”




Wednesday, 5 October 2011

TwinStrata and DataCore Deliver Industry-First Storage Virtualization Solution to Incorporate Cloud Storage

http://www.stockrants.com/2011/10/04/twinstrata-and-datacore-deliver-industry-first-storage-virtualization-solution-to-incorporate-cloud-storage.html

“TwinStrata and DataCore are making it possible for users to easily include cloud storage as a transparent extension of their existing data center storage environment,” said Terri McClure, senior analyst at Enterprise Strategy Group. “This gives companies greater control over storage performance and helps control costs by making it easy to tier their storage across the absolute widest range of resources, from high-performance SSD arrays and legacy disk arrays, to scalable, cost-effective cloud storage.”

“SANsymphony-V was designed to help customers get the maximum performance, availability and utilization from the storage devices in their data centers,” said Carlos Carreras, VP of alliances at DataCore. “By bundling the TwinStrata CloudArray appliance with our SANsymphony-V, we give our customers even more flexibility and greater cost control by enabling them to include cloud storage in their virtual storage architecture.”

“DataCore storage virtualization software enables us to maximize storage pool resource utilization to ensure high performance and availability for our clients,” said Jason Schuerhoff, vice president of sales at Sublime Solution, a global IT reseller and services company specializing in virtualization solutions. “Using it with CloudArray to integrate cloud storage into the mix will enable us to create some exciting new storage configurations that further reduce costs and increase storage scalability for customers.”

Friday, 30 September 2011

Storage Hypervisor Coverage in IT Business Edge: Is Storage Virtualization for Real? DataCore Says Yes

http://www.itbusinessedge.com/cm/community/features/interviews/blog/is-storage-virtualization-for-real-datacore-says-yes/?cs=48680

by Arthur Cole, IT Business Edge


Arthur Cole spoke with Augie Gonzalez, director of product marketing for DataCore.

The concept of storage virtualization has been drawing a lot of heat lately. Can you, in fact, virtualize a resource that can only accommodate a finite amount of data at any given time? Server virtualization proved so successful because most enterprises were sitting on a lot of untapped capability. A storage cell is either occupied or it's not. End of story. Yet many firms continue to tout storage virtualization, with DataCore even going so far as to describe itself as a "storage hypervisor company." The company's Augie Gonzalez explains the meaning behind the phrase.

“Many of the bottlenecks encountered in virtual environments today can be directly attributed to the mechanical characteristics of spinning disk drives. We’ve found two effective ways to address the performance and resulting cost problems that they create.” - Augie Gonzalez

Cole: Storage has always been the slow-poke in the modern data environment, a problem that only seems to have accelerated now that everything is going virtual. What are some of the best ways to improve storage performance in virtual environments short of a wholesale rebuild?
Gonzalez: Many of the bottlenecks encountered in virtual environments today can be directly attributed to the mechanical characteristics of spinning disk drives. We’ve found two effective ways to address the performance and resulting cost problems that they create. Both are made possible by having our storage hypervisor intercept disk requests in an infrastructure-wide software layer slotted between the server hypervisors and the disk subsystems. From that unique vantage point we have visibility to all read-and-write requests generated by the cluster of virtual machines across the pool of disks. We can then adaptively cache the requests in very fast electronic memories to yield a many-fold performance boost.

The second turbo-charging benefit comes from sensing the most demanding disk block requests and automatically directing them to the fastest tier in the pool according to priorities established by the customer. The tiers range from high-speed Solid State Disk (SSD) flash cards all the way down to inexpensive internal Serial Advanced Technology Attachment (SATA) drives. The pool incorporates the diverse set of storage resources available to the data center, rather than what a manufacturer may have chosen to package in a disk array enclosure.
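The first mechanism Gonzalez describes, adaptively caching disk requests in fast memory, can be sketched with a simple LRU read cache. This is a generic illustration of the idea, not DataCore's caching algorithm, which is far more elaborate.

```python
from collections import OrderedDict

class BlockReadCache:
    """Minimal LRU read cache sitting between hosts and a slow disk path."""

    def __init__(self, backend_read, capacity=4096):
        self.backend_read = backend_read  # slow path: callable(lba) -> bytes
        self.capacity = capacity
        self.cache = OrderedDict()        # lba -> data, in recency order
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)   # mark as most recently used
            self.hits += 1
            return self.cache[lba]
        self.misses += 1
        data = self.backend_read(lba)     # fall through to the spinning disk
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data
```

Because repeated reads of a hot block are served from memory, the mechanical latency of the disk only shows up on the first touch, which is where the "many-fold performance boost" comes from.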

Cole: Some people have questioned your use of the term "storage hypervisor" to describe the new SANsymphony-V platform. What's your take?
Gonzalez: Controversy generates awareness, so we encourage the IT community to weigh in. Such healthy dialogues help expand the lexicon of technologists to encompass inventions while drawing parallels from familiar concepts.

Visualize a stack of rich, hardware-independent services spread across the IT environment. You’ll immediately spot three hypervisor layers. Most IT pros are well-versed with the server hypervisor floating above server-class machines. Those with a virtual desktop infrastructure (VDI) slant will clearly pick out the desktop hypervisor atop a collection of thin clients and thick PCs. The DataCore storage hypervisor soars above the diverse disk infrastructure to supplement device-level capabilities with extended provisioning, automated storage tiering, replication and performance acceleration services. It even offers a seamless ramp to the cloud for low price storage of less critical data.

Cole: Is it possible, then, for storage architectures to gain the same degree of fluidity and dynamism that virtualization brings to servers and networking?
Gonzalez: It’s not just possible; that’s exactly what SANsymphony-V customers experience day in and day out. As importantly, they realize the collective value of their storage resources in ways that they could not when used in isolation. The results can be more concretely measured in terms of higher availability, faster performance and maximum disk utilization.

The storage hypervisor also removes device-specific compatibility constraints, which had tied the buyer’s hands. Their newfound interchangeability grants customers the clout to negotiate the best value from among competing suppliers at every capacity expansion, hardware refresh and maintenance renewal event. Pretty compelling, don’t you think?

Thursday, 29 September 2011

Storage Hypervisors Mean High Availability for Virtual Desktops

Back in March I wrote about VMware’s conviction that 2011 is the year when desktop virtualization gets real, and Jon Toigo’s assertion that the big roadblock for a Virtual Desktop Infrastructure is the cost of the storage needed. Our partner Pano Logic found a related one: high availability. How they cleared this roadblock and boosted performance at the same time is a good illustration of how DataCore’s SANsymphony-V storage hypervisor works as a team player to solve virtualization problems.

Pano Logic offers a Zero Client Computing desktop virtualization infrastructure for VMware, Microsoft Hyper-V, Citrix XenDesktop, and other hypervisors. Figure 1 shows the Pano Logic endpoint: just attach a keyboard, one or two monitors, and a mouse, connect to a back-end server running the Pano Logic software, and up comes your desktop.

Of course, that assumes you have a back-end infrastructure to connect to, and the expertise to configure and run it. Pano Logic found that for many customers, such as small businesses and even enterprise branch offices and departments, this wasn’t a given. So the company packaged up a complete virtual desktop infrastructure, from servers to endpoints, and offered it as Pano Express. All the user has to do is unpack some boxes, follow the color-coded instructions, and push a button. It’s easier than most home theater setups.

Pano Express, which supports up to thirty users, is a hit, especially in the SMB and educational markets. But the company found that enterprise branch office and departmental customers more often demanded 100% uptime, 24x7, as well. Downtime would have serious financial consequences.

Pano Logic knew that storage virtualization was the key to high availability. Storage is the heart of a VDI. In fact, Toigo describes a virtual desktop as the ability to “call a desktop image up on demand, do some work, and then park it back on disk.” If your storage is highly available, your virtual desktops are as well. That’s one role of a storage hypervisor, whether built into a SAN or furnished as software for heterogeneous storage infrastructures.

Rather than ask its customers to amortize an expensive SAN across thirty desktops to supply that high availability, Pano Logic partnered with DataCore Software. Their new Pano Express HA virtual desktop infrastructure incorporates the DataCore SANsymphony-V storage hypervisor along with VMware software to provide real-time virtual machine replication, assuring that server failure causes no downtime. SANsymphony-V also increased virtual desktop performance for Pano Logic. With the storage hypervisor’s high-speed cache, Pano Express HA can support up to 60 users.

SANsymphony-V’s synchronous replication capabilities can unite any combination of disk storage devices into a single virtual storage pool maintained across two active mirrored servers. If either half of the storage infrastructure is taken down for maintenance or fails, the other server picks up the load without any user impact. Resynchronization is automatic, non-disruptive, and requires no special host scripts.
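The synchronous-mirroring behaviour described above can be modelled in a few lines. This toy version only captures the contract, a write completes on every healthy node before it is acknowledged, so either node alone can serve reads after a failure; the failover and resynchronisation details are simplified assumptions, not SANsymphony-V's design.

```python
class MirroredVolume:
    """Toy two-node synchronous mirror."""

    def __init__(self):
        self.nodes = [{}, {}]      # two mirrored block maps
        self.alive = [True, True]

    def write(self, lba, data):
        for i, node in enumerate(self.nodes):
            if self.alive[i]:
                node[lba] = data   # commit to every healthy node...
        # ...and only then acknowledge the write to the host

    def read(self, lba):
        for i, node in enumerate(self.nodes):
            if self.alive[i]:
                return node[lba]   # any surviving node can serve the read
        raise RuntimeError("no surviving mirror")

    def fail_node(self, i):
        self.alive[i] = False      # maintenance outage or hardware failure

    def resync(self, i):
        """Bring a repaired node back: copy from the healthy peer, rejoin."""
        peer = self.nodes[0 if i else 1]
        self.nodes[i] = dict(peer)
        self.alive[i] = True
```

Writes made while a node is down exist only on its peer, which is why the rejoining node must copy before it resumes serving, the automatic, non-disruptive resynchronisation the paragraph above describes.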

This is just one example of how DataCore’s storage hypervisor is helping a growing number of partners solve specific virtualization problems and deliver greater value to their customers. In future posts, I’ll share more such stories, to illustrate the wide range of benefits available from storage virtualization delivered by a software solution that works with any storage hardware.

Wednesday, 21 September 2011

DataCore Storage Hypervisor [Video Interview]

http://www.youtube.com/watch?v=XCGp2wtqSFM

George Teixeira, CEO of DataCore, explains why DataCore "created" a storage hypervisor, and why auto-tiering, the latest feature, is better implemented in a software layer.

Tuesday, 20 September 2011

Pano Logic Incorporates DataCore Storage Hypervisor in its 'one step VDI' Pano Express HA solution

Pano Logic Hits SMB Sweet Spot with Two Zero-Client Packages

Virtualization and zero-client company Pano Logic is hitting the scene with two new all-inclusive virtualization packages: servers, clients and software, all pre-packaged, pre-configured, bundled and ready to go right out of the box. They represent the next evolution of Pano Logic’s Pano Express, specifically geared for the SMB and remote offices. Pano Express SMB and Pano Express HA are built to support 30 and 60 power users respectively, in addition to the zero-client endpoints. Pano Logic calls them a “one-step VDI solution” since all the software and servers are preloaded with VMware vSphere and Microsoft Windows Server 2008. But the HA version stands out in that it will allow for a storage protection layer via DataCore Software for VDI failover.

Based on Pano Zero Clients with VMware vSphere Essentials and DataCore’s SANsymphony™-V storage hypervisor software running in a redundant fault-tolerant configuration on enterprise-grade servers.

More info at: http://www.panologic.com/pano-express-ha


Pano Logic Simplifies VDI Deployments


...The HA version includes a Fujitsu blade server and also includes DataCore Software's SANsymphony-V storage management software and a redundant storage array to handle failovers and high availability situations.

"While a number of joint partners have deployed our solutions in the past, the new bundle simplifies the process for both the partner and the customer," said Carlos Carreras, the VP of alliances at Datacore. "By delivering the HA bundle as an appliance, the user experience is simplified and they are able to immediately take advantage of the benefits offered."


Pano Logic woos SMBs with 'one-step' VDI


...The company introduced Pano Express SMB and Pano Express HA, servers that arrive with all the software you need to operate those little silver cubes, including Microsoft Windows Server 2008 and VMware vSphere 4 virtualization. The HA incarnation – that's "high availability" – also offers virtual storage software from DataCore that provides replication and failover.

Monday, 12 September 2011

Defining a Storage Hypervisor

After reading Sid Herron's recent blog post (see below), I thought it would be useful to share his post and additional information links on the topic.

Storage Hypervisor Definitions:
  1. Wikipedia: Wikipedia entry on Storage Hypervisor
  2. Wiktionary: Wiktionary entry on Storage Hypervisor
  3. DataCore: Introducing the DataCore Storage Hypervisor and the Benefits of Hardware Interchangeability
  4. DataCore's Storage Hypervisor Definition and Features-at-a-glance
  5. Mooselogic's Sid Herron post:

What the Heck Is a “Storage Hypervisor?”

DataCore has recently issued a press release positioning their new software release (v8.1) of SANsymphony-V as a “storage hypervisor.” On the surface, that may just sound like some nice marketing spin, but the more I thought about it, the more sense it made – because it highlights one of the major differences between DataCore’s products and most other SAN products out there.

To understand what I mean, let’s think for a moment about what a “hypervisor” is in the server virtualization world. Whether you’re talking about vSphere, Hyper-V, or XenServer, you’re talking about software that provides an abstraction layer between hardware resources and operating system instances. An individual VM doesn’t know – or care – whether it’s running on an HP server, a Dell, an IBM, or a “white box.” It doesn’t care whether it’s running on an Intel or an AMD processor. You can move a VM from one host to another without worrying about changes in the underlying hardware, BIOS, drivers, etc. (Not talking about “live motion” – that’s a little different.) The hypervisor presents the VM with a consistent execution platform that hides the underlying complexity of the hardware.

So, back to DataCore. Remember that SANsymphony-V is a software application that runs on top of Windows Server 2008 R2. In most cases, people buy a couple of servers that contain a bunch of local storage, install 2008 R2 on them, install SANsymphony-V on them, and turn that bunch of local storage into full-featured iSCSI SAN nodes. (We typically run them in pairs so that we can do synchronous mirroring of the data across the two nodes, such that if one node completely fails, the data is still accessible.) But that’s not all we can do.

Because it’s running on a 2008 R2 platform, it can aggregate and present any kind of storage the underlying Server OS can access at the block level. Got a Fibre Channel SAN that you want to throw into the mix? Great! Put Fibre Channel Host Bus Adapters (HBAs) in your DataCore nodes, present that storage to the servers that SANsymphony-V is running on, and now you can manage the Fibre Channel storage right along with the local storage in your DataCore nodes. Got some other iSCSI SAN that you’d like to leverage? No problem. Just make sure you’ve got a couple of extra NICs in the DataCore nodes (or install iSCSI HBAs if you want even better performance), present that iSCSI storage to the DataCore nodes, and you can manage it as well. You can even create a storage pool that crosses resource boundaries! And now, with the new auto-tiering functionality of SANsymphony-V v8.1, you can let DataCore automatically migrate the most frequently accessed data to the highest-performing storage subsystems.
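Herron's pool-that-crosses-resource-boundaries idea can be sketched abstractly: several unrelated backends (local disk, an iSCSI SAN, a Fibre Channel SAN) sit behind one virtual block address space. The backend names and the round-robin allocator here are illustrative assumptions, not DataCore's actual placement logic.

```python
class StoragePool:
    """Toy pool spanning heterogeneous backends behind one virtual volume."""

    def __init__(self, backends):
        self.backends = backends  # name -> dict-like block store
        self.vmap = {}            # virtual lba -> (backend name, physical lba)
        self._next = 0

    def write(self, vlba, data):
        if vlba not in self.vmap:
            # Allocate across backends round-robin, regardless of whether
            # the backend is local disk, iSCSI or Fibre Channel.
            names = list(self.backends)
            name = names[self._next % len(names)]
            self._next += 1
            self.vmap[vlba] = (name, vlba)
        name, plba = self.vmap[vlba]
        self.backends[name][plba] = data

    def read(self, vlba):
        name, plba = self.vmap[vlba]
        return self.backends[name][plba]
```

The host only ever sees the virtual addresses; which vendor's box actually holds a block is the pool's business, which is exactly the abstraction that makes the "storage hypervisor" label fit.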

Or how about this: You just bought a brand new storage system from Vendor A to replace the system from Vendor B that you’ve been using for the past few years. You’d really like to move Vendor B’s system to your disaster-recovery site, but Vendor A’s product doesn’t know how to replicate data to Vendor B’s product. If you front-end both vendors’ products with DataCore nodes, the DataCore nodes can handle the asynchronous replication to your DR site. Alternatively, maybe you bought Vendor A’s system because it offered higher performance than Vendor B’s system. Instead of using Vendor B’s product for DR, you can present both systems to SANsymphony-V and leverage its auto-tiering feature to automatically ensure that the data that needs the highest performance gets migrated to Vendor A’s platform.

So, on the back end, you can have disparate SAN products (iSCSI, fibre channel, or both) and local storage (including “JBOD” expansion shelves), and a mixture of SSD, SAS, and SATA drives. The SANsymphony-V software masks all of that complexity, and presents a consistent resource – in the form of iSCSI virtual volumes – to the systems that need to consume storage, e.g., physical or virtual servers.

That really is analogous to what a traditional hypervisor does in the server virtualization world. So it is not unreasonable at all to call SANsymphony-V a “storage hypervisor.” In fact, it’s pretty darned clever positioning, and I take my hat off to the person who crafted the campaign.

Friday, 2 September 2011

DataCore to Offer Free Storage Hypervisor to VMware Experts

http://vmblog.com/archive/2011/08/25/vmworld-2011-datacore-to-offer-free-storage-hypervisor-to-vmware-vexperts-vmware-certified-design-experts-vmware-certified-professionals-and-vmware-certified-instructors.aspx

DataCore Software, the industry’s premier provider of storage virtualization software, announced today that during VMworld 2011 it will offer free license keys of its breakthrough SANsymphony™-V 8.1 storage hypervisor to VMware vExperts, VMware Certified Design Experts (VCDX), VMware Certified Professionals (VCP) and VMware Certified Instructors (VCI). The not-for-resale (NFR) license keys – available for non-production uses such as course development, training, lab testing and demonstration purposes – are intended to support virtualization consultants, instructors and architects involved in efforts aimed at managing and fully leveraging storage assets.

“Storage has been the forgotten tier of virtualization, routinely stalling projects and adding unforeseen complications and costs,” said Linda Haury, vice president worldwide marketing at DataCore Software. “By providing the SANsymphony-V storage hypervisor to these experts for home and office labs, they’ll be able to educate others about the most effective means for leveraging storage assets, a move which will advance virtualization adoption and benefit the industry as a whole.”

The free NFR license keys of SANsymphony-V 8.1 are available during VMworld, being held August 29 – September 1 at the Venetian in Las Vegas. The licenses may be used for non-production purposes only.

To receive a free license key, please visit DataCore’s booth (#1321) or sign up next week at: www.datacore.com/freeNFR; proof of VMware vExpert, VCDX, VCP or VCI certification is required.

DataCore to offer free storage hypervisor to VMware Experts

http://www.wwpi.com/index.php?option=com_content&view=article&id=13600:datacore-to-offer-free-storage-hypervisor-to-vmware-vexperts&catid=320:breaking-news&Itemid=2701739


DataCore introduces SANsymphony-V Storage Hypervisor

http://www.virtualizationworld365.net/news_full.php?id=19287&title=DataCore-introduces-SANsymphony-V-Storage-Hypervisor

DataCore Software has announced major enhancements to its centrally-managed SANsymphony™-V solution. Combined, these new features elevate SANsymphony-V to the role of storage hypervisor, placing customers in the unique position to fully leverage their existing storage assets, including direct-attached storage (DAS), storage area networks (SAN) and solid state disks (SSD), and negotiate the best deals among competing storage manufacturers without concern for long-standing hardware vendor lock-ins.

http://www.snseurope.com/news_full.php?id=19287&title=DataCore-introduces-SANsymphony-V-Storage-Hypervisor

“The sprawl and multiplication of storage systems and the rising number of specialty devices are now the norm. The intelligence of a storage hypervisor provides a strategic advantage to managing the many as one,” said George Teixeira, president and CEO of DataCore Software. “SANsymphony-V enables users to take control of how their storage infrastructure evolves versus being subject to the dictates of tactical point-in-time decisions.”

Saturday, 27 August 2011

What the Heck Is a “Storage Hypervisor?”

http://www.mooselogic.com/blog/what-the-heck-is-a-storage-hypervisor

Our friends at DataCore ran a press release yesterday positioning the new release (v8.1) of SANsymphony-V as a “storage hypervisor.” On the surface, that may just sound like some nice marketing spin, but the more I thought about it, the more sense it made – because it highlights one of the major differences between DataCore’s products and most other SAN products out there.

To understand what I mean, let’s think for a moment about what a “hypervisor” is in the server virtualization world. Whether you’re talking about vSphere, Hyper-V, or XenServer, you’re talking about software that provides an abstraction layer between hardware resources and operating system instances. An individual VM doesn’t know – or care – whether it’s running on an HP server, a Dell, an IBM, or a “white box.” It doesn’t care whether it’s running on an Intel or an AMD processor. You can move a VM from one host to another without worrying about changes in the underlying hardware, BIOS, drivers, etc. (Not talking about “live motion” – that’s a little different.) The hypervisor presents the VM with a consistent execution platform that hides the underlying complexity of the hardware.

So, back to DataCore. Remember that SANsymphony-V is a software application that runs on top of Windows Server 2008 R2. In most cases, people buy a couple of servers that contain a bunch of local storage, install 2008 R2 on them, install SANsymphony-V on them, and turn that bunch of local storage into full-featured iSCSI SAN nodes. (We typically run them in pairs so that we can do synchronous mirroring of the data across the two nodes, such that if one node completely fails, the data is still accessible.) But that’s not all we can do.

Because it’s running on a 2008 R2 platform, it can aggregate and present any kind of storage the underlying Server OS can access at the block level. Got a Fibre Channel SAN that you want to throw into the mix? Great! Put Fibre Channel Host Bus Adapters (HBAs) in your DataCore nodes, present that storage to the servers that SANsymphony-V is running on, and now you can manage the Fibre Channel storage right along with the local storage in your DataCore nodes. Got some other iSCSI SAN that you’d like to leverage? No problem. Just make sure you’ve got a couple of extra NICs in the DataCore nodes (or install iSCSI HBAs if you want even better performance), present that iSCSI storage to the DataCore nodes, and you can manage it as well. You can even create a storage pool that crosses resource boundaries! And now, with the new auto-tiering functionality of SANsymphony-V v8.1, you can let DataCore automatically migrate the most frequently accessed data to the highest-performing storage subsystems.

Or how about this: You just bought a brand new storage system from Vendor A to replace the system from Vendor B that you’ve been using for the past few years. You’d really like to move Vendor B’s system to your disaster-recovery site, but Vendor A’s product doesn’t know how to replicate data to Vendor B’s product. If you front-end both vendors’ products with DataCore nodes, the DataCore nodes can handle the asynchronous replication to your DR site. Alternatively, maybe you bought Vendor A’s system because it offered higher performance than Vendor B’s system. Instead of using Vendor B’s product for DR, you can present both systems to SANsymphony-V and leverage its auto-tiering feature to automatically ensure that the data that needs the highest performance gets migrated to Vendor A’s platform.

So, on the back end, you can have disparate SAN products (iSCSI, fibre channel, or both) and local storage (including “JBOD” expansion shelves), and a mixture of SSD, SAS, and SATA drives. The SANsymphony-V software masks all of that complexity, and presents a consistent resource – in the form of iSCSI virtual volumes – to the systems that need to consume storage, e.g., physical or virtual servers.

That really is analogous to what a traditional hypervisor does in the server virtualization world. So it is not unreasonable at all to call SANsymphony-V a “storage hypervisor.” In fact, it’s pretty darned clever positioning, and I take my hat off to the person who crafted the campaign.