Direct Line, Leading Car Insurance Company, Achieves Business Continuity & Disaster Recovery with DataCore's SANsymphony-V Storage Hypervisor
http://www.it-director.com/technology/storage/news_release.php?rel=28889
DataCore Software announced that premier German insurance company Direct Line Versicherung AG has implemented its business continuity and disaster recovery architecture on a storage infrastructure built on the DataCore SANsymphony™-V storage hypervisor. The high-speed, highly available synchronous mirroring functionality of the DataCore solution enables Direct Line to operate a backup and disaster recovery data centre located 40 kilometers away, in order to minimise downtime and ensure speedy recovery in case of a failure at the primary data centre. The combination of a storage hypervisor with server hypervisors provides a flexible, high-performance and fail-safe IT infrastructure that virtualises storage hardware with SANsymphony-V and server hardware with VMware vSphere.
"DataCore SANsymphony-V provides Direct Line a cost-effective and efficient business continuity solution supporting high-availability and disaster recovery across multiple sites, without neglecting performance or data protection aspects. The DataCore solution is a strategic component in our daily business,” says Heiko Teichmann, managing director at IT service provider Teserco and project manager on behalf of Direct Line Versicherung.
...Thanks to the hardware independence and efficient caching algorithms of the DataCore solution, Direct Line can not only continue to use its existing, cost-effective HP P2000 disk subsystems, but also remove the functional and performance limitations that would otherwise have ruled them out for future use. Around 75 terabytes of data are now managed across the main data centre in Teltow and the remote data centre in Berlin, which meets the requirements of the highest data protection category (Tier 4).
Stikeman Elliott LLP implements DataCore Software’s SANsymphony-V Storage Hypervisor
http://www.law.com/jsp/lawtechnologynews/PubArticleLTN.jsp?id=1202535402580&Stikeman_Elliot_Adopts_DataCores_SANsymphonyV&slreturn=1
DataCore Software announced that Stikeman Elliott LLP, one of Canada’s leading business law firms, with offices in Montreal, Toronto, Ottawa, Calgary and Vancouver as well as London, New York and Sydney, has deployed DataCore’s SANsymphony-V storage hypervisor at their Montreal and Toronto hubs – eliminating the need for traditional tape backups and providing much greater business continuity through high availability (HA) and improved disaster recovery (DR).
Stikeman Elliott’s IT team has also made great use of SANsymphony-V’s automated-tiering capability – including utilizing the cloud as yet another data storage tier.
One of the things that Marco Magini, network and systems administrator for Montreal-based Stikeman Elliott LLP, worries about most where storage is concerned is downtime. At one of the largest corporate law firms in the world, the cliché "time is money" is no joke, said Magini.
“Although we started out looking for a solution to a backup problem, we ended up with far more,” says Magini. “We started slowly, but as we became more familiar with the intelligence and stability of the DataCore storage hypervisor, we put more and more mission-critical systems on top of it and now have almost all of our systems behind it. We have every confidence that SANsymphony-V can handle anything we give it.”
Information, commentary and updates from Australia / New Zealand on virtualization, business continuity solutions, FC SAN, iSCSI, high-availability, remote replication, disaster recovery and storage virtualization and SAN management solutions.
Thursday, 29 December 2011
Saturday, 24 December 2011
DataCore Software Celebrates Holiday Season By Offering Free Storage Hypervisor To Microsoft Certified And System Center Professionals
The free NFR license keys of SANsymphony-V are available for the new year and this offer will expire on January 31, 2012. The licenses may be used for non-production purposes only and the software can be installed on both physical servers and Hyper-V virtual machines as a virtual storage appliance.
To receive a free license key, please sign up at: http://pages.datacore.com/nfr-for-experts.html. Proof of certification as a Microsoft Certified Architect (MCA), Microsoft Certified Master (MCM), Microsoft Certified IT Professional (MCITP), Microsoft Certified Professional Developer (MCPD) or Microsoft Certified Technology Specialist (MCTS) is required.
DataCore Software offers free Storage Hypervisor
http://www.it-director.com/technology/storage/news_release.php?rel=28967&ref=fd_ita_meta
DataCore Software announced that, in the spirit of the holiday season, it will offer free license keys of its SANsymphony™-V storage hypervisor to Microsoft Certified and System Center professionals. The not-for-resale (NFR) license keys – which may be used for non-production purposes such as course development, proofs-of-concept, training, lab testing and demonstrations – are intended to support virtualization consultants, instructors and architects involved in managing and optimizing storage within private clouds, virtual server and VDI deployments. DataCore is making it easy for these certified professionals to benefit directly and learn for themselves the power of this innovative technology – the storage hypervisor – and its ability to redefine storage management and efficiency.
To receive a free license key, please sign up at: http://pages.datacore.com/nfr-for-experts.html.
Friday, 23 December 2011
Storage Strategy Magazine 2012 Predictions: DataCore Software
http://www.storage-strategy.com/2011/12/15/2012-prediction-datacore-software
...The time has come for the storage crisis to be resolved, the last bastion of hardware dependency removed, and the final piece of the virtualization puzzle to fall into place.
It’s also time for all to become familiar with a component fast gaining traction and proving itself in the field: the storage hypervisor. This technology is unique in its ability to provide an architecture that manages, optimizes and spans all the different price points and performance levels of storage. The storage hypervisor allows hardware interchangeability. It provides important advanced features such as automated tiering that relocates disk blocks among pools of different storage devices - even in the cloud - keeping demanding workloads operating at peak speeds. In this way, applications requiring speed and business-critical data protection can get what they need, while less critical, infrequently accessed data blocks gravitate towards lower-cost disks or are transparently pushed to the cloud for “pay as you go” storage.
ESG’s Peters states in the Market Report: “The concept of a storage hypervisor is not just semantics. It is not just another way to market something that already exists or to ride the wave of a currently trendy IT term.” He then goes on to make his main point on the next step forward: “Organizations have now experienced a good taste of the benefits of server virtualization with its hypervisor-based architecture and, in many cases, the results have been truly impressive: dramatic savings in both CAPEX and OPEX, vastly improved flexibility and mobility, faster provisioning of resources and ultimately of services delivered to the business, and advances in data protection.
“The storage hypervisor is a natural next step and it can provide a similar leap forward.”
2012: Virtual Storage Gets Some Love
Mark Peters, senior analyst with Enterprise Strategy Group (ESG), recently authored a Market Report which I believe addresses the industry focus for the year ahead. In The Relevance and Value of a “Storage Hypervisor,” Peters notes: “…buying and deploying servers is a pretty easy process, while buying and deploying storage is not. It’s a mismatch of virtual capabilities on the server side and primarily physical capabilities on the storage side. Storage can be a ball and chain keeping IT shops in the 20th century instead of accommodating the 21st century.”
Tuesday, 13 December 2011
Storage Hypervisors Boost Performance without Breaking the Budget
One of the magical things about virtualization is that it’s really a kind of invisibility cloak. Each virtualization layer hides the details of those beneath it. The result is much more efficient access to lower level resources. Server virtualization has demonstrated this powerfully. Applications don’t need to know about CPU, memory, and other server details to enjoy access to the resources they need.
Unfortunately, this invisibility tends to get a bit patchy when you move down into the storage infrastructure underneath all those virtualized servers, especially when considering performance management. In theory, storage virtualization ought to be able to hide the details of media, protocols, and paths involved in managing the performance of a virtualized storage infrastructure. In reality the machinery still tends to clank away in plain sight.
The problem is not storage virtualization per se, which can boost storage performance in a number of ways. The problem is a balkanized storage infrastructure, where virtualization is supplied by hardware controllers associated with each “chunk” of storage (e.g., array). This means that the top storage virtualization layer is human: the hard-pressed IT personnel who have to make it all work together.
Many IT departments accept the devil’s bargain of vendor lock-in to try to avoid this. But even if you commit your storage fortunes to a single vendor, the pace of innovation guarantees the presence of end-of-life devices that don’t support the latest performance management features. And the expense of this approach puts it beyond the reach of most companies, who can’t afford a forklift upgrade to a single-vendor storage infrastructure and have to deal with the real-world mix of storage devices that result from keeping up with innovation and competitive pressures.
That’s why many companies are turning to storage hypervisors, which, like server hypervisors, are not tied to a particular vendor’s hardware. A storage hypervisor like DataCore’s SANsymphony-V throws the invisibility cloak over the details of all of your storage assets, from the latest high-performance SAN to SATA disks orphaned by the consolidation of a virtualization initiative. Instead of trying to match a bunch of disparate storage devices to the needs of different applications, you can combine devices with similar performance into easily provisioned and managed virtual storage pools that hide all the unnecessary details. And, since you’re not tied to a single vendor, you can look for the best deals in storage, and keep using old storage longer.
SANsymphony-V helps you boost storage performance in three ways: through caching, tiering, and path management.
Caching. SANsymphony-V can use up to a terabyte of RAM on the server that hosts it as a cache for all the storage assets it virtualizes. Advanced write-coalescing and read pre-fetching algorithms deliver significantly faster IO response: up to 10 times faster than the average 200-300 microsecond time delivered by typical high-end cached storage arrays, and orders of magnitude faster than the 6000-8000 microseconds it takes to read and write to the physical disks themselves.
SANsymphony-V also uses its cache to compensate for the widely different traffic levels and peak loads found in virtualized server environments. It smooths out traffic surges and better balances workloads so that applications and users can work more efficiently. In general, you’ll see a 2X or better improvement in the performance of the underlying storage from a storage hypervisor such as DataCore SANsymphony-V.
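To make the write-coalescing idea concrete, here is a minimal sketch (illustrative only, not DataCore's actual implementation): writes land in a RAM buffer at memory speed, and adjacent dirty blocks are later merged into runs so each run can be flushed to disk as one larger sequential I/O.

```python
# Simplified sketch of write coalescing: writes are absorbed in a RAM
# buffer, then adjacent dirty blocks are merged and flushed together.
# Illustrative only; this is not DataCore's implementation.

class CoalescingWriteCache:
    def __init__(self, backing_store):
        self.backing = backing_store        # dict: block number -> data
        self.dirty = {}                     # buffered writes awaiting flush

    def write(self, block, data):
        # Absorb the write in RAM; the caller sees cache-speed latency.
        self.dirty[block] = data

    def flush(self):
        # Group dirty blocks into runs of consecutive block numbers so
        # each run could go to disk as a single large sequential write.
        runs = []
        for block in sorted(self.dirty):
            if runs and block == runs[-1][-1] + 1:
                runs[-1].append(block)
            else:
                runs.append([block])
        for run in runs:
            for block in run:               # one disk I/O per run in practice
                self.backing[block] = self.dirty[block]
        flushed_ios = len(runs)
        self.dirty.clear()
        return flushed_ios

disk = {}
cache = CoalescingWriteCache(disk)
for b in (7, 3, 4, 5, 20):
    cache.write(b, f"data-{b}")
print(cache.flush())  # 3 runs: [3, 4, 5], [7], [20] -> 3 disk I/Os for 5 writes
```

Five scattered writes collapse into three back-end I/Os; the same principle, applied at scale with read pre-fetching, is what lets a large RAM cache front slower physical disks.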
Tiering. You can also improve performance with data tiering. SANsymphony-V discovers all your storage assets, manages them as a common pool of storage and continually monitors their performance. The auto-tiering technology migrates the most frequently-used data—which generally needs higher performance—onto the fastest devices. Likewise, less frequently-used data typically gets demoted to higher capacity but lower-performance devices.
Auto-tiering uses tiering profiles that dictate both the initial allocation and subsequent migration dynamics. A user can go with the standard set of default profiles or can create custom profiles for specific application access patterns. However you do it, you get the performance and capacity utilization benefits of tiering from all your devices, regardless of manufacturer.
As new generations of devices appear, such as Solid State Disks (SSDs), flash memory and very large capacity disks, these faster or larger devices can simply be added to the available pool of storage and assigned a tier. In addition, as devices age, they can be reset to a lower tier, often extending their useful life. Many users are interested in deploying SSD technology to gain performance; however, the high cost and limited write endurance of SSDs create a clear need for auto-tiering and caching architectures to maximize their efficiency. DataCore's storage hypervisor can absorb a good deal of the write traffic, thereby extending the useful life of SSDs, and with auto-tiering only the data that needs the benefits of the high-speed SSD tier is directed there. With the storage hypervisor in place, the system self-tunes and optimizes the use of all the storage devices. Customers have reported upwards of 20% savings from device-independent tiering alone, and in combination with the other benefits of a storage hypervisor, savings of 60% or more are also being achieved.
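The promotion/demotion logic described above can be sketched in a few lines. This is a toy model under stated assumptions: the tier names and the hot-block threshold are invented for illustration and are not DataCore's actual profiles or algorithm.

```python
# Toy sketch of frequency-based auto-tiering. Tier names and the
# hot-block threshold are hypothetical, not DataCore's.

from collections import Counter

TIERS = ["ssd", "sas", "sata"]   # index 0 = fastest tier

class AutoTieringPool:
    def __init__(self):
        self.placement = {}      # block -> tier index
        self.heat = Counter()    # access count per block since last rebalance

    def read(self, block):
        # Count the access; new blocks start on the capacity tier.
        self.heat[block] += 1
        return self.placement.setdefault(block, len(TIERS) - 1)

    def rebalance(self, hot_threshold=3):
        # Promote frequently accessed blocks to the fastest tier and
        # demote cold blocks toward capacity-oriented tiers.
        for block in self.placement:
            if self.heat[block] >= hot_threshold:
                self.placement[block] = 0
            else:
                self.placement[block] = len(TIERS) - 1
        self.heat.clear()        # start a fresh measurement window

pool = AutoTieringPool()
for _ in range(5):
    pool.read(42)                # hot block, accessed repeatedly
pool.read(7)                     # cold block, touched once
pool.rebalance()
print(TIERS[pool.placement[42]], TIERS[pool.placement[7]])  # ssd sata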
Path management. Finally, SANsymphony-V also greatly reduces the complexity of path management. The software auto-discovers the connections between storage devices and the server(s) it’s running on, and then monitors queue depth to detect congestion and route I/O in a balanced way across all possible routes to the storage in a given virtual pool. There’s a lot of reporting and monitoring data available, but for the most part, once set up, you can just let it run itself and get a level of performance across disparate devices from different vendors that would otherwise take a lot of time and knowledge to get right.
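The queue-depth balancing just described can be illustrated with a minimal sketch (path names are hypothetical, and this is a simplification of what any real multipathing layer does): each I/O is dispatched down whichever path currently has the shallowest queue.

```python
# Illustrative sketch of queue-depth-based path selection: each I/O is
# routed down the path with the fewest outstanding requests.
# Path names are hypothetical; this is not DataCore's algorithm.

class PathBalancer:
    def __init__(self, paths):
        self.queue_depth = {p: 0 for p in paths}   # outstanding I/Os per path

    def dispatch(self):
        # Pick the least-congested path for the next I/O.
        path = min(self.queue_depth, key=self.queue_depth.get)
        self.queue_depth[path] += 1
        return path

    def complete(self, path):
        # An I/O finished; the path's queue drains by one.
        self.queue_depth[path] -= 1

balancer = PathBalancer(["fc0", "fc1"])            # two hypothetical FC paths
issued = [balancer.dispatch() for _ in range(4)]
print(issued)  # ['fc0', 'fc1', 'fc0', 'fc1'] - load spreads evenly
```

Because congestion is measured continuously rather than configured, a slow or failing path naturally accumulates queue depth and receives less new work, which is why the software can largely run itself once set up.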
If you would like an in-depth look at how a storage hypervisor can boost storage performance, be sure to check out the third of Jon Toigo’s Storage Virtualization for Rock Stars white papers: Storage in the Fast Lane—Achieving “Off-the-Charts” Performance Management.
Next time I’ll look at how a storage hypervisor boosts storage data security management.
Wednesday, 7 December 2011
New Videos on DataCore
Check out the full library at: http://www.youtube.com/user/DataCoreVideos
Dennis Publishing – Why I Use DataCore
Video: http://snseurope.info/video/749/Dennis-Publishing---Why-I-Use-DataCore
DataCore Storage Hypervisor
Video: http://snseurope.info/video/756/Datacore-Storage-Hypervisor
Tuesday, 6 December 2011
New Market Report on The Relevance and Value of a “Storage Hypervisor”: Virtualized Management for More Than Just Servers
http://www.enterprisestrategygroup.com/2011/10/the-relevance-and-value-of-a-%e2%80%9cstorage-hypervisor%e2%80%9d-virtualized-management-for-more-than-just-servers/
A few vendors have already begun thinking in these terms; there are the industry giants - IBM with SAN Volume Controller, EMC with VPLEX, HDS with its Universal Storage Platform-V - as well as some smaller software-only players such as DataCore. As the capabilities of these various approaches not only expand but become better known and understood, the end-user opportunity for improving efficiency and simplifying operations is simply monumental...