Friday, 23 January 2015

Software-defined Storage in action at Transdev

Please watch our three-minute video below, in which Transdev's IT Infrastructure Manager, Mark Naidoo, speaks about his decision to deploy DataCore Software and the improvements DataCore has made to Transdev's storage infrastructure.

Since deploying DataCore three years ago, Transdev has had zero downtime. Transdev not only increased availability but also satisfied its DR requirements. Plenty more benefits are relayed below :)





To read Transdev's case study, click here.


SUMMARY of what Mark says...

High-performance Storage 
Transdev has seen a sixfold improvement in storage performance with DataCore. DataCore supports the new, high-speed, solid-state storage that was required for a mission-critical Microsoft SQL Server database serving Microsoft Dynamics AX.

Five Nines (99.999%) Reliability
Since deploying DataCore three years ago, Transdev has had zero downtime. Transdev not only increased availability but also satisfied its DR requirements.

Reduced Storage Costs
The ability to choose best-of-breed storage and combine it under one platform has decreased costs significantly.

Resiliency and Flexibility 
DataCore has enabled Transdev to combine its existing storage with new, flash-based storage very easily. Transdev's IT infrastructure consists primarily of HP hardware, a virtualised platform based on VMware, and Citrix XenApp for application deployment.

Monday, 19 January 2015

How Software-defined Storage, Converged Virtual SANs and Hybrid Clouds Will Disrupt the Storage Industry

How Software-defined Storage, Virtual SANs and Hybrid Clouds Will Disrupt the Storage Industry in 2015

Article by George Teixeira, President and CEO, DataCore Software

In 2015, IT decision makers will need to wrestle with a number of difficult questions about data storage: What solutions can be put in place to reduce disruptive changes and make IT more adaptable to dynamic business needs? How can they easily add flexibility and scalability to existing IT infrastructure in order to meet growing storage demands? What can be done to better safeguard business information wherever it resides and provide continuous availability to the data for critical applications? How can they meet the responsiveness levels and performance demands needed to sustain critical business applications and next-generation workloads? What can be done to reconcile the use of cloud storage, flash and converged virtual SANs with existing investments? And how can this all be accomplished cost-effectively, without disrupting current IT systems or taking on greater business risk?

Prediction #1: Software-defined storage will break through and finally go mainstream in 2015.
Software-defined everything and software-defined storage (SDS) will break through in 2015, driven by the productivity benefits they can provide to customers and the need to prepare IT infrastructures to meet next-generation needs. Software-defined storage will continue to disrupt traditional storage hardware system vendors and, especially, their outdated yearly ‘rip and replace' model of doing storage business.
Interestingly, while SDS will greatly disrupt the industry and shake up traditional storage vendors, it actually benefits users significantly by reducing disruption, costs, and risks for those implementing and managing IT environments, as these SDS customers will attest.
Software-defined storage promises to commoditize underlying storage devices and raise the scope and span of storage features and services to a higher, more productive level, rather than being locked down to specific devices. True software-defined storage platforms will allow these devices to ‘do more with less' by increasing utilization and working cross-platform and infrastructure-wide. Software-defined storage can reconcile the use of flash and converged virtual SANs with existing investments, and provide the essential foundation for hybrid cloud deployments. It is the solution for a range of use cases, managing data placement according to cost, compliance, availability, and performance requirements. It can be deployed on different hardware platforms and extend to cloud architectures as well. Bottom line, the compelling economic benefits, better productivity, and the need for greater agility to meet future requirements will drive SDS to become mainstream in 2015.
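To make the idea of policy-driven data placement a little more concrete, here is a minimal sketch (in Python, with invented class names, tiers and costs, not DataCore's API) of how an SDS layer might pick a backing device for a volume from a heterogeneous pool, based on a minimum performance tier, required capacity and cost:

```python
# Illustrative sketch only, not any vendor's implementation: choose a backing
# device for a new volume according to policy rather than device-specific features.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tier: str          # e.g. "flash", "sas", "sata", "cloud" (hypothetical labels)
    free_gb: int
    cost_per_gb: float

@dataclass
class Policy:
    min_tier: str      # lowest acceptable tier for this workload
    capacity_gb: int

TIER_RANK = {"cloud": 0, "sata": 1, "sas": 2, "flash": 3}

def place_volume(policy: Policy, pool: list[Device]) -> Device:
    """Pick the cheapest device in the pool that satisfies the tier and capacity policy."""
    candidates = [d for d in pool
                  if TIER_RANK[d.tier] >= TIER_RANK[policy.min_tier]
                  and d.free_gb >= policy.capacity_gb]
    if not candidates:
        raise RuntimeError("no device in the pool satisfies the policy")
    return min(candidates, key=lambda d: d.cost_per_gb)

pool = [Device("array-1", "sas", 4000, 0.30),
        Device("ssd-shelf", "flash", 800, 1.20),
        Device("archive", "sata", 20000, 0.05)]

print(place_volume(Policy(min_tier="sas", capacity_gb=500), pool).name)  # -> array-1
```

The point of the sketch is simply that placement decisions are expressed against policy attributes of the pool, not against any one array's proprietary feature set.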

Other storage-related predictions include:

#2: Servers in 2015 will increasingly displace traditional storage arrays.
Servers will combine with software-defined storage and continue to build momentum for a new class of ‘storage servers' and hyper-converged virtual SANs. The latest generation of servers is powerful and will continue to support even larger amounts of storage. Software-defined storage solutions such as DataCore's Virtual SAN software are designed to eliminate the hassle and complexity of traditional storage networks while still providing a growth path. Virtual SAN software is maturing rapidly, and it will further drive the transformation of servers into powerful storage systems that go beyond today's designs into full-blown enterprise-class virtual SANs. The Dell Enterprise Blog post, "Dell PowerEdge Servers Make Great Software-Defined Storage Solutions," and the "Software-Defined Virtual SAN Solutions Powered by FUJITSU PRIMERGY Servers" are examples of this trend.
As a footnote, I should point out that the world is not all virtual and cloud. Major refresh cycles are underway, and on-premise physical servers will continue to do well in 2015. Both trends will impact SDS and the use of converged virtual SANs. How much is still unclear, but both will put stress on migrations and prompt a rethinking of use cases and of how best to deploy servers. More than 300 million PCs are sold every year. Despite the hype around the cloud, there will always be a need for a dependable local server that does not rely on an external Internet connection, provides total control, enables a greater degree of customization, and protects sensitive data. Windows Server 2003 goes end-of-life in July 2015, and yet, according to Microsoft, organizations worldwide are still running more than 24 million physical and virtual instances of Windows Server 2003. Other studies indicate that as much as 45 percent of on-premise servers are still running WS2003 and that 41 percent of servers are seven years old or older. Software-defined storage can help work across generations and ease the transitions and migrations to refreshed server systems or to converged virtual SANs.

#3: Disk and flash must play nicely together in 2015; software stacks must span both worlds.
This past year saw the continuation of the "flash everywhere" trend, with flash devices rapidly moving along the growth path from being utilized in servers to being used across the board. This brought a lot of new storage companies into the market initially, but has now also brought on a high degree of consolidation, as evidenced by SanDisk's acquisition of Fusion-io and the number of start-ups that have disappeared. As flash finds its place as a key technology and is further commoditized, the market can't sustain the number of companies that were introduced in the early stages of the market, so further consolidation will happen.
The foreseeable future - despite the hype - is not all flash. Cost matters, and SSDs will likely be 10 times more expensive than the least expensive SATA disks through the end of the decade.
The coming year will show that flash can be used more practically, and that its use needs to be reconciled with existing disk technologies. Flash is excellent for specialized ‘hot data' workloads that require high-speed reads, such as databases. However, it is not a cost-effective solution for all workloads and still makes up a very small fraction of the installed storage base overall. On the other side of the spectrum are low-cost SATA disk drives that continue to advance and use new technologies like helium to support huge capacities (up to 10 TB per drive), but they are slow. Write-heavy transaction workloads also need to be addressed differently. (See New Breakthrough Random Write Acceleration and Impact on Disk Drives and Flash technologies). The industry would have us believe that customers will shift 100% to all-flash, but that is not practical given the costs involved and the large installed base of storage that must be addressed.
In 2015, we will need smart software with a feature stack that can optimize cost and performance trade-offs and migrate workloads to the right resource, whether flash or disk. Software-defined storage done right can help unify the new world of flash with the existing and still-evolving world of disks. Both have a future.
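As a rough illustration of the flash-versus-disk optimization described above, the sketch below shows heat-based auto-tiering in its simplest form. The thresholds, names and per-block granularity are hypothetical; products such as SANsymphony-V work at sub-LUN granularity with far more sophisticated heuristics.

```python
# Hypothetical sketch of heat-based auto-tiering: hot blocks are promoted to
# flash, cold blocks are demoted to disk at the end of each monitoring window.
from collections import Counter

access_counts = Counter()   # block id -> accesses in the current window
block_tier = {}             # block id -> "flash" or "disk"

HOT_THRESHOLD = 100         # promote above this many accesses per window (invented)
COLD_THRESHOLD = 5          # demote below this (invented)

def record_access(block_id: int) -> None:
    access_counts[block_id] += 1

def rebalance() -> None:
    """Run at the end of each monitoring window to migrate data between tiers."""
    for block_id, count in access_counts.items():
        current = block_tier.get(block_id, "disk")
        if count >= HOT_THRESHOLD and current != "flash":
            block_tier[block_id] = "flash"     # migrate hot data up to flash
        elif count <= COLD_THRESHOLD and current != "disk":
            block_tier[block_id] = "disk"      # drain cold data down to disk
    access_counts.clear()
```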

#4: Hybrid clouds and cloud-based disaster recovery solutions will thrive in 2015.   
Businesses are continuing to struggle to figure out how best to use the cloud. Increasingly, enterprises are dealing with managing both on-premise storage and off-site cloud storage (hybrid cloud). This will become a much bigger issue in 2015 as customers become smarter about which workloads are practical to place in the cloud and which are not. On-premise storage is usually allocated for active data such as databases and transaction-oriented business systems. The cloud, constrained by internet speeds, continues to be used mainly for backup, archive, and disaster recovery rather than production workloads. Disaster recovery remains one of the more costly and critical projects for IT, which makes the cloud a particularly attractive alternative to in-house deployments.
The ability to integrate off-premise capacity is critical for SDS, according to analyst firms such as Neuralytix, which predicts that in 2015 all software-defined storage solutions will need to manage both on-premise and cloud-based capacity as a unified data storage architecture.
New solutions are emerging, such as the combination of DataCore and Microsoft StorSimple, which allows data from any storage to be seamlessly migrated from on-premise to a cloud such as Microsoft Azure. This will fuel the larger trend, which is for enterprises to run a mix of on-premise and cloud. In addition, while doing disaster recovery from the cloud remains complex, new integration tools and more automated processes are well on the way to making this a more practical solution.

#5: Managing internal investments with a cloud model will become a bigger trend in 2015. 
Enterprises want to emulate the productivity that cloud providers can achieve. However, to do so, they need to move to a Quality of Service (QoS) model providing comprehensive virtual data services. For example, as storage needs continue to grow, enterprises must be able to manage, regulate, and segregate storage resources to match the utilization patterns of different departments. A case in point might be finance, which may need a higher level of performance than departments simply doing word processing. Quality-of-Service settings are needed to ensure that high-priority workloads competing for access to storage can meet their service level agreements (SLAs) with predictable I/O performance. These QoS controls enable IT organizations to efficiently manage their shared storage infrastructure. Storage resources can be logically segregated, monitored, and regulated on a departmental basis.
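To illustrate the idea of regulating storage I/O per department, here is a minimal, hypothetical token-bucket sketch that caps IOPS per group. The class, limits, and department names are invented for illustration and are not DataCore configuration settings.

```python
# Illustrative per-department QoS: a token bucket that limits how many I/Os
# per second each group may issue, so noisy neighbours cannot starve others.
import time

class IopsLimiter:
    def __init__(self, iops_limit: int):
        self.iops_limit = iops_limit
        self.tokens = float(iops_limit)
        self.last_refill = time.monotonic()

    def allow_io(self) -> bool:
        """Return True if an I/O may proceed now, False if it must be queued."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the limit.
        self.tokens = min(self.iops_limit,
                          self.tokens + (now - self.last_refill) * self.iops_limit)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Finance gets a higher ceiling than a general office file-share workload.
limits = {"finance": IopsLimiter(20000), "office": IopsLimiter(2000)}
```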
Software-defined storage must continue to evolve to address these needs. New capabilities such as support for VMware VVOLs and OpenStack will be helpful and will make an impact in 2015. However, more powerful virtual data services need to be created that work without regard to the underlying storage devices, regulate how resources are utilized across the infrastructure, and make the overall process as automated as possible.

#6: Business continuity, multi-site data protection and workload management demands grow in 2015.
Virtual servers, clustered systems and new workloads will further increase the demands on data storage. Therefore, the fundamental capabilities (provisioning, performance and data protection) must be simple and transparent to users, and must be rock solid. Users don't want to think or worry about their data storage; they just want their applications to run, without disruption. Next-generation workloads will demand continuous data availability and protection that can stretch distances, work across multiple sites and multiple platforms, and fail over and fail back without any impact on critical applications. System admins will want an ‘easy' button that delivers powerful but simple continuous data protection, allowing systems to be rolled back in time so they can recover to a point before a ‘bad event' happened. In addition, the ability to scale, regulate and align performance to business workloads will drive the need for more management tools and capabilities such as ‘heat maps' for performance monitoring, adaptive caching, multi-tiered data migration, and flash and disk performance optimizers, powerful capabilities that will set the new criteria for success.
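As an aside, the core idea behind "rolling a system back in time" can be sketched with a simple time-stamped write journal. This toy example is only illustrative and does not reflect how any particular product implements continuous data protection.

```python
# Illustrative continuous data protection: log every write with a timestamp,
# then rebuild the volume image as it looked just before a chosen restore point.
import time

journal = []   # list of (timestamp, block_id, data), appended in arrival order

def log_write(block_id: int, data: bytes) -> None:
    journal.append((time.time(), block_id, data))

def restore_image(restore_point: float) -> dict[int, bytes]:
    """Rebuild the volume as it looked just before 'restore_point' (epoch seconds)."""
    image: dict[int, bytes] = {}
    for ts, block_id, data in journal:
        if ts >= restore_point:
            break              # stop before the 'bad event'
        image[block_id] = data
    return image
```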

Final thoughts on 2015: Convergence or storage islands?
VMware announced Virtual SANs that only work with VMware; Microsoft's Azure StorSimple allows easy migration to the cloud, but only for iSCSI storage, ignoring the installed base of Fibre Channel storage. Each new storage array comes out with new device-specific features that don't work across other platforms, each flash device has its own unique feature stack, and hyper-converged systems may be simple to set up -- but what happens when they need to work with other vendors or with existing investments?
The talk is convergence but, in reality, everyone seems to be selling proprietary, divergent solutions. The result is a sprawl of disjointed software stacks that create "isolated islands of storage." Virtual SANs, converged systems, and flash devices have continued to proliferate, creating more separately managed machines and ultimately resulting in discrete islands of storage in the IT organization. Treating each of these scenarios as a use case under a unifying SDS architecture that can federate these isolated islands helps solve the problem by driving a higher level of management and functional convergence across the enterprise.


Time to transform the disharmony of different solutions into a fluid data storage symphony  
As a result, once-isolated storage systems -- from flash and disks located within servers, to external SAN arrays and public cloud storage -- can now become part of an enterprise-wide accessible virtual pool, classified into tiers according to their unique characteristics. Different brands of storage, standalone converged systems, hypervisor-dependent Virtual SANs, and external storage systems no longer need to exist as ‘islands' -- they can be integrated within an overall storage infrastructure. The system administrator can easily provision capacity, maximize utilization, and set high-level policies to allow the software to dynamically select the most appropriate storage tier and paths to achieve the desired levels of performance and availability.
Software-defined storage is disruptive to traditional storage models, which are static and broken. But for IT decision makers, it offers a way to ‘do more with what they have,' gain more control, and achieve true convergence, greater adaptability and improved productivity in a dynamic world. DataCore developed its SANsymphony-V and Virtual SAN software solutions with this thinking in mind and is now shipping its 10th-generation solution. DataCore was founded on the premise that software-defined storage would be a disruptive force, and its time has come. It is DataCore's primary focus for 2015.

Sunday, 4 January 2015

Dell PowerEdge Servers and DataCore Make Great Software-Defined Storage Solutions

by Steve Houck, Chief Operating Officer of DataCore - Post from Dell4Enterprise Blog

As applications and data become more important, companies are transforming their data centers into private cloud infrastructure. This move will enable companies to provide high levels of availability, reliability and performance to applications, while dramatically reducing costs and simplifying management of the infrastructure.

Servers and, to a large extent, networks have moved to a private cloud model where the underlying infrastructure has been abstracted, pooled and automated.

But sometimes companies take a different approach to storage. Walk into any data center and you’re sure to find different types of storage systems, each bought to satisfy specific requirements and sometimes for different projects. For customers with disparate storage environments, or those lacking a strategically implemented hybrid SAN environment, high-performance storage was often justified for latency-sensitive applications, while lower-cost, high-capacity storage made better sense for less-critical applications.

Unfortunately, sometimes companies find themselves with different storage systems, each in an “island” with different functionality and managed separately. Such diversity can complicate how storage resources are allocated and managed. Storage is the next big step in private cloud infrastructure.

DataCore has created a solution that virtualizes, pools and manages all storage from any vendor, regardless of model. Once-isolated storage systems can now become part of an enterprise-wide capacity pool, classified into tiers according to their unique characteristics. The system administrator can easily provision capacity and set high-level policies to allow the software to dynamically select the most appropriate storage tier and paths to achieve the desired levels of performance and availability.


This solution utilizes the latest-generation Dell PowerEdge servers and DataCore SANsymphony™-V software, transforming isolated storage systems into a centrally managed storage infrastructure, helping you derive the most value from your storage investments, present and future. Using DataCore’s storage virtualization technology, Dell Storage MD Series, SC Series and PS Series arrays can be easily integrated with existing storage.

  • Dramatically improves performance of all applications: High-performance caching algorithms intelligently anticipate reads, evaluate usage patterns and transform random writes into sequential writes (see the sketch after this list). In addition, auto-tiering software dynamically matches workloads to the most appropriate class of storage from the virtual pool.
  • Instantly reduces storage costs by increasing capacity utilization and reducing management complexity: Eliminates wasted storage capacity by pooling all of your storage, regardless of make/manufacturer. Each storage device is used in a way that best matches its capacity and performance capabilities.
  • Prevents storage outages from affecting applications: Synchronous mirroring with “zero touch” failover and failback between any types of storage ensures that applications are not disrupted by storage or site outages. Easily migrate data between unlike systems, during production, with zero impact to applications.
  • Quickly meets the needs of the business:  Centralized management using a common set of commands across disparate systems, together with extensive automation and orchestration, simplifies administration and allows IT to respond to business needs easily and without disrupting other applications.
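For readers curious how the caching behaviour in the first bullet works in principle, here is a deliberately simplified, hypothetical sketch of write coalescing: random writes are absorbed in cache and acknowledged, then destaged in block order so the backend sees a largely sequential stream. Real caching engines add mirrored cache, ordering and consistency guarantees that are omitted here.

```python
# Illustrative write coalescing: absorb random writes in RAM, then flush them
# to the backend in ascending block order to approximate sequential I/O.
pending_writes = {}    # block id -> latest data, held in cache

def cache_write(block_id: int, data: bytes) -> None:
    """Absorb a random write into cache and acknowledge it immediately."""
    pending_writes[block_id] = data

def flush_to_disk(write_block) -> None:
    """Destage cached writes in block order; 'write_block' is the backend write call."""
    for block_id in sorted(pending_writes):
        write_block(block_id, pending_writes[block_id])
    pending_writes.clear()
```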
Bottom line: DataCore’s SANsymphony-V Software-defined Storage platform together with Dell PowerEdge servers combine to empower your storage to new levels of performance, availability, automation and orchestration.