Tuesday, 29 October 2013

DataCore chairman Ziya Aral on Software-defined storage and SSD performance

DataCore Software Chairman Ziya Aral, a founder of the company specializing in storage virtualization software, defines software-defined storage in its simplest terms: a way of decoupling the hardware layer from the software. Aral claims his company has been doing this for years, long before software-defined storage became such a widely used phrase.
In this interview with SearchSolidStateStorage, Aral describes why this approach is a good fit for solid-state drives (SSDs) and how it can help SSD performance.
You have called SSDs a revolutionary, yet complicated, technology. Why is it so important, and why is it so complicated?
Ziya Aral: SSD is important because it is fast. For most of our time in this industry, storage has meant disks. Disks are mechanical and, as a result, they're slow. There's an entire complicated fabric for making them fast -- caching and all kinds of lore that we apply to the devices. All of a sudden, here come SSDs, and they're transparent. A customer can simply stick one in a box -- and all of a sudden, it's faster. You boot Windows. You boot Linux. It happens in seconds. Whoa! This is great. Fast storage is very important.
Storage is the black sheep of the computer family. It's always the one lagging behind, because it's fundamentally mechanical. It's the one that slows down applications. Once upon a time, a third of all applications were storage-bound. Now it's probably 95%. Everything else got faster. We didn't.
The result is that any innovation that speeds up mass storage is wonderful, and SSDs speed up mass storage in a practical, commercially viable way. That's the good news. The bad news is they don't work like disk drives. They are complicated.
So if you want to read from an SSD, that's wonderful, but if you try to write to an SSD, it's not as fast. Worse, SSDs don't like being written to. I'm sorry to all my friends in the industry, but they don't. You know it. I know it. They burn out, and they burn out pretty quickly by disk standards.
So now, here's the complexity. SSDs aren't perfect for everything. They're still too expensive to carry the bulk of your storage. You've still got to think about what you want to do with them in a larger architecture.
Now there are people in the industry -- we try to partner with those people -- people like Fusion-io, who I think are brilliant. They focus SSDs on a specific application, databases in this case, and they moderate SSDs with two technologies that actually make SSDs perform. One is software. They run caching software in front of their SSD. Second is DRAM. DRAM has a great advantage in that it doesn't care how many times you write to it...
...And how big of an impact will software-defined storage have on the SSD market and the performance of solid-state drives?
Aral: Huge. It's huge because it's hard enough to make disk drives work [laughs] and keep the application going, and the whole server virtualization market seems to have gotten stuck on tier-one applications and on virtual desktops.
So along comes this great storage technology. It supplements the conventional disk drive industry in a profound way. Integrated with DRAM, it seems to be able to work, but all of it defeats the schemes that we have been working on for years and years and years.
For example, to get the proper advantage of SSDs, you want the SSD to be local to the application. Most SSDs work that way. Now people are trying to build hybrid arrays and arrays of SSDs. That really sort of loses the advantage.
There's a physics to going over a wire … a lot of the advantage of the SSD disappears in that process. So now we are talking about moving a lot of the software infrastructure into the server and into the network. Well, why [move it] into the network? Because you don't want SSDs written to.
You want to delay writes as long as possible, but long write delays mean that the data has to be in at least two locations, which means that the data has to go over a wire. Two ends of the cycle -- direct reads and asynchronous writes -- are happening at two different locations, and the software is infinitely portable. The software that glues those two phases together has to sit on both sides of the wire.
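To make the split Aral describes concrete, here is a minimal sketch of the read-local / write-asynchronous pattern: reads are served from a local device, while writes are acknowledged locally and shipped to a second location in the background. The class and names are hypothetical illustrations, not DataCore's implementation.

```python
import queue
import threading

class SplitIONode:
    """Toy model of the split Aral describes: reads are served from local
    SSD/DRAM, while writes are buffered and shipped asynchronously to a
    second location on the far side of the wire."""

    def __init__(self, remote):
        self.local = {}               # stands in for local SSD/DRAM
        self.remote = remote          # stands in for the remote copy
        self.pending = queue.Queue()  # write-behind buffer
        threading.Thread(target=self._drain, daemon=True).start()

    def read(self, block_id):
        # Direct read: never crosses the wire if the block is local.
        return self.local.get(block_id, self.remote.get(block_id))

    def write(self, block_id, data):
        # Acknowledge at local speed; defer the remote copy.
        self.local[block_id] = data
        self.pending.put((block_id, data))

    def _drain(self):
        # The "glue" that has to sit on both sides of the wire.
        while True:
            block_id, data = self.pending.get()
            self.remote[block_id] = data  # simulated transfer over the wire

remote = {}
node = SplitIONode(remote)
node.write("blk-1", "data")   # acknowledged at local speed
print(node.read("blk-1"))     # direct read, no wire crossing
```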

The Register on DataCore: "Whoo-whoo. What's that sound? It's the DataCore Express passing '10k' Customer Sites"

That's 10,000 customer sites using DataCore Storage Virtualization Software...

Storage-area-network (SAN) software supplier DataCore has, we're told, notched up 10,000 customer sites. DataCore is based in Florida and ships SANsymphony and allied SAN software products that turn standalone and clustered x86 servers and their storage into SANs.

The latest, ninth, version of SANsymphony has increased the cluster node count from 8 to 16, effectively doubling the SAN size. CEO George Teixeira said the plan is to increase it stepwise to 128. Version nine also has a next-generation replication service with faster initialisation of its engine, supports Microsoft's ODX and VAAI, and provides logging and configuration alerts.

Teixeira says DataCore has been about software-defined storage for a long, long time. EMC's ViPR is no real advance on that front, merely being an API translator, and looking like the latest version of Invista to him. In effect, he asserts, EMC is saying to its customers that they should keep on buying separate storage devices and now there's an API translator sitting on top of them.

He decries the notion that array hardware suppliers such as NetApp can do software-defined storage: "Are you kidding me? SANsymphony is portable and general."

Teixeira told us: "We're seeing a lot more sales and interest in DataCore products after EMC's ViPR launch and we're now in more than 10,000 customer sites." That's after fifteen years of being in business.

Wednesday, 16 October 2013

VMworld Europe 2013: DataCore Previews Latest SANsymphony-V Software-Defined Storage Platform Release Optimized for VMware® Infrastructures and Larger Scale Virtualization Deployments

At VMworld Europe, DataCore's long-term customer SES Engineering, the world's largest satellite company, presents the strategic advantages of a DataCore SANsymphony-V software-defined storage platform
VMworld Sponsor DataCore showcases thousands of real-world customer successes, updates storage and VDI solutions and previews latest enhancements at VMworld Europe 2013, Stand S112

DataCore Software, the leader in software-defined storage architecture and premier provider of storage virtualization software, today provided a sneak preview of its latest enhancements to the SANsymphony™-V storage virtualization software at VMworld 2013 (Oct. 15-17, Fira Barcelona, Stand S112). The company is also featuring its recently released DataCore VDS – Virtual Desktop Server, an affordable and high-speed VDI software solution. Visit Stand S112 to learn more about the new SANsymphony-V release, available on November 4, 2013. It will provide much greater scalability, faster next-generation replication services for disaster recovery, automatic self-healing storage capabilities, expanded platform integration with VMware VAAI space reclamation services and an updated vCenter™ plug-in. In addition, the new release of SANsymphony-V supports infrastructure-wide performance tiering and acceleration services that extend across scale-out grids of flash, SSDs and spinning disks from any vendor to accelerate Tier-1 applications and virtualization workloads.

DataCore and VMware Customers Speak Out on Software-defined Storage in a Virtual World
DataCore customer SES Engineering, the world-leading satellite operator and satellite communications company, will be available at the DataCore stand and will be featured in the Solutions Exchange Theatre on Oct. 15th at 4:40 p.m. SES Engineering will share its experience with DataCore storage virtualization software and explain why they see a software-defined architecture as the only logical solution to gain the flexibility and adaptability needed to address their growth and constantly changing needs.

VMware Users “Stop Fighting Your Storage Hardware”
DataCore Software recently launched a European-wide education and information campaign to spread the word to VMware users on the benefits of storage virtualization and software-defined storage architectures. Information resources including informative webcasts, links to video series on storage virtualization, white papers, “How To” guides and regional case studies are now available on DataCore’s “Stop Fighting Your Storage Hardware” site. The site showcases how real-world customers have achieved a software-defined data center that can maximize business productivity and capitalize on their current hardware investments.

Thousands of data centers around the globe deploy DataCore to cost-effectively meet the high-performance and high-availability storage requirements of their physical and virtual servers. "By making use of the DataCore solution within the virtual infrastructure created by VMware vSphere and VDI, we can not only ensure that we meet these corporate requirements, but also guarantee optimal cost-efficiency as a result of the hardware independence of the solution. This affects both the direct investment and the indirect and long-term cost of refreshes, expansions and added hardware acquisitions. We have thus created the technical basis for our external IT services, and within this framework we are creating the most flexible and varied range of cloud services possible," states Dr. Karl Manfredi, CEO at Brennercom, a leading ICT company.

DataCore SANsymphony-V is Optimized for VMware Virtual Infrastructure Deployments
DataCore SANsymphony-V delivers a simple and scalable high-availability solution to meet vSphere™ shared storage requirements. The hardware-agnostic storage virtualization software abstracts and pools internal and external disks along with flash/SSDs to yield lightning-fast response, non-stop access and optimal use of capacity. Notable features include enhanced support for fast in-memory optimizations using DRAM caching and auto-tiering flash technologies, metro-wide failsafe synchronous mirroring, and enhanced, higher-speed replication services for disaster recovery and remote-site recovery. An updated plug-in for VMware vCenter allows users to non-disruptively provision, share, clone, replicate and expand virtual disks among physical servers and virtual machines. SANsymphony-V is the ideal software-defined storage platform to meet the demanding availability and performance needs of VMware virtual infrastructures.

New SANsymphony-V Features Preview
The SANsymphony-V release available on November 4, 2013 will include: 
  • Twice the scaling with support for up to 16 federated nodes
    – Scales performance and HA; federated nodes can span Metro-wide distances
  • Next generation remote replication services
    – Speeds up performance for disaster recovery
  • New self-healing storage services
    – Reduces business downtime from hardware failures
  • New failsafe non-disruptive data mobility services between storage pools
    – Increases overall resiliency
  • New Hypervisor integration services
    – Offloads hosts to accelerate performance and data mobility operations (VAAI, ODX)
  • New space reclamation services
    – Enables IT organizations to reclaim unused capacity they already own
  • New change-control management and audit trail logs
    – Helps prevent avoidable errors and simplifies troubleshooting 
“The growing momentum for software-defined storage has encouraged VMware customers to consider storage virtualization software solutions that address performance and availability concerns. But while hypervisor-based and device-dependent storage virtualization solve some problems, agnostic, software-only storage virtualization like DataCore SANsymphony-V has been proven to deliver the highest value by working across any physical or virtual server and any collection of spinning disks and flash technology storage devices. We are pleased to showcase our customer-proven value proposition and our SANsymphony-V software-defined storage platform at VMworld Europe 2013,” says Christian Hagen, vice president EMEA at DataCore Software.

DataCore VDS 2.0 – The VDI Solution
Visit the DataCore stand (S112) to find out more about the recently announced DataCore VDS 2.0 software.

"DataCore VDS overcomes the VDI adoption issues and addresses the major market need for affordable desktop virtualization solutions in a climate where smaller budgets and the European financial crisis are impacting all IT decisions. DataCore VDS 2.0 removes many of the most painful implementation obstacles while significantly reducing the cost per virtual desktop instance, states Mr. Hagen.

Special Offer to VMware vExperts, VCPs and VCIs
DataCore Software is offering a free one-year, not-for-resale (NFR) license to VMware vExperts, VMware Certified Professionals and VMware Certified Instructors. The NFR licenses can be used in non-production environments, for evaluations, demonstrations and training purposes. VMware vExperts, VCPs and VCIs can access and download the software from the DataCore website at http://pages.datacore.com/Free_NFR_Software.html

Tuesday, 8 October 2013

Software-defined storage and choices; Automated tiering is the hottest storage technology in 2013 – TheInfoPro

It was interesting to see the recent results from TheInfoPro, a service of 451 Research LLC. Obviously, it was great to read that "software-defined storage [is] transforming provisioning and capacity choices" -- this is why I wrote 36 Reasons To Adopt DataCore Storage Virtualization and Software-defined Storage Architectures. What struck me most, however, was how automated tiering has become the hot technology of 2013.

I believe this is also tied to the growing use of flash/SSD technologies. Many vendors offer what I consider a simple two-level version of auto-tiering that works between flash and disk drives, but it is almost always restricted to storage they control in their proprietary arrays.

At DataCore, we see auto-tiering as a key architectural element of software-defined storage, so it is not limited to running within a single box of disks or on a vendor-specific storage array system. Unlike the current offerings out there (e.g., HP 3PAR, Dell Compellent), DataCore takes auto-tiering to the infrastructure-wide level: our software spans up to 15 tiers of storage, from flash to spinning disk drives to cloud storage, and works across a wide mix of otherwise incompatible platforms -- basically any model or vendor of storage device.

Yes, we can auto-tier, but we do it across all the storage assets. For example, it can run over a Fusion-io card on the app server as well as over HP 3PAR, Dell Compellent, SATA capacity disk systems and even cloud storage. Thousands of customers are using it, so it is proven technology. A minimal sketch of the idea follows.
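To illustrate infrastructure-wide auto-tiering, here is a rough sketch of how an access-frequency policy might map blocks onto a heterogeneous set of tiers. The tier names and thresholds are invented for illustration; this is not DataCore's actual algorithm.

```python
# Hypothetical tiers, fastest first; deployments may span up to 15.
TIERS = ["fusion-io-flash", "3par-10k-sas", "sata-capacity", "cloud-archive"]

def choose_tier(reads_per_hour, thresholds=(1000, 100, 10)):
    """Map a block's access temperature to a tier: the hotter the
    block, the faster (and costlier) the device it should live on."""
    for tier, threshold in zip(TIERS, thresholds):
        if reads_per_hour >= threshold:
            return tier
    return TIERS[-1]  # coldest blocks fall through to cloud storage

print(choose_tier(5000))  # -> 'fusion-io-flash'
print(choose_tier(3))     # -> 'cloud-archive'
```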

Check out the DataCore Auto-tiering page for more information.

Anyway, here is a quote and a summary of the latest report that got me motivated to do this blog post:

"There are two major forces working on storage today - solid-state transforming storage architectures in datacenters, and software-defined storage transforming provisioning and capacity choices," said Marco Coulter, VP, TheInfoPro. "As enterprises move from solution designers to service brokers, the conversations with business partners are evolving from bits and bytes to services and APIs."

TheInfoPro released their latest Storage Study, revealing that enterprise storage capacity is more than doubling every two years, exceeding the rate of Moore's Law.

Consequently, automated tiering is the hottest storage technology in 2013, as it helps keep budgets under control by enabling the use of lower-cost capacity. As enterprises struggle to define an external cloud strategy, the on-premises cloud model is gaining favor.

Conducted during the first half of 2013, the study identifies the storage priorities of leading organizations to provide business intelligence about technological roadmaps, budgets and vendor performance. This semiannual study is based on live interviews with IT professionals and primary decision-makers at large and midsize enterprises in North America and Europe.

Highlights from the Storage Study include:
  • Enterprise storage capacity more than doubled in the past two years.
  • Solid-state or flash is mainly a hybrid array choice for enterprises, with 37% in use, compared with only 6% for all-flash arrays.
  • Decoupling of the storage hardware from the software controller presents a real market opportunity for software vendors looking to capitalize on enterprise interest in software-defined storage; 31% of respondents viewed the coupling of storage controller hardware and software as very important or extremely important.
  • Architecting for performance is often reactive, as 48% of large and midsize enterprises have no specific IO/s targets for applications.
  • Internal cloud storage is the second most likely technology to be added to 2013 storage budgets as enterprises remain cautious about external cloud storage, which they accept as useful for email but not for general capacity. The increased demand for internal cloud storage solutions helps storage vendors as they seek to compete with Amazon S3.
  • Object storage (at the heart of many service-provider cloud storage offerings) faces an education barrier in enterprises, since most still see it as a compliance solution.
  • Enterprises are staying with FC for their core storage networks, with FCoE seen as an 'edge' solution and IB as a niche for select HPC deployments (12% in use).

Thursday, 19 September 2013

The quest for software-defined storage has branched into three distinct approaches to storage virtualization...

From Enterprise Storage Forum - The Virtual SAN Buying Guide: http://www.enterprisestorageforum.com/san-nas-storage/virtual-san-buying-guide.html

“The quest for software-defined storage has branched into three distinct approaches to storage virtualization: hypervisor-based, device-dependent, and agnostic, software-only,” states Steve Houck, COO of DataCore Software. “vSAN is an example of hypervisor-based, EMC ViPR and IBM SVC are device-dependent, and DataCore SANsymphony–V is an example of software-only storage virtualization.”
DataCore has been in the virtual storage space for a long time. Its virtual SAN platform is called SANsymphony-V, which is hardware-agnostic storage virtualization software that abstracts and pools internal and external disks along with flash/SSDs. The company said thousands of data centers around the globe deploy DataCore.

Features include in-memory DRAM caching, auto-tiering, synchronous mirroring, asynchronous replication, thin-provisioning, snapshots, and CDP. Pricing for an HA (high-availability) cluster of 2 SANsymphony-V nodes is under $10,000, including 24x7 annual support.

“The SANsymphony-V platform is designed for mission-critical, latency-sensitive applications in companies large and small, where downtime is not tolerated,” said Steve Houck, COO, DataCore Software.

He believes that VMware vSAN takes advantage of inexpensive internal disk drives, which draws more customers, especially SAN-averse ones, to virtualize their apps. However, he added that it requires the use of more expensive flash memory and a minimum of three hosts, and doesn’t work with physical machines that may also be in the mix, or with other hypervisors such as Microsoft Hyper-V.

SANsymphony-V, said Houck, is transparent to applications, so no code changes are necessary. It installs on a virtual machine in two or more nodes to create a tiered storage pool. The software can also run on dedicated x86 machines to offload all storage-intensive tasks from the application hosts.

Wednesday, 18 September 2013

Software Update Notice: VMware vSphere Plug-in v1.3 for SANsymphony-V released

The following software update is now generally available for download from the DataCore Software Customer Support website:

Software: VMware vSphere Plug-in
Version: 1.3 (Replaces version 1.2)
Enhancements: Several important bug fixes and support for Internet Explorer 10.0

To download the updated plug-in from the DataCore Software Customer Support website:
Go to Software Downloads and select the following options from the pull-down menus when prompted:
Select a Product Line to View Related Software Downloads: SANsymphony-V
Select a Version: 9.0
Select Software: Plug-in: VMware vSphere Plug-in for SANsymphony-V

Please download the Release Notes for VMware vSphere 1.3 Plug-in from the same page for more details on the Fixes and Enhancements included in this update.

You may also download the updated plug-in from www.datacore.com
Select SOFTWARE > Closer Technical Look > DOWNLOADS & TEST DRIVE

http://www.datacore.com/Software/Closer-Look/software-downloads.aspx
Scroll down to VMware vSphere Management Plug-in for SANsymphony-V

The direct link to the download request form is:
http://pages.datacore.com/VMware-vCenter-Management-Plugin.html

Tuesday, 17 September 2013

36 Reasons To Adopt DataCore Storage Virtualization and Software-defined Storage Architectures

Learn the Reasons Why Thousands of Customers Have Chosen DataCore Software


“When asked for 10 reasons, I just couldn’t stop…”
–George Teixeira, CEO & President, DataCore Software

1. No Hardware Vendor Lock-In Enables Greater Buying Power
With DataCore, you gain the freedom to buy from any hardware vendor, and aren’t forced to pick and stick with one. This is a major advantage in negotiating your best deal and “right sizing” your purchase to buy what you need instead of what just one vendor has to offer. It’s easy to add, replace or migrate across storage platforms with a minimum of IT pain and best of all you can avoid disrupting users and business applications. Purchasing flexibility is a major advantage of a software-defined infrastructure.

2. Future-Proof Your Storage with Software-defined Flexibility; Be Ready for Whatever is Next
A flexible DataCore software-defined storage infrastructure also lets you bring in new storage and new storage technologies non-disruptively, so you are ready for whatever comes next. Whether it is incorporating new flash storage or cloud storage into your infrastructure or gaining the competitive advantage that the next new storage innovation brings, your infrastructure is ready to bring it in and put it to work. Hardware vendors, on the other hand, have no interest in letting you bring in new technology to work with the storage you’ve already purchased from them. Their reason for being is to sell you, year after year, on ripping and replacing your last purchase from them with the newer model.

3. Get the Most from What You Already Own Before Buying More
Virtualization = Highly Efficient Utilization. Flexible and smart, DataCore virtualization software lets you optimize the utilization of your storage resources, extending the time to hardware refresh. This is another huge advantage over a hardware-defined infrastructure that forces you to overprovision, oversize and buy more hardware to meet unknown demands before you have fully utilized what you already have. When you do buy new storage hardware, the DataCore advantage is that there is no waste—you can fully utilize what you buy, so you buy only what you need.

Read more: Reasons and Benefits of Storage Virtualization and Software-defined Storage Architectures.

Monday, 2 September 2013

A Virtual Discovery Delivers Ongoing Savings: Australian municipality grows and saves with DataCore Storage Virtualization Software

"The real beauty of DataCore is that it fulfills the full range of storage requirements - such as management, high-availability and disaster recovery - with hardware independent software that runs on any standard Intel or AMD based system."
- Anand Karan, Managing Director, Lincom Solutions
 

To learn more, please read the full case study on Kingston City Council: http://www.datacore.com/Libraries/Case_Study_PDFs/KingstonCouncil.sflb.ashx


DataCore Partner Lincom Designed and Implemented the Kingston City Council Solution:

Lincom Solutions holds DataCore implementation/engineering certifications and has worked with DataCore for nearly 7 years, designing and implementing DataCore solutions for organisations ranging from not-for-profits to global airlines.
 
One example of such an implementation is Kingston City Council.
For more information on Lincom, please see: http://www.lincom.net.au/25-DataCore.htm

Friday, 30 August 2013

Vancouver School Board and Tiering; DataCore Automatically Triages Storage For Performance

The Vancouver School Board (VSB) in Canada has been using storage virtualization for over 10 years.

...As the start of a new school season looms, the Vancouver School Board (VSB) is looking to automated storage tiering technology to help its systems cope with spikes in workload and storage demand.

Serving a staff of 8,000 and students numbering more than 50,000, the VSB network infrastructure is subjected to storage and workload demands that can rapidly peak at any given time during the school season. The IT department supports the email accounts of employees, human resources applications, financial systems, student records, the board’s Web site, as well as applications needed for various subjects and school projects.

“Getting optimum use of our server network is critical in providing optimum system performance and avoiding system slowdowns or outages,” said Peter Powell, infrastructure manager at VSB.

Rapidly moving data workload from one server that may be approaching full capacity to an unused server in order to prevent an overload is one of the crucial tasks of the IT team, he said.

“The biggest single request we get is IOs for storage, getting data on and off disks,” said Powell. “We normally need one person to spend the whole day monitoring server activity and making sure workload is assigned to the right server. That’s one person taken away from other duties that the team has to do.”

Recently, however, the VSB IT team has been testing the new auto tiering feature that was introduced by DataCore Software in its SANsymphony-V storage virtualization software.

The VSB has been working with DataCore for more than 12 years on other projects. The board based its storage strategy on SANsymphony-V back in 2002, when the IT department found that it was getting harder to keep up with the storage demands of each application. SANsymphony-V’s virtualization technology allowed the IT team to move workload from servers that were running out of disk space to servers that had unused capacity available.

“In recent test runs we found that the system could be configured to monitor set thresholds and automatically identify and move workloads when ascribed limits are reached,” Powell said. “There’s no longer any need for a person to monitor the system all the time.”

Automatic Triage for Storage: Optimize Performance and Lower Costs
George Teixeira, CEO of DataCore, separately stated that the auto-tiering feature acts “like doing triage for storage.”

“SANsymphony-V can help organizations optimize their cost of storage for performance trade-offs,” he said. “Rather than purchase more disks to keep up with performance demand, auto-tiering helps IT identify available performance and instantly moves the workload there. DataCore prioritizes the use of your storage to maximize performance within your available levels of storage."

In terms of capacity utilization and cost, DataCore's ability to auto-provision and pool capacity from different vendors is key. He said some surveys indicate that most firms typically use only 30 per cent of their existing storage, but DataCore can help businesses use as much as 90 per cent of their existing storage.

Post based on the recent article in ITworld: Vancouver school board eyes storage auto-tiering

Wednesday, 28 August 2013

Financial services provider EOS benefits from a storage virtualization environment

EOS Group turned to Fujitsu and DataCore to modernize their storage infrastructure.
Also please see: the Fujitsu case study on EOS Group, posted in German.

The global EOS Group, one of Europe's leading providers of financial and authorization services for the banking and insurance sectors, employs about 5,000 people and has experienced rapid growth over the last few years. At its headquarters in Hamburg, meeting increasing business demands posed a number of IT challenges: processing and verifying sensitive customer data, further automating the acquisition of new clients, tracking and maintaining accounting information, alerting clients to important follow-ups and reminders, processing payables and collecting receivables, and buying and selling portfolio items -- all top-priority jobs for the headquarters IT team.

The Challenge:
To get control of its ever-growing amounts of data, EOS decided it needed a flexible, extensible and easy-to-manage solution to improve its storage and storage area network (SAN). It also needed very high availability, an aspect that would play a critical role in the decision. EOS was, of course, also looking to save money on operational costs through a simultaneous reduction in energy and maintenance costs.

EOS needed a massive expansion of its data centers to keep up with growth and knew it had to modernize its systems to achieve greater productivity. A completely new solution was deemed a necessity.

The Solution:
EOS opted for the storage virtualization software solution from DataCore to virtualize, manage and work across its Fujitsu ETERNUS DX80 storage systems and PRIMERGY servers within the SAN.

The result: The EOS Group benefited from increased performance, the highest levels of data protection and reduced maintenance costs.



ETERNUS DX 80

Eight Fujitsu ETERNUS DX80 storage systems, each with 36 terabytes of net disk space, and four Fujitsu PRIMERGY RX600 servers allowed the EOS Group to scale up well beyond its original 30 terabytes of capacity. After the expansion, EOS now manages 288 terabytes of net capacity under DataCore storage virtualization control -- and any additional data growth will no longer be a problem, since the architecture can easily scale. Thanks to the DataCore virtualization solution, all storage can now be managed in a common way and scaled independently of the underlying physical hardware.

What else did EOS want? Greater business continuity, resiliency against major failures and, of course, disaster recovery. Fujitsu and DataCore therefore combined to achieve an even higher level of protection and business continuity in the face of disaster events. The system was set up with data mirrored to a remote site located across town, fully reflecting and kept in sync with the main site. The setup takes advantage of DataCore's ability to stretch mirrors at high speed: the software automatically updates and mirrors the data to create identical virtual disks that reside on different drives and can be arbitrarily far apart. With DataCore, EOS has achieved its goal of greater business continuity and disaster recovery.

Tuesday, 13 August 2013

Improving application performance and overcoming storage bottlenecks are the top business priorities for virtualized environments

Some interesting reports:

Enterprise Strategy Group recently commented on how organizations are resisting the move to virtualize their tier-1 applications due to poor performance:

"With the increasing demands on IT, users have less tolerance for poor application performance," said Mike Leone, ESG Lab engineer, Enterprise Strategy Group. "Many organizations resist moving tier-1 applications to virtual servers for fear that workload aggregation will slow performance. As a poor attempt to combat the problem, organizations add more hardware and software to the IT infrastructure, but with that comes higher costs and increased complexity. ESG research on virtualization revealed that after budget concerns and lack of legacy application support, performance issues were the key concern preventing organizations from expanding their virtualization deployments."

Storage and I/O bottlenecks appear to be the major obstacles. A recently published Gridstore survey highlights the top priorities surrounding mid-market enterprises' virtual infrastructure requirements.



The industry survey revealed that improving application performance (51%) was the top business priority for virtualized environments, followed by the need to reduce I/O bottlenecks between VMs and storage (34%), the need for increased VM density (34%), the need to decrease storage costs (27%), and the need for improved manageability for virtualized systems (24%).



Respondents in the survey agreed that storage resources have a direct correlation to application performance and, as a result, to the ROI derived from virtualization projects. When asked about the top five factors they consider when choosing storage systems for virtualization projects, some of the highest-priority responses included the ability for storage to scale performance as needed (47%), the ability for storage to scale capacity as needed (47%), and the ability for storage to scale I/O as needed (37%).

The survey was conducted across 353 organizations representing multiple industry categories particularly in the areas of technology (14%), healthcare (13%), education (11%), government (8%), and finance (6%). The majority of responding companies had over 1,000 employees (93%) and more than 100 servers (59%).

http://www.storagenewsletter.com/news/marketreport/gridstore-application-performance

Friday, 9 August 2013

Virtualized Databases: How to strike the Right Balance between Solid State Technologies and Spinning Disks


Originally published in DataBase Trends and Applications Magazine, written by Augie Gonzalez
http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=90460

If money was not an issue, we wouldn’t be having this conversation. But money is front and center in every major database roll-out and optimization project, and even more so in the age of server virtualization and consolidation. It often forces us to settle for good enough, when we first aspired for swift and non-stop.

The financial tradeoffs are never more apparent than they have become with the arrival of lightning fast solid state technologies. Whether solid state disks (SSDs) or flash memories, we lust for more of them in the quest for speed, only to be moderated by silly constraints like shrinking budgets.

You know too well the pressure to please those pushy people behind the spreadsheets. The ones keeping track of what we spent in the past, and eager to trim more expenses in the future. They drive us to squeeze more from what we have before spending a nickel on new stuff. But if we step out of our technical roles and take the broader business view, their requests are really not that unreasonable. To that end, let’s see how we can strike a balance between flashy new hardware and the proven gear already on the data center floor. By that, I mean arriving at a good mix between solid state technologies and the conventional spinning disks that have served us well in years gone by.

On the face of it, the problem can be rather intractable. Even after spending tedious hours fine-tuning, you’d never really be able to manually craft the ideal conditions where I/O-intensive code sections are matched to flash while hard disk drives (HDDs) serve the less demanding segments. Well, let me take that back - you could when databases ran on their own private servers. The difficulty arises when the company opts to consolidate several database instances on the same physical server using server virtualization, and then wants the flexibility to move these virtualized databases between servers to load balance and circumvent outages.

Removing the Guesswork
When it was a single database instance on a dedicated machine, life was predictable. Guidelines for beefing up the spindle count and channels to handle additional transactions or users were well-documented. Not so when multiple instances collide in incalculable ways on the same server, made worse when multiple virtualized servers share the same storage resources. Under those circumstances you need little elves running alongside to figure out what’s best. And the elves have to know a lot about the behavioral and economic differences between SSDs and HDDs to do what’s right.

Turns out you can hire elves to help you do just that. They come shrink-wrapped in storage virtualization software packages. Look for the ones that can do automated storage tiering objectively - meaning, they don’t care who makes the hardware or where it resides.

On a more serious note, this new category of software really takes much of the guesswork, and the costs, out of the equation. Given a few hints on what should take priority, it makes all the right decisions in real time, keeping in mind all the competing I/O requests coming across the virtual wire. The software directs the most time-sensitive workloads to solid state devices and the least important ones to conventional drives or disk arrays. You can even override the algorithms to specifically pin some volumes on a preferred class of storage, say end-of-quarter jobs that must take precedence.
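As a minimal sketch of that decision logic -- heat-based placement with a manual pin that overrides the algorithm -- consider the following. The names and thresholds are hypothetical, not the actual product behavior.

```python
def place_volume(volume, temperature, pinned=None):
    """Pick a storage class for a volume: an explicit pin wins
    (say, end-of-quarter jobs that must take precedence);
    otherwise the volume's measured heat decides."""
    if pinned:
        return pinned                # manual override of the algorithm
    if temperature > 0.8:
        return "flash"               # most time-sensitive workloads
    if temperature > 0.3:
        return "fast-hdd"
    return "bulk-array"              # least demanding workloads

# Quarter-end reporting stays on flash even while it looks cold.
print(place_volume("q4-reports", temperature=0.1, pinned="flash"))
print(place_volume("scratch", temperature=0.9))  # -> 'flash'
```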

Better Storage Virtualization Products
The better storage virtualization products go one better. They provide additional turbo charging of disk requests by caching them on DRAM. Not just reads, but writes as well. Aside from the faster response, write caching helps reduce the duty cycle on the solid state memories to prolong their lives. Think how happy that makes the accountants. The storage assets are also thin provisioned to avoid wasteful over-allocation of premium-priced hardware.
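A toy model of that write-caching idea, with invented names: absorbing and coalescing writes in DRAM means a hot block costs one flash program at flush time, however many times it was rewritten in between.

```python
class WriteBackCache:
    """Toy DRAM write cache: absorb writes in memory and flush each
    dirty block to flash once, prolonging the flash device's life."""

    def __init__(self, flash):
        self.flash = flash   # stands in for the SSD tier
        self.dirty = {}      # block_id -> latest data, DRAM-resident

    def write(self, block_id, data):
        self.dirty[block_id] = data  # rewrites coalesce for free in DRAM

    def read(self, block_id):
        return self.dirty.get(block_id, self.flash.get(block_id))

    def flush(self):
        self.flash.update(self.dirty)  # one flash program per block
        self.dirty.clear()

flash = {}
cache = WriteBackCache(flash)
for i in range(1000):
    cache.write("hot-block", f"version-{i}")  # 1,000 DRAM writes...
cache.flush()                                  # ...one write to flash
```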

This brings us to the question of uptime. How do we maintain database access when some of this superfast equipment has to be taken out of service? Again, device-independent storage virtualization software has much to offer here. Particularly those products which can keep redundant copies of the databases and their associated files on separate storage devices, despite model and brand differences. What’s written to a pool of flash memory and HDDs in one room is automatically copied to another pool of flash/HDDs. The copies can be in an adjacent room or 100 kilometers away. The software effectively provides continuous availability using the secondary copy while the other piece of hardware is down for upgrades, expansion or replacement. Same goes if the room where the storage is housed loses air conditioning, suffers a plumbing accident, or is temporarily out of commission during construction/remodeling.

The products use a combination of synchronous mirroring between like or unlike devices, along with standard multi-path I/O drivers on the hosts to transparently maintain the mirror images. They automatically fail-over and fail-back without manual intervention. Speaking of money, no special database replication licenses are required either. The same mechanisms protecting the databases also protect other virtualized and physical workloads, helping to converge and standardize business continuity practices.
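Here is a minimal sketch of the synchronous-mirroring contract just described: a write is acknowledged only once both copies have it, and reads fail over to the surviving copy. The dictionaries standing in for storage devices are, of course, hypothetical.

```python
class SyncMirror:
    """Toy synchronous mirror: writes land on both devices before the
    acknowledgement; reads fail over to whichever copy survives."""

    def __init__(self, primary, secondary):
        self.copies = [primary, secondary]  # model/brand differences don't matter

    def write(self, block_id, data):
        for copy in self.copies:            # both legs complete before the ack
            copy[block_id] = data
        return "ack"

    def read(self, block_id):
        for copy in self.copies:            # multi-path-style failover
            if block_id in copy:
                return copy[block_id]
        raise KeyError(block_id)

room_a, room_b = {}, {}                     # adjacent rooms or 100 km apart
mirror = SyncMirror(room_a, room_b)
mirror.write("db-page-7", "payload")
room_a.clear()                              # take one device down for upgrades
print(mirror.read("db-page-7"))             # still served from the other copy
```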

And for the especially paranoid, you can keep distant replicas at disaster recovery (DR) sites as well. For this, asynchronous replication occurs over standard IP WANs.

If you follow the research from industry analysts, you’ve already been alerted to the difficulties of introducing flash memories/SSDs into an existing database environment with an active disk farm. Storage virtualization software can overcome many of these complications and dramatically shorten the transition time. For example, the richer implementations allow solid state devices to be inserted non-disruptively into the virtualized storage pools alongside the spinning disks. In the process, you simply classify them as your fastest tier, and designate the other storage devices as slower tiers. The software then transparently migrates disk blocks from the slower drives to the speedy new cards without disturbing users. You can also decommission older spinning storage with equal ease or move it to a DR site for the added safeguard.
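The non-disruptive insertion described above might be modeled like this; the rebalancing rule is a deliberately crude illustration, not the product's actual migration algorithm.

```python
def insert_fast_tier(pool_tiers, new_tier):
    """Classify the new solid state device as the fastest tier;
    the existing devices simply become the slower tiers."""
    pool_tiers.insert(0, new_tier)

def rebalance(blocks, pool_tiers):
    """Background pass: hot blocks drift toward the fast tier while
    volumes stay online (real systems copy, then switch pointers)."""
    ranked = sorted(blocks, key=lambda b: b["reads"], reverse=True)
    for i, block in enumerate(ranked):
        slot = min(i * len(pool_tiers) // len(ranked), len(pool_tiers) - 1)
        block["tier"] = pool_tiers[slot]

tiers = ["10k-sas", "7k-sata"]
blocks = [{"id": n, "reads": n * 10, "tier": "10k-sas"} for n in range(8)]
insert_fast_tier(tiers, "new-flash-card")
rebalance(blocks, tiers)
print(blocks[-1]["tier"])  # hottest block now lives on 'new-flash-card'
```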

Need for Insight
Of course, you’d like to keep an eye on what’s going on behind the scenes. Built-in instrumentation in the more comprehensive packages provides that precious insight. Real-time charts reveal fine grain metrics on I/O response and relative capacity consumption. They also provide historical perspectives to help you understand how the system as a whole responds when additional demands are placed on it, and anticipate when peak periods of activity are most likely to occur. Heat maps display the relative distribution of blocks between flash, SSDs and other storage media, including cloud-based archives.

What can you take away from this? For one, solid state technologies offer an attractive way to accelerate the speed of your critical database workloads. No surprise there. Used in moderation to complement fast spinning disks and high-capacity bulk storage already in place, SSDs help you strike a nice balance between excellent response time and responsible spending. To establish and maintain that equilibrium in virtualized scenarios, you should accompany the new hardware with storage virtualization software – the device-independent type. This gives you the optimal means to assimilate flash/SSDs into a high-performance, well-tuned, continuously available environment. In this way, you can please the financial overseers as well as the database subscribers, not to mention everyone responsible for its care and feeding - you included.

About the author:
Augie Gonzalez is director of product marketing for DataCore Software and has more than 25 years of experience developing, marketing and managing advanced IT products.  Before joining DataCore, he led the Citrix team that introduced simple, secure, remote access solutions for SMBs. Prior to Citrix, Gonzalez headed Sun Microsystems Storage Division’s Disaster Recovery Group. He’s held marketing / product planning roles at Encore Computers and Gould Computer Systems, specializing in high-end platforms for vehicle simulation and data acquisition.

Wednesday, 31 July 2013

Quorn Foods Announces “5 Nines” Availability and Improved SAP ERP Productivity with DataCore SANsymphony-V Storage Virtualization

Read: Quorn Foods Case Study

Leading food brand overhauls virtual infrastructure and optimizes Tier 1 applications: Takes Business Critical SAP ERP into the Virtual World and uses DataCore to reduce data mining times from 20 minutes to 20 seconds; plus enables Information Lifecycle Management via Auto Tiering.

"No one could fail to notice the dramatic leaps in performance now afforded by DataCore.”

Quorn Foods (http://www.quorn.com) has recently adopted DataCore SANsymphony-V software to achieve high availability, to turbocharge application performance and to implement an intelligent Information Lifecycle Management (ILM) data flow with structured auto-tiering.

Marlow Foods, better known as the owner of the Quorn brand, offers quality low-fat, meat-free food products to the discerning, health-conscious customer. Employing 600 people across three UK sites, Quorn’s Head of IT is Fred Holmes. Back in 2011, when it was sold off by a large parent company, Quorn had the opportunity to remap its entire existing physical server infrastructure, which was rapidly falling out of warranty. Fred notes:

"This was a three phrase project and had to be classified as a major systems overhaul that we were embarking on. In Phase 1, DataCore’s SANsymphony-V enabled smooth migration within a two-week period and dramatically increased IOPS, even with the high burden that virtual servers place when they are delivering thin client capabilities."

Phase 1: Server-side Virtualization Progresses into Greenfield Site with DataCore providing the centralized storage and 99.999% reliability:

Quorn consulted its trusted IT partner and DataCore gold partner, Waterstons, to assist with the major infrastructure overhaul. With a greenfield site for virtualization, Fred and the assigned Waterstons project team provided compelling financial analysis showing dramatic consolidation and resource savings to boot. A working proof of concept was deployed to substantiate the findings, to test that a Microsoft Remote Desktop Services (RDS) farm could support all applications for a test user group, and to prove the benefits of server virtualization.

Two successful months later, the project team implemented full server-side virtualization with three additional R710 hosts, all Brocade Fibre Channel attached to a storage area network (SAN) to support the full VMware vSphere Enterprise feature set. In total, 30 workloads were virtualized into the new environment to allow older physical servers to be retired. From the desktop perspective, a new RDS farm replaced 400 traditional desktops with thin client capabilities. DataCore’s SANsymphony-V solution provided the essential cost-effective centralized storage running across two Dell T710 commodity servers. DataCore’s storage hypervisor provided one general-purpose synchronously mirrored SAN pool of 7TB usable (across a total of 48 10k SAS spindles in MD1220 SAS-attached storage shelves) to deliver 99.999% reliability. The project team knew that the success of any robust, responsive VMware environment hinges on the abilities and performance of the storage infrastructure that sits beneath, especially true in Quorn’s highly virtualized infrastructure with users interacting directly with virtual RDS Session Hosts. From the business users’ perspective, the virtualized estate was a turbocharged world.

Phase II – taking Business Critical ERP into the Virtual World and using DataCore to reduce data mining times from 20 minutes to 20 seconds:

Phase II covered virtualization of SAP Enterprise Resource Planning for the financial, HR, accounts and sales platforms. With around 8,500 outlets stocking the Quorn brand across the UK alone, Marlow Foods has an extremely high dependency on its SAP ERP servers to drive critical business advantages across all departments. The challenge was to integrate the current SAP physical servers into the virtualized environment, whilst maintaining 99.999% reliability and not affecting existing virtual machines reliant on the SAN. To address this challenge, the project team added another R710 host to the cluster and a further 4TB of usable synchronously mirrored storage within a new storage pool dedicated entirely to SAP (across a further 48 10k SAS spindles), and began rebuilding the SAP servers into the virtual infrastructure. This meant transitioning huge databases from the old physical environment. Proof would come at the end of the month, when database query volumes were traditionally at their highest and, in the old environment, performance expectations went unmet amid erratic response times.

In fact, the data mining queries were returned within 20 seconds, compared to 20 minutes in the previous physical environment. This is in no small part down to the way that DataCore’s SANsymphony-V leverages disk resources, assigning I/O tasks to very fast server RAM and CPU to accelerate throughput and to speed up response when reading and writing to disk. And with the wholly mirrored configuration, continuous availability is afforded.

"Like all things in IT, dramatic improvements to the infrastructure remain invisible to the user who only notices when things go wrong. But in this instance, no one could fail to notice the dramatic leaps in performance that was now afforded," Fred notes.

Phase III: Enhancing the Virtualized Estate with Auto-Tiering:

With everything virtualized, Fred and the team gave themselves six months to reflect and monitor the new infrastructure before suggesting additional enhancements. What Fred suspected was that he could also achieve greater intelligence from the SAN itself. Simon Birbeck of Waterstons, one of the U.K.’s DataCore Master Certified Installation Engineers, designed a performance-enhancing model to automatically migrate data blocks to the most appropriate class of storage within the estate. Thinly provisioned SAN capacity was at around 80% utilization, but for 2013 planning Fred and the Waterstons team had allowed for 20% year-on-year growth, potentially stretching utilization to the maximum by the end of the year. Simon recommended switching to a three-tier SAN design to facilitate the best cascading practices of Information Lifecycle Management (ILM).

A red top tier comprises a new layer of SSD flash storage, designed to be always full and utilized by the most frequently read blocks for extremely fast response. A pre-existing amber mid-tier caters for average-use data blocks served by commodity 10k SAS drives. Beneath sits a blue tier as the ‘catch all’ layer for the least frequently accessed data, maintained on low-cost, high-capacity 7.2k SAS spindles.

Fred summarizes, "What Waterstons recommended was an intelligent usable form of ILM with DataCore’s SANsymphony-V at the front-end making the intelligent decision as to which blocks of data should be allocated where."

Indeed SANsymphony-V has provided both strong reporting and accurate planning for data growth. Built-in diagnostics help to pro-actively identify when a problem is manifesting, changing the management role from reactive to proactive/intelligent. For the future, Marlow Foods will look to expand on the high availability/business continuity environment afforded by SANsymphony-V by adding a further asynchronous replica at another site to further protect the SAP ERP environment. The scalability of SANsymphony-V brings a new level of comfort not possible with other forms of storage.

Fred takes the final words: "DataCore’s SANsymphony-V now reliably underpins our entire virtual estate. From a transformation perspective we have new levels of availability and enhanced decision making for both IT and the users."

To learn more about how DataCore can improve your critical business applications, please click here: http://pages.datacore.com/WP_VirtualizingBusinessCriticalApps.html

About Quorn Foods
Today, Quorn Foods is an independent company focused on creating the world’s leading meat-alternative business. Quorn Foods is headquartered in Stokesley, North Yorkshire, and employs around 600 people across three UK sites. Launched nationally in 1995, the Quorn brand offers a wide range of meat-alternative products, made using the proprietary technology of Mycoprotein, which uniquely delivers the taste and texture of meat to the increasing number of people who have chosen to reduce, replace or cut out their meat consumption and who still want to eat a normal, healthy diet.

Monday, 29 July 2013

The Need for SSD Speed, Tempered by Virtualization Budgets

What IT department doesn't lust for the hottest new SSDs? Cost usually tempers that lust.

Article from Virtualization Review By Augie Gonzalez: The Need for SSD Speed, Tempered by Virtualization Budgets

What IT department doesn't lust for the hottest new SSDs? You can just savor the look of amazement on users' faces when you amp up their systems with these solid state memory devices. Once-sluggish virtualized apps now peg the speed dial.

Then you wake up. The blissful dream is interrupted by the quarterly budget meeting. Your well-substantiated request to buy several terabytes of server-side flash is “postponed" -- that's finance's code-word for REJECTED. Reading between the lines, they're saying, “No way we're spending that kind of money on more hardware any time soon.”

Ironically, the same financial reviewers recommend fast-tracking additional server virtualization initiatives to further reduce the number of physical machines in the data center. Seems they didn't hear the first part of your SSD argument. Server consolidation slows down mission-critical apps like SQL Server, Oracle, Exchange and SAP, to the point where response times are unacceptable. Flash memories can buy back quickness.

This not-so-fictional scenario plays out more frequently than you might guess. According to a recent survey of 477 IT professionals conducted by DataCore Software, it boils down to one key concern: storage-related cost. Here are some other findings:
  • Cost considerations are preventing organizations from adopting flash memory and SSDs in their virtualization roll-outs. More than half of respondents (50.2 percent) said they are not planning to use flash/SSD for their virtualization projects due to cost.
  • Storage-related costs and performance issues are the two most significant barriers preventing respondents from virtualizing more of their workloads. 43 percent said that increasing storage-related costs were a “serious obstacle” or “somewhat of an obstacle.” 42 percent of respondents said the same about performance degradation or inability to meet performance expectations.
  • When asked about what classes of storage they are using across their environments, nearly six in ten respondents (59 percent) said they aren't using flash/SSD at all, and another two in ten (21 percent) said they rely on flash/SSD for just 5 percent of their total storage capacity. 
Yet, there is a solution – one far more likely to meet with approval from the budget oversight committee, and still please the user community.

Rather than indiscriminately stuff servers full of flash, I'd suggest using software to share fewer flash cards across multiple servers in blended pools of storage. By blended I mean a small percentage of flash/SSD alongside your current mix of high-performance disks and bulk storage. An effective example of this uses hardware- and manufacturer-agnostic storage virtualization techniques packaged in portable software to dynamically direct workloads to the proper class (or tier) of storage. The auto-tiering intelligence constantly optimizes the price/performance yield from the balanced storage pool. It also thin provisions capacity so valuable flash space doesn't get gobbled up by hoarder apps.
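Here is a minimal sketch of the thin-provisioning idea mentioned above -- capacity is promised up front but consumed only on first write. The class and numbers are invented for illustration.

```python
class ThinPool:
    """Toy thin-provisioned pool: volumes are oversubscribed promises,
    and physical extents are consumed only when blocks are written."""

    def __init__(self, physical_extents):
        self.free = physical_extents
        self.volumes = {}              # name -> {block_id: extent}

    def create_volume(self, name, virtual_extents):
        self.volumes[name] = {}        # a promise, not a reservation

    def write(self, name, block_id):
        vol = self.volumes[name]
        if block_id not in vol:        # allocate on first write only
            if self.free == 0:
                raise RuntimeError("pool exhausted -- time to add hardware")
            self.free -= 1
            vol[block_id] = object()   # stands in for a real extent

pool = ThinPool(physical_extents=100)
pool.create_volume("exchange", virtual_extents=1000)  # hoarder app, oversubscribed
pool.write("exchange", block_id=0)  # only now is an extent consumed
print(pool.free)                    # -> 99
```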


The dynamic data management scheme gets one additional turbo boost. The speed-up comes from advanced caching algorithms in the software, for both storing and retrieving disk/flash blocks. In addition to cutting input/output latencies in half (or better), you'll get back tons of space previously wasted on short-stroking hard disk drives (HDDs). In essence, you no longer need to overprovision disk spindles trying to accelerate database performance, nor do you need to overspend on SSDs.

Of course, there are other ways for servers to share solid state memories. Hybrid arrays, for example, combine some flash with HDDs to achieve a smaller price tag. There are several important differences between buying these specialized arrays and virtualizing your existing storage infrastructure to take advantage of flash technologies, not the least of which is how much of your current assets can be leveraged. You could propose to rip out and replace your current disk farm with a bank of hybrid arrays, but how likely is that to get the OK?

Instead, device-independent storage virtualization software uniformly brings auto-tiering, thin provisioning, pooling, advanced read/write caching and several replication/snapshot services to any flash/SSD/HDD device already on your floor. For that matter, it covers the latest gear you'll be craving next year including any hybrid systems you may be considering. The software gets the best use of available hardware resources at the least cost, regardless of who manufactured it or what model you've chosen.

Virtualizing your storage also complies with the greater corporate mandate to further virtualize your data center. It's one of the unusual times when being extra frugal pays extra big dividends.

No doubt you've been bombarded by the term software-defined data center, and more recently, software-defined storage. It has become a synonym for storage virtualization, with a broader, infrastructure-wide scope – one spanning the many disk- and flash-based technologies, as well as the many nuances that differentiate various models and brands. Without getting lost in semantics, it boils down to using smart software to get your arms around, and derive greater value from, all the storage hardware assets at your disposal.

Sounds like a very reasonable approach given how quickly disk technologies and packaging turn over in the storage industry.

One word of caution: Any software-defined storage inextricably tied to a piece of hardware may not be what the label advertises.

Yes, the need for speed is very real, but so is the reality of tight funding. Others have successfully surmounted the same predicament by following the guidance above. It will surely ease the introduction of flash/SSD technologies into your current environment, even under the toughest scrutiny. At the same time, you'll strike the difficult but necessary balance between your business requirements for fast virtualized apps and your very real budget constraints – without question, a most desirable outcome.

About the Author
Augie Gonzalez is director of product marketing for storage virtualization software developer DataCore Software.

Saturday, 20 July 2013

Best of the Tour de France 2013 Features DataCore's Storage Virtualization Mascot 'Corey'

Check out the Reuters photo coverage of the cycling event!
Featured Slide Show
Cycling fan Dave Brown from Park City of the U.S. poses while waiting for the riders climbing the Alpe d'Huez mountain during the 172.5km eighteenth stage of the centenary Tour de France cycling race from Gap to l'Alpe d'Huez, in the French Alps, July 18, 2013.

REUTERS/Jacky Naegelen
http://sports.yahoo.com/photos/best-of-the-tour-de-france-2013-slideshow/cycling-fan-dave-brown-park-city-u-poses-photo-135555539.html

Additional Pictures
Corey

Thursday, 11 July 2013

One of Europe's largest construction projects, linking Scandinavia to mainland Europe, picks DataCore storage virtualization to tier data and speed critical applications

Femern A/S, the managing company behind one of the most ambitious engineering projects connecting Scandinavia to mainland Europe, the Fehmarnbelt Tunnel, has adopted DataCore's SANsymphony-V to speed auto-tiering of data and critical applications.

[Image: Fehmarnbelt tunnel]

Tim Olsson, the IT manager gearing up to support the forthcoming €5.5 billion construction of the 18km immersed underwater tunnel, due for completion in 2021, said: "We are in the process of commencing 4 major construction contracts this summer to facilitate the start of the build of the Fehmarnbelt tunnel. The number of employees will start to escalate dramatically as we progress through the build; as well as the volume of CAD intensive documents and building designs and engineering specifications that will need to be accessed on the network."

Femern began the process of examining the present infrastructure and anticipating how it would scale as the construction progressed.

Within their Copenhagen data centre, Femern operated a Dell EqualLogic SAN that was reaching end of life: spare parts were becoming increasingly hard to obtain, and when replacement drives did arrive, premature failures were frequent. The organisation had already adopted virtualisation, with their critical apps running virtualised on VMware, but those apps appeared slow due to disk I/O bottlenecks as workloads contended for the struggling EqualLogic storage. This was therefore not the time to consider allocating more disks, but an opportunity for a complete overhaul to an alternative software-defined SAN and infrastructure.

"Not only was the EqualLogic box falling out of warranty and therefore coming with an increased high price tag, it seemed to have a lot of features that we never, or rarely, used. We felt certain that even if we considered an upgraded, latest EqualLogic box, that we would still be constrained by the hardware rigidity and lacking the impending scalability and flexibility that would be required to accommodate the various construction phases."

Femern consulted their day-to-day supplier of all IT solutions, COMM2IG, to make an official technical proposal on alternative solutions to run in Copenhagen and at the construction sites and exhibition centre at the mouth of the tunnel in Rødbyhavn.

COMM2IG examined the existing environment in detail. Femern were running VMware's ESX for server virtualisation, but availability, ease of management and automatic allocation of storage were lagging behind. COMM2IG recommended installing the SANsymphony-V software storage hypervisor to support the latest version of VMware's vSphere, delivering their tier-1 applications virtually. To overcome performance issues, COMM2IG also recommended that Femern incorporate flash technology from Fusion-io, Inc. for their most important applications, achieving far faster speeds than spinning disks and eliminating performance bottlenecks.

The software-based solution – offering performance, flexibility and investment protection: two SANsymphony-V nodes were implemented on two HP DL380 G7 servers. Each server was provisioned with two CPUs, 96GB of RAM for a caching performance boost, and 320GB Fusion-io PCIe ioDrive2 cards for flash integration in the storage pool as part of a hybrid, automatically tiered storage configuration.

VMware's server virtualisation was upgraded to vSphere 5.1. Overall, the environment offered 50TB of mirrored storage based on HP direct-attached SAS and SATA drives, augmented by a flash storage layer. Use of the Fusion-io acceleration cards would be optimised through DataCore's auto-tiering capabilities: reserving the more expensive flash layer for the most demanding applications, and ensuring that other, less frequently accessed data is relocated automatically to Femern's existing SAS- and SATA-based storage, enabling a three-level (1/2/3) auto-tiered environment.
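For illustration, the resulting layout can be pictured as a plain data structure. The sizes and device types below come from the article itself; the field names and the shape of the structure are hypothetical.

```python
# The article's three-tier layout, restated as plain data. Sizes and device
# types come from the text above; field names and structure are hypothetical.
FEMERN_POOL = {
    "nodes": ["HP DL380 G7 #1", "HP DL380 G7 #2"],   # mirrored SANsymphony-V pair
    "dram_cache_gb_per_node": 96,
    "mirrored_capacity_tb": 50,
    "tier_1": {"media": "Fusion-io ioDrive2 PCIe flash", "size_gb": 320,
               "role": "most demanding applications"},
    "tier_2": {"media": "HP direct-attached SAS", "role": "active data"},
    "tier_3": {"media": "HP direct-attached SATA", "role": "less accessed data"},
}
```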

Results: Installation commenced with the help of COMM2IG and, twelve months down the line, Femern are well placed to comment on the success of their software-defined data centre. Most noticeable is the increased performance of their SQL applications, which previously struggled with latency issues.

"From a user perspective, they used to experience slow response times and a performance lag from their SQL & Exchange applications running virtually on VMware. Today, and in the future, with DataCore in the background, the applications appear robust, instantaneous and seamless. We have achieved this without the cost prohibitive price tag of pure flash. That's what any IT department strives to achieve."

Olsson summarises: "As the data volumes increase up to ten-fold as construction starts in 2015, intelligent allocations to flash will increase the lifespan of the Fusion-io ioDrive and offset the overall cost of ownership, delivering less critical data to the most cost-effective tier that can deliver acceptable performance. With this multi-faceted approach to storage allocation using DataCore's SANsymphony-V solution, Femern is able to manage all devices under one management interface, regardless of brand and type. Given that DataCore has been robustly supporting VMware environments for many years, SANsymphony-V is viewed as the perfect complement, offering enhanced storage management and control intelligence. For Femern, this has manifested in a reduction of downtime: adding new VMs, taking backups, migrating data and expanding capacity can all now be done without outages.

"What we have achieved here with DataCore storage virtualisation software sets us on the road to affordable, flexible growth to eliminate storage related downtime. Add the blistering speed of Fusion-io acceleration and we have created a super performing, auto tiered storage network, that does as the tunnel itself will do; connects others reliably, super fast and without stoppages," concludes Olsson.

From StorageNewsletter: https://www.storagenewsletter.com/news/customer/femern-fehmarnbelt-tunnel-datacore

A Defining Moment for the Software-Defined Data Center

Original post:  http://www.storagereview.com/a_defining_moment_for_the_softwaredefined_data_center

For some time, enterprise IT heads heard the phrase, “get virtualized or get left behind,” and after kicking the tires they found the benefits couldn’t be denied – the rush was on. Now, there’s a push to create software-defined data centers. However, there is some trepidation as to whether these ground-breaking, more flexible environments can adequately handle the performance and availability requirements of business-critical applications, especially when it comes to the storage part of the equation. While decision-makers have had good reason for concern, they now have an even better reason to celebrate: new storage virtualization platforms have proven able to overcome these I/O obstacles.
Software-defined Storage

Just as server hypervisors provided a virtual operating platform, a parallel approach to storage is quickly transforming the economics of virtualization for organizations of all sizes by offering the speed, scalability and continuous availability necessary to achieve the full benefits of software-defined data centers. Specifically, these advantages are widely reported:
  • Elimination of storage-related I/O bottlenecks in virtualized data centers
  • Harnessing flash storage resources efficiently for even greater application performance
  • Ensuring fast and always available applications without a substantial storage investment
Performance slowdowns caused by I/O bottlenecks and downtime attributed to storage-related outages are two of the foremost reasons why enterprises have refrained from virtualizing their Tier-1 applications such as SQL Server, Oracle, SAP and Exchange. This comes across clearly in the recent Third Annual State of Virtualization Survey conducted by my company, which showed that 42% of respondents cited performance degradation or an inability to meet performance expectations as an obstacle preventing them from virtualizing more of their workloads. Yet effective storage virtualization platforms are now successfully overcoming these issues by using device-independent adaptive caching and performance-boosting techniques to absorb wildly variable workloads, enabling applications to run faster when virtualized.
To further increase Tier-1 application responsiveness, companies often spend excessively on flash memory-based SSDs. The Third Annual State of Virtualization Survey also reveals that 44% of respondents found disproportionate storage-related costs an obstacle to virtualization. Again, effective storage virtualization platforms now provide a solution with features such as auto-tiering, which optimize the use of these premium-priced resources alongside more modestly priced, higher-capacity disk drives.
Such an intelligent software platform constantly monitors I/O behavior and can auto-select between server memory caches, flash storage and traditional disk resources in real time. This ensures that the most suitable class or tier of storage device is assigned to each workload based on priorities and urgency. As a result, a software-defined data center can now deliver Tier-1 application performance with optimum cost efficiency and maximum ROI from existing storage.
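A hypothetical fragment shows the kind of decision being described: combine a measured I/O rate with an administrator-assigned priority and pick the storage class. The scoring formula and thresholds are invented for the example; the actual policy engine is more sophisticated and is not public.

```python
# Hypothetical tier-selection fragment. The scoring formula and thresholds are
# invented for the example; the real policy engine is not public.
def select_tier(io_rate, priority):
    """Pick a storage class from a measured I/O rate and a workload priority.

    io_rate  -- recent I/Os per second against the workload's blocks
    priority -- 1 (low) .. 10 (business-critical), set by the administrator
    """
    score = io_rate * priority        # hotter and more important -> faster media
    if score > 15_000:
        return "dram_cache"
    if score > 1_500:
        return "flash"
    return "disk"

# A busy SQL Server volume lands in DRAM cache; a nightly-report volume
# stays on ordinary disk.
assert select_tier(io_rate=2_000, priority=9) == "dram_cache"
assert select_tier(io_rate=300, priority=2) == "disk"
```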
Once I/O-intensive Tier-1 applications are virtualized, the storage virtualization platform ensures high availability. It eliminates single points of failure and disruption through application-transparent physical separation – mirrored copies stretched across rooms or even off-site – with full auto-recovery capabilities designed for the highest levels of business continuity. The right platform can effectively virtualize whatever storage is present, whether direct-attached or SAN-connected, to achieve the robust, responsive shared storage environment necessary to support highly dynamic, virtual IT environments.
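The high-availability claim rests on synchronous mirroring across separated nodes, which can be sketched in a few lines. This is an illustration of the general technique, not the vendor's protocol: writes are attempted on two physically separated nodes, and the volume stays online, in degraded mode, as long as one copy survives; reads fail over transparently.

```python
# Synchronous mirroring in miniature -- the general technique, not the
# vendor's protocol. Writes go to two physically separated nodes; reads fail
# over transparently to whichever copy is reachable.
class MirroredVolume:
    def __init__(self, node_a, node_b):
        self.nodes = [node_a, node_b]    # e.g. one per room, or one off-site

    def write(self, block, data):
        acks = 0
        for node in self.nodes:
            try:
                node.write(block, data)
                acks += 1
            except OSError:
                pass                     # a surviving copy keeps the app running
        if acks == 0:
            raise OSError("both mirror nodes failed")
        return acks                      # 2 = fully protected, 1 = degraded

    def read(self, block):
        for node in self.nodes:
            try:
                return node.read(block)  # automatic failover to the other side
            except OSError:
                continue
        raise OSError("no mirror copy reachable")
```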
Yes, the storage virtualization platform is a defining moment for the software-defined data center. The performance, speed and high availability required for mission-critical databases and applications in a virtualized environment have been realized. Barriers have been removed, and there is a clear, supported path to greater cost efficiency. Still, selecting the right platform is critical to a data center: technology that is full-featured and has been proven “in the field” is essential. It is also important to go with an independent, pure software virtualization solution in order to avoid hardware lock-in and to take advantage of future storage developments that will undoubtedly occur.
George Teixeira - Chief Executive Officer & President DataCore Software
George Teixeira is CEO and president of DataCore Software, the premier provider of storage virtualization software. The company’s software-as-infrastructure platform solves the big problem stalling virtualization initiatives by eliminating the storage-related barriers that make virtualization too difficult and too expensive.

Paul Murphy Joins DataCore as Vice President of Worldwide Marketing to Build on Company’s Software-Defined Storage Momentum

Former VMware and NetApp Marketing and Sales Director to Spearhead Strategic Marketing and Demand Generation to Drive Company’s Growth and Market Leadership in Software-Defined Storage

“The timing is perfect. DataCore has just updated its SANsymphony-V storage virtualization platform and it is well positioned to take advantage of the paradigm shift and acceptance of software-defined storage infrastructures,” said Murphy. “After doing the market research and getting feedback from numerous customers, it is clear to me that there is a large degree of pent-up customer demand. Needless to say, I’m eager to spread the word on DataCore’s value proposition and make a difference in this exciting and critical role.”

Paul Murphy joins DataCore Software as vice president of worldwide marketing. Murphy will oversee DataCore’s demand generation, inside sales and strategic marketing efforts needed to expand and accelerate the company’s growth and presence in the storage and virtualization sectors. He brings to DataCore a proven track record and a deep understanding of virtualization, storage technologies and the pivotal forces affecting customers in today’s ‘software-defined’ world. Murphy will drive the company’s marketing organization and programs to fuel sales of DataCore’s acclaimed storage virtualization software solution, SANsymphony-V.

“Our software solutions have been successfully deployed at thousands of sites around the world and now our priority is to reach out to a broader range of organizations that don’t yet realize the economic and productivity benefits they can achieve through the adoption of storage virtualization and SANsymphony-V,” said DataCore Software’s Chief Operating Officer, Steve Houck. “Murphy brings to the company a fresh strategic marketing perspective, the ability to simplify our messaging, new ways to energize our outbound marketing activities and the drive to expand our visibility and brand recognition around the world.”

With nearly 15 years of experience in the technology industry, Murphy possesses a diverse range of skills spanning engineering, services, sales and marketing, which will be instrumental in overseeing DataCore’s marketing activities around the globe. He was previously Director of Americas SMB Sales and Worldwide Channel Development Manager at VMware, where he developed go-to-market strategies and oversaw direct and inside channel sales teams in both domestic and international markets.

Prior to that, Murphy was senior product marketing manager at NetApp, focusing on backup and recovery solutions and its Virtual Tape Library product line. In this role, he led business development activities, sales training, compensation programs and joint marketing campaigns. An accomplished communicator, he has been a keynote speaker at numerous industry events, trade shows, end-user seminars, sales training events, partner/reseller events and webcasts. Before moving into sales and marketing, Murphy had a successful career in engineering.
