Tuesday, 13 August 2013

Improving application performance and overcoming storage bottlenecks are the top business priorities for virtualized environments

Some interesting reports:

Enterprise Strategy Group (ESG) recently commented on how organizations are resisting the move to virtualize their tier-1 applications due to poor performance:

"With the increasing demands on IT, users have less tolerance for poor application performance," said Mike Leone, ESG Lab engineer, Enterprise Strategy Group. "Many organizations resist moving tier-1 applications to virtual servers for fear that workload aggregation will slow performance. As a poor attempt to combat the problem, organizations add more hardware and software to the IT infrastructure, but with that comes higher costs and increased complexity. ESG research on virtualization revealed that after budget concerns and lack of legacy application support, performance issues were the key concern preventing organizations from expanding their virtualization deployments."

Storage and I/O bottlenecks appear to be the major obstacles. Gridstore's recently published survey highlights the top priorities surrounding mid-market enterprises' virtual infrastructure requirements.



The industry survey revealed that improving application performance (51%) was the top business priority for virtualized environments, followed by the need to reduce I/O bottlenecks between VMs and storage (34%), the need for increased VM density (34%), the need to decrease storage costs (27%), and the need for improved manageability for virtualized systems (24%).



Respondents in the survey demonstrated agreement that storage resources have a direct correlation to application performance and, as a result, to the ROI derived from virtualization projects. When asked about the top five factors they consider when choosing storage systems for virtualization projects, some of the highest-priority responses included the ability for storage to scale performance as needed (47%), the ability for storage to scale capacity as needed (47%), and the ability for storage to scale I/O as needed (37%).

The survey was conducted across 353 organizations representing multiple industry categories, particularly technology (14%), healthcare (13%), education (11%), government (8%), and finance (6%). The majority of responding companies had over 1,000 employees (93%) and more than 100 servers (59%).

http://www.storagenewsletter.com/news/marketreport/gridstore-application-performance

Friday, 9 August 2013

Virtualized Databases: How to Strike the Right Balance Between Solid State Technologies and Spinning Disks


Originally published in Database Trends and Applications magazine, written by Augie Gonzalez
http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=90460

If money were not an issue, we wouldn’t be having this conversation. But money is front and center in every major database roll-out and optimization project, and even more so in the age of server virtualization and consolidation. It often forces us to settle for good enough, when we first aspired to swift and non-stop.

The financial tradeoffs are never more apparent than they have become with the arrival of lightning fast solid state technologies. Whether solid state disks (SSDs) or flash memories, we lust for more of them in the quest for speed, only to be moderated by silly constraints like shrinking budgets.

You know too well the pressure to please those pushy people behind the spreadsheets. The ones keeping track of what we spent in the past, and eager to trim more expenses in the future. They drive us to squeeze more from what we have before spending a nickel on new stuff. But if we step out of our technical roles and take the broader business view, their requests are really not that unreasonable. To that end, let’s see how we can strike a balance between flashy new hardware and the proven gear already on the data center floor. By that, I mean arriving at a good mix between solid state technologies and the conventional spinning disks that have served us well in years gone by.

On the face of it, the problem can be rather intractable. Even after tedious hours of fine-tuning, you’d never really be able to manually craft the ideal conditions where I/O-intensive code sections are matched to flash, while hard disk drives (HDDs) serve the less demanding segments. Well, let me take that back - you could when databases ran on their own private servers. The difficulty arises when the company opts to consolidate several database instances on the same physical server using server virtualization. And then wants the flexibility to move these virtualized databases between servers to load balance and circumvent outages.

Removing the Guesswork
When it was a single database instance on a dedicated machine, life was predictable. Guidelines for beefing up the spindle count and channels to handle additional transactions or users were well-documented. Not so when multiple instances collide in incalculable ways on the same server, made worse when multiple virtualized servers share the same storage resources. Under those circumstances you need little elves running alongside to figure out what’s best. And the elves have to know a lot about the behavioral and economic differences between SSDs and HDDs to do what’s right.

Turns out you can hire elves to help you do just that. They come shrink-wrapped in storage virtualization software packages. Look for the ones that can do automated storage tiering objectively - meaning, they don’t care who makes the hardware or where it resides.

On a more serious note, this new category of software really takes much of the guesswork, and the costs, out of the equation. Given a few hints on what should take priority, it makes all the right decisions in real time, keeping in mind all the competing I/O requests coming across the virtual wire. The software directs the most time-sensitive workloads to solid state devices and the least important ones to conventional drives or disk arrays. You can even override the algorithms to specifically pin some volumes on a preferred class of storage, say end-of-quarter jobs that must take precedence.
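To make the idea concrete, here is a minimal sketch in Python of the kind of decision such tiering software automates. Every name and threshold here (Volume, choose_tier, the IOPS cut-offs) is invented for illustration; this is not DataCore's actual logic or API:

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    io_rate: float                    # recent I/O operations per second
    pinned_tier: str | None = None    # manual override, e.g. end-of-quarter jobs

# Tiers ordered fastest to slowest; the IOPS thresholds are made up.
TIERS = [("flash", 5000.0), ("fast_hdd", 500.0), ("bulk_hdd", 0.0)]

def choose_tier(vol: Volume) -> str:
    """Pick a tier for a volume: an explicit pin always wins,
    otherwise the busiest volumes land on the fastest storage."""
    if vol.pinned_tier is not None:
        return vol.pinned_tier
    for tier, min_iops in TIERS:
        if vol.io_rate >= min_iops:
            return tier
    return TIERS[-1][0]

print(choose_tier(Volume("oltp_db", io_rate=12000)))                      # flash
print(choose_tier(Volume("archive", io_rate=40)))                         # bulk_hdd
print(choose_tier(Volume("eoq_batch", io_rate=40, pinned_tier="flash")))  # flash
```

The pinned_tier field models the override just described: an administrator's explicit placement always beats the algorithm.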

Better Storage Virtualization Products
The better storage virtualization products go one better. They provide additional turbo charging of disk requests by caching them on DRAM. Not just reads, but writes as well. Aside from the faster response, write caching helps reduce the duty cycle on the solid state memories to prolong their lives. Think how happy that makes the accountants. The storage assets are also thin provisioned to avoid wasteful over-allocation of premium-priced hardware.
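A toy model illustrates why DRAM write caching both speeds response and prolongs flash life: repeated writes to the same block are absorbed in memory and reach the device only once, at flush time. This is a simplified Python sketch with an invented interface, not any product's design:

```python
class WriteBackCache:
    """Toy DRAM write-back cache: acknowledge writes from memory,
    flush dirty blocks to the backing store lazily."""
    def __init__(self, backing: dict):
        self.backing = backing      # stands in for the SSD/HDD tier
        self.cache = {}             # block_id -> data
        self.dirty = set()

    def write(self, block_id: int, data: bytes):
        self.cache[block_id] = data     # fast: DRAM only
        self.dirty.add(block_id)        # remember to persist later

    def read(self, block_id: int) -> bytes:
        if block_id in self.cache:      # cache hit: no device I/O
            return self.cache[block_id]
        data = self.backing[block_id]   # miss: fetch and keep for next time
        self.cache[block_id] = data
        return data

    def flush(self):
        for block_id in self.dirty:     # one device write per block,
            self.backing[block_id] = self.cache[block_id]  # however many updates
        self.dirty.clear()

ssd = {}
cache = WriteBackCache(ssd)
for i in range(1000):
    cache.write(7, f"version {i}".encode())  # 1,000 logical writes...
cache.flush()                                # ...one physical flash write
print(ssd[7])
```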

This brings us to the question of uptime. How do we maintain database access when some of this superfast equipment has to be taken out of service? Again, device-independent storage virtualization software has much to offer here. Particularly those products which can keep redundant copies of the databases and their associated files on separate storage devices, despite model and brand differences. What’s written to a pool of flash memory and HDDs in one room is automatically copied to another pool of flash/HDDs. The copies can be in an adjacent room or 100 kilometers away. The software effectively provides continuous availability using the secondary copy while the other piece of hardware is down for upgrades, expansion or replacement. Same goes if the room where the storage is housed loses air conditioning, suffers a plumbing accident, or is temporarily out of commission during construction/remodeling.

The products use a combination of synchronous mirroring between like or unlike devices, along with standard multi-path I/O drivers on the hosts to transparently maintain the mirror images. They automatically fail-over and fail-back without manual intervention. Speaking of money, no special database replication licenses are required either. The same mechanisms protecting the databases also protect other virtualized and physical workloads, helping to converge and standardize business continuity practices.
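Conceptually, the mirroring contract is simple: a write is acknowledged only after both sides hold it, so either copy can serve reads if the other fails. A deliberately minimal Python sketch of that idea follows (invented function names; real products implement this in drivers, with far more care around ordering, latency and failure handling):

```python
def mirrored_write(block_id, data, primary, secondary):
    """Synchronous mirroring: acknowledge the host only after BOTH
    storage devices hold the block, keeping the copies identical."""
    primary[block_id] = data        # write to pool A
    secondary[block_id] = data      # write to pool B (may be 100 km away)
    return "ack"                    # host proceeds; both sides consistent

def read(block_id, primary, secondary, primary_up=True):
    """Multi-path style read: prefer the primary path, fail over
    transparently to the mirror when the primary is down."""
    source = primary if primary_up else secondary
    return source[block_id]

pool_a, pool_b = {}, {}
mirrored_write(42, b"payroll", pool_a, pool_b)
print(read(42, pool_a, pool_b, primary_up=False))  # served from the mirror
```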

And for the especially paranoid, you can keep distant replicas at disaster recovery (DR) sites as well. For this, asynchronous replication occurs over standard IP WANs.

If you follow the research from industry analysts, you’ve already been alerted to the difficulties of introducing flash memories/SSDs into an existing database environment with an active disk farm. Storage virtualization software can overcome many of these complications and dramatically shorten the transition time. For example, the richer implementations allow solid state devices to be inserted non-disruptively into the virtualized storage pools alongside the spinning disks. In the process, you simply classify them as your fastest tier, and designate the other storage devices as slower tiers. The software then transparently migrates disk blocks from the slower drives to the speedy new cards without disturbing users. You can also decommission older spinning storage with equal ease or move it to a DR site for the added safeguard.

Need for Insight
Of course, you’d like to keep an eye on what’s going on behind the scenes. Built-in instrumentation in the more comprehensive packages provides that precious insight. Real-time charts reveal fine grain metrics on I/O response and relative capacity consumption. They also provide historical perspectives to help you understand how the system as a whole responds when additional demands are placed on it, and anticipate when peak periods of activity are most likely to occur. Heat maps display the relative distribution of blocks between flash, SSDs and other storage media, including cloud-based archives.

What can you take away from this? For one, solid state technologies offer an attractive way to accelerate the speed of your critical database workloads. No surprise there. Used in moderation to complement fast spinning disks and high-capacity, bulk storage already in place, SSDs help you strike a nice balance between excellent response time and responsible spending. To establish and maintain that equilibrium in virtualized scenarios, you should accompany the new hardware with storage virtualization software – the device-independent type. This gives you the optimal means to assimilate flash/SSDs into a high-performance, well-tuned, continuously available environment. In this way, you can please the financial overseers as well as the database subscribers, not to mention all those responsible for its care and feeding - you included.

About the author:
Augie Gonzalez is director of product marketing for DataCore Software and has more than 25 years of experience developing, marketing and managing advanced IT products.  Before joining DataCore, he led the Citrix team that introduced simple, secure, remote access solutions for SMBs. Prior to Citrix, Gonzalez headed Sun Microsystems Storage Division’s Disaster Recovery Group. He’s held marketing / product planning roles at Encore Computers and Gould Computer Systems, specializing in high-end platforms for vehicle simulation and data acquisition.

Wednesday, 31 July 2013

Quorn Foods Announces “5 Nines” Availability and Improved SAP ERP Productivity with DataCore SANsymphony-V Storage Virtualization

Read: Quorn Foods Case Study

Leading food brand overhauls virtual infrastructure and optimizes Tier 1 applications: Takes Business Critical SAP ERP into the Virtual World and uses DataCore to reduce data mining times from 20 minutes to 20 seconds; plus enables Information Lifecycle Management via Auto Tiering.

Accelerate Business Applications"No one could fail to notice the dramatic leaps in performance that was now afforded by DataCore.”

Quorn Foods (http://www.quorn.com) has recently adopted DataCore SANsymphony-V software to achieve high availability, to turbocharge application performance and to implement an intelligent Information Lifecycle Management (ILM) data flow with structured auto-tiering.

Marlow Foods, better known as the owner of the Quorn brand, offers quality low-fat, meat-free food products to the discerning, health-conscious customer. The company employs 600 people across three UK sites; its Head of IT is Fred Holmes. Back in 2011, when it was sold off by a large parent company, Quorn had the opportunity to remap its entire existing physical server infrastructure, which was rapidly falling out of warranty. Fred notes:

"This was a three phrase project and had to be classified as a major systems overhaul that we were embarking on. In Phase 1, DataCore’s SANsymphony-V enabled smooth migration within a two-week period and dramatically increased IOPS, even with the high burden that virtual servers place when they are delivering thin client capabilities."

Phase 1: Server-side Virtualization Progresses into Greenfield Site with DataCore providing the centralized storage and 99.999% reliability:

They consulted their trusted IT partner and DataCore Gold Partner, Waterstons, to assist with the major infrastructure overhaul. With a greenfield site for virtualization, Fred and the assigned Waterstons project team provided a compelling financial analysis showing dramatic consolidation and resource savings. A working proof of concept was deployed to substantiate the findings, test that a Microsoft Remote Desktop Services (RDS) farm could support all applications for a test user group, and prove the benefits of server virtualization.

Two successful months later, the project team implemented full server-side virtualization with three additional R710 hosts, all Brocade Fibre Channel attached to a storage area network (SAN) to support the full VMware vSphere Enterprise feature set. In total, 30 workloads were virtualized into the new environment, allowing older physical servers to be retired. On the desktop side, a new RDS farm replaced 400 traditional desktops with thin client capabilities. DataCore’s SANsymphony-V solution provided the essential cost-effective centralized storage, running across two Dell T710 commodity servers. DataCore’s storage hypervisor provided one general-purpose, synchronously mirrored SAN pool of 7TB usable (across a total of 48 10k SAS spindles in MD1220 SAS-attached storage shelves) to deliver 99.999% reliability. The project team knew that the success of any robust, responsive VMware environment hinges on the abilities and performance of the storage infrastructure that sits beneath it. This was especially true in Quorn’s highly virtualized infrastructure, with users interacting directly with virtual RDS Session Hosts. From the business users’ perspective, the virtualized estate provided a turbocharged world.

Phase II – taking Business Critical ERP into the Virtual World and using DataCore to reduce data mining times from 20 minutes to 20 seconds:

Phase II covered virtualization of SAP Enterprise Resource Planning for the financial, HR, accounts and sales platforms. With around 8,500 outlets stocking the Quorn brand across the UK alone, Marlow Foods have an extremely high dependency on their SAP ERP servers to drive critical business advantages across all departments. The challenge was to integrate the current SAP physical servers into the virtualized environment, whilst maintaining their 99.999% reliability and not affecting existing virtual machines reliant on the SAN. To address this challenge, the project team added another R710 host to the cluster and a further 4TB of usable synchronously mirrored storage within a new storage pool dedicated entirely to SAP (across a further 48 10k SAS spindles), and began the process of rebuilding the SAP servers into the virtual infrastructure. This meant transitioning huge databases from the old physical environment. Proof would come at the end of the month, when database query volumes were traditionally at their highest and performance expectations had previously gone unmet amid erratic response times.

In fact, the data mining queries were returned within 20 seconds, compared to 20 minutes in the previous physical environment. This is in no small part down to the way that DataCore’s SANsymphony-V leverages disk resources, assigning I/O tasks to very fast server RAM and CPU to accelerate throughput and to speed up response when reading and writing to disk. And with the wholly mirrored configuration, continuous availability is afforded.

"Like all things in IT, dramatic improvements to the infrastructure remain invisible to the user who only notices when things go wrong. But in this instance, no one could fail to notice the dramatic leaps in performance that was now afforded," Fred notes.

Phase III: Enhancing the Virtualized Estate with Auto-Tiering:

With everything virtualized, Fred and the team gave themselves six months to reflect and monitor the new infrastructure before suggesting additional enhancements. What Fred suspected was that he could also achieve greater intelligence from the SAN itself. Simon Birbeck of Waterstons, one of the U.K.’s DataCore Master Certified Installation Engineers, designed a performance-enhancing model to automatically migrate data blocks to the most appropriate class of storage within the estate. Thinly provisioned SAN capacity was at around 80% utilization, but for 2013 planning Fred and the Waterstons team had allowed for 20% year-on-year growth, potentially stretching utilization to the maximum by the end of the year. Simon recommended switching to a three-tier SAN design to facilitate the best cascading practices of Information Lifecycle Management (ILM).

A red top tier comprises a new layer of SSD flash storage, designed to be always full and utilized by the most frequently read blocks for extremely fast response. A pre-existing amber mid-tier caters for average-use data blocks, served by commodity 10k SAS drives. Sitting beneath is a blue tier, the ‘catch-all’ layer for the least frequently accessed data, maintained on low-cost, high-capacity 7.2k SAS spindles.
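As a rough illustration of that three-tier placement, here is a Python sketch; the read-rate thresholds are invented, and the real product decides placement block by block inside the software rather than through user code:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    media: str

# Quorn's three-tier layout as described above.
TIERS = [
    Tier("red",   "SSD flash"),   # most frequently read blocks
    Tier("amber", "10k SAS"),     # average-use blocks
    Tier("blue",  "7.2k SAS"),    # least frequently accessed, catch-all
]

def place_block(reads_per_day: float) -> Tier:
    """Map a block's observed read rate to a tier (thresholds invented)."""
    if reads_per_day > 100:
        return TIERS[0]           # hot -> red
    if reads_per_day > 1:
        return TIERS[1]           # warm -> amber
    return TIERS[2]               # cold -> blue

print(place_block(500).name)      # red
print(place_block(0.1).name)      # blue
```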

Fred summarizes, "What Waterstons recommended was an intelligent usable form of ILM with DataCore’s SANsymphony-V at the front-end making the intelligent decision as to which blocks of data should be allocated where."

Indeed, SANsymphony-V has provided both strong reporting and accurate planning for data growth. Built-in diagnostics help to proactively identify when a problem is manifesting, changing the management role from reactive to proactive. For the future, Marlow Foods will look to expand on the high availability/business continuity environment afforded by SANsymphony-V by adding a further asynchronous replica at another site to further protect the SAP ERP environment. The scalability of SANsymphony-V brings a new level of comfort not possible with other forms of storage.

Fred takes the final word: "DataCore’s SANsymphony-V now reliably underpins our entire virtual estate. From a transformation perspective we have new levels of availability and enhanced decision making for both IT and the users."

To learn more about how DataCore can improve your critical business applications, please click here: http://pages.datacore.com/WP_VirtualizingBusinessCriticalApps.html

About Quorn Foods
Today, Quorn Foods is an independent company focused on creating the world’s leading meat-alternative business. Quorn Foods is headquartered in Stokesley, North Yorkshire and employs around 600 people across three UK sites. Launched nationally in 1995, the Quorn brand offers a wide range of meat-alternative products, made using proprietary Mycoprotein technology that uniquely delivers the taste and texture of meat to the increasing number of people who have chosen to reduce, replace or cut out their meat consumption and who still want to eat a normal, healthy diet.

Monday, 29 July 2013

The Need for SSD Speed, Tempered by Virtualization Budgets


Article from Virtualization Review By Augie Gonzalez: The Need for SSD Speed, Tempered by Virtualization Budgets

What IT department doesn't lust for the hottest new SSDs? You can just savor the look of amazement on users' faces when you amp up their systems with these solid state memory devices. Once-sluggish virtualized apps now peg the speed dial.

Then you wake up. The blissful dream is interrupted by the quarterly budget meeting. Your well-substantiated request to buy several terabytes of server-side flash is “postponed” -- that's finance's code word for REJECTED. Reading between the lines, they're saying, “No way we're spending that kind of money on more hardware any time soon.”

Ironically, the same financial reviewers recommend fast-tracking additional server virtualization initiatives to further reduce the number of physical machines in the data center. Seems they didn't hear the first part of your SSD argument. Server consolidation slows down mission-critical apps like SQL Server, Oracle, Exchange and SAP, to the point where their response times are unacceptable. Flash memory can buy back that quickness.

This not-so-fictional scenario plays out more frequently than you might guess. According to a recent survey of 477 IT professionals conducted by DataCore Software, it boils down to one key concern: storage-related cost. Here are some other findings:
  • Cost considerations are preventing organizations from adopting flash memory and SSDs in their virtualization roll-outs. More than half of respondents (50.2 percent) said they are not planning to use flash/SSD for their virtualization projects due to cost.
  • Storage-related costs and performance issues are the two most significant barriers preventing respondents from virtualizing more of their workloads: 43 percent said that increasing storage-related costs were a “serious obstacle” or “somewhat of an obstacle,” and 42 percent said the same about performance degradation or inability to meet performance expectations. 
  • When asked about what classes of storage they are using across their environments, nearly six in ten respondents (59 percent) said they aren't using flash/SSD at all, and another two in ten (21 percent) said they rely on flash/SSD for just 5 percent of their total storage capacity. 
Yet, there is a solution – one far more likely to meet with approval from the budget oversight committee, and still please the user community.

Rather than indiscriminately stuff servers full of flash, I'd suggest using software to share fewer flash cards across multiple servers in blended pools of storage. By blended I mean a small percentage of flash/SSD alongside your current mix of high-performance disks and bulk storage. An effective example of this uses hardware- and manufacturer-agnostic storage virtualization techniques packaged in portable software to dynamically direct workloads to the proper class (or tier) of storage. The auto-tiering intelligence constantly optimizes the price/performance yield from the balanced storage pool. It also thin provisions capacity so valuable flash space doesn't get gobbled up by hoarder apps.
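Thin provisioning is easy to picture in code: a volume advertises a large virtual size but consumes physical extents only as blocks are first written. Here is a toy Python sketch of that allocate-on-first-write behaviour (an invented class; writes spanning extent boundaries are ignored for brevity):

```python
class ThinVolume:
    """Toy thin-provisioned volume: advertise a large virtual size,
    allocate physical extents only when a region is first written."""
    EXTENT = 1024 * 1024  # 1 MiB extents (illustrative)

    def __init__(self, virtual_size: int):
        self.virtual_size = virtual_size
        self.extents = {}               # extent_index -> bytearray

    def write(self, offset: int, data: bytes):
        idx = offset // self.EXTENT
        if idx not in self.extents:     # first touch: allocate backing store
            self.extents[idx] = bytearray(self.EXTENT)
        start = offset % self.EXTENT
        self.extents[idx][start:start + len(data)] = data

    def physical_bytes(self) -> int:
        return len(self.extents) * self.EXTENT

vol = ThinVolume(virtual_size=100 * 1024**3)  # looks like 100 GiB to the host
vol.write(0, b"hello")
print(vol.physical_bytes())                   # only 1 MiB actually consumed
```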

Auto-Tiering

The dynamic data management scheme gets one additional turbo boost. The speed-up comes from advanced caching algorithms in the software, for both storing and retrieving disk/flash blocks. In addition to cutting input/output latencies in half (or better), you'll get back tons of space previously wasted on short-stroking hard disk drives (HDDs). In essence, you no longer need to overprovision disk spindles trying to accelerate database performance, nor do you need to overspend on SSDs.

Of course, there are other ways for servers to share solid state memories. Hybrid arrays, for example, combine some flash with HDDs to achieve a smaller price tag. There are several important differences between buying these specialized arrays and virtualizing your existing storage infrastructure to take advantage of flash technologies, not the least of which is how much of your current assets can be leveraged. You could propose to rip out and replace your current disk farm with a bank of hybrid arrays, but how likely is that to get the OK?

Instead, device-independent storage virtualization software uniformly brings auto-tiering, thin provisioning, pooling, advanced read/write caching and several replication/snapshot services to any flash/SSD/HDD device already on your floor. For that matter, it covers the latest gear you'll be craving next year including any hybrid systems you may be considering. The software gets the best use of available hardware resources at the least cost, regardless of who manufactured it or what model you've chosen.

Virtualizing your storage also complies with the greater corporate mandate to further virtualize your data center. It's one of the unusual times when being extra frugal pays extra big dividends.

No doubt you've been bombarded by the term software-defined data center, and more recently, software-defined storage. The latter has become a synonym for storage virtualization with a broader, infrastructure-wide scope; one spanning the many disk- and flash-based technologies, as well as the many nuances which differentiate various models and brands. Without getting lost in semantics, it boils down to using smart software to get your arms around, and derive greater value from, all the storage hardware assets at your disposal.

Sounds like a very reasonable approach given how quickly disk technologies and packaging turn over in the storage industry.

One word of caution: Any software-defined storage inextricably tied to a piece of hardware may not be what the label advertises.

Yes, the need for speed is very real, but so is the reality of tight funding. Others have successfully surmounted the same predicament by following the guidance above. It will surely ease the injection of flash/SSD technologies into your current environment, even under the toughest scrutiny. At the same time, you'll strike the difficult but necessary balance between your business requirements for fast virtualized apps and your very real budget constraints; without question, a most desirable outcome.

About the Author
Augie Gonzalez is director of product marketing for storage virtualization software developer DataCore Software.

Saturday, 20 July 2013

Best of the Tour de France 2013 Features DataCore's Storage Virtualization Mascot 'Corey'

Check out the Reuters photo coverage of the cycling event!
Cycling fan Dave Brown from Park City of the U.S. poses while waiting for the riders climbing the Alpe d'Huez mountain during the 172.5km eighteenth stage of the centenary Tour de France cycling race from Gap to l'Alpe d'Huez, in the French Alps, July 18, 2013.

REUTERS/Jacky Naegelen
http://sports.yahoo.com/photos/best-of-the-tour-de-france-2013-slideshow/cycling-fan-dave-brown-park-city-u-poses-photo-135555539.html


Thursday, 11 July 2013

One of Europe's largest construction projects linking Scandinavia to mainland Europe picks DataCore storage virtualization to tier data and speed critical applications

Femern A/S, the managing company behind one of the most ambitious engineering projects connecting Scandinavia to mainland Europe, the Fehmarnbelt Tunnel, has adopted DataCore's SANsymphony-V to auto-tier data and speed critical applications.


Tim Olsson, the IT manager gearing up to support the forthcoming €5.5 billion construction of the 18km immersed underwater tunnel, due for completion in 2021, said: "We are in the process of commencing four major construction contracts this summer to facilitate the start of the build of the Fehmarnbelt tunnel. The number of employees will escalate dramatically as we progress through the build, as will the volume of CAD-intensive documents, building designs and engineering specifications that will need to be accessed on the network."

Femern began the process of examining the present infrastructure and anticipating how it would scale as the construction progressed.

Within their Copenhagen data centre, Femern operated a Dell EqualLogic SAN that was reaching end of life: spare parts were becoming increasingly hard to obtain, and when replacement drives did arrive, there were frequent incidences of premature failure. The organisation had already adopted virtualisation, with critical apps running virtualised on VMware, but they appeared slow due to disk I/O bottlenecks as workloads contended for the struggling EqualLogic storage. This was therefore not the time to consider allocating more disks, but an opportunity for an entire overhaul to an alternative software-defined SAN and infrastructure.

"Not only was the EqualLogic box falling out of warranty and therefore coming with an increased high price tag, it seemed to have a lot of features that we never, or rarely, used. We felt certain that even if we considered an upgraded, latest EqualLogic box, that we would still be constrained by the hardware rigidity and lacking the impending scalability and flexibility that would be required to accommodate the various construction phases."

Femern consulted their day-to-day supplier of all IT solutions, COMM2IG, to make an official technical proposal on alternative solutions to run in Copenhagen and on the construction sites and exhibition centre at the mouth of the tunnel in Rødbyhavn.

COMM2IG examined the existing environment in detail. Femern were running VMware's ESX for server virtualisation, but availability, ease of management and automatic allocation of storage were sadly lagging behind. COMM2IG grasped the opportunity to recommend the installation of the SANsymphony-V software storage hypervisor to support the latest version of VMware's vSphere, delivering their tier-1 applications virtually. To help overcome performance issues, COMM2IG recommended that Femern incorporate flash technology from Fusion-io, Inc. for their most important applications, to achieve far faster speeds than spinning disks and to overcome performance bottlenecks.

The software-based solution - offering performance, flexibility and investment protection: Two SANsymphony-V nodes were implemented on two HP DL380 G7 servers. Each server was provisioned with two CPUs, 96GB of RAM for a 'super caching' performance boost, and 320GB Fusion-io PCIe ioDrive2 cards for flash integration in the storage pool as part of a hybrid, automatically tiered storage configuration.

VMware's server virtualisation was upgraded to vSphere 5.1. Overall, the environment offered 50TB of mirrored storage based on HP direct-attached SAS and SATA drives augmented by a flash storage layer. Use of the Fusion-io acceleration cards would be optimised through DataCore's auto-tiering capabilities: reserving the more expensive flash storage layer for the most demanding applications, and ensuring that other, less accessed data be relocated automatically to Femern's existing SAS- and SATA-based storage, enabling a 1/2/3 auto-tiered environment.

Results: Installation commenced with the help of COMM2IG and, twelve months down the line, Femern are well placed to comment on the success of their software-defined data centre. Most noticeable is the increased performance of their SQL applications, which previously struggled with latency issues.

"From a user perspective, they used to experience slow response times and a performance lag from their SQL & Exchange applications running virtually on VMware. Today, and in the future, with DataCore in the background, the applications appear robust, instantaneous and seamless. We have achieved this without the cost prohibitive price tag of pure flash. That's what any IT department strives to achieve."

Olsson summarises: "As the data volumes increase up to ten fold as construction starts in 2015, intelligent allocations to flash will increase the lifespan of the Fusion-ioDrive and offset the overall cost of ownership, delivering less critical data to the most cost effective tier that can deliver acceptable performance. With this multi-faceted approach to storage allocation using DataCore's SANsymphony-V solution, Femern is able to manage all devices under one management interface; regardless of brand and type.  Given that DataCore has been robustly supporting VMware environments for many years, SANsymphony-V is viewed as the perfect complement, offering enhanced storage management and control intelligence. For Femern, this has manifested in reduction of downtime; adding new VMs; taking backups; migrating data and expanding capacity can all now be done without outages.

"What we have achieved here with DataCore storage virtualisation software sets us on the road to affordable, flexible growth to eliminate storage related downtime. Add the blistering speed of Fusion-io acceleration and we have created a super performing, auto tiered storage network, that does as the tunnel itself will do; connects others reliably, super fast and without stoppages," concludes Olsson.

From StorageNewsletter: https://www.storagenewsletter.com/news/customer/femern-fehmarnbelt-tunnel-datacore

A Defining Moment for the Software-Defined Data Center

Original post:  http://www.storagereview.com/a_defining_moment_for_the_softwaredefined_data_center

For some time, enterprise IT heads heard the phrase, “get virtualized or get left behind,” and after kicking the tires, the benefits couldn’t be denied and the rush was on. Now, there’s a push to create software-defined data centers. However, there is some trepidation as to whether these ground-breaking, more flexible environments can adequately handle the performance and availability requirements of business-critical applications, especially when it comes to the storage part of the equation. While decision-makers have had good reason for concern, they now have an even better reason to celebrate as new storage virtualization platforms have proven to overcome these I/O obstacles.
Software-defined Storage

Just as server hypervisors provided a virtual operating platform, a parallel approach to storage is quickly transforming the economics of virtualization for organizations of all sizes by offering the speed, scalability and continuous availability necessary to achieve the full benefits of software-defined data centers. Specifically, these advantages are widely reported:
  • Elimination of storage-related I/O bottlenecks in virtualized data centers
  • Harnessing flash storage resources efficiently for even greater application performance
  • Ensuring fast and always available applications without a substantial storage investment
Performance slowdowns caused by I/O bottlenecks and downtime attributed to storage-related outages are two of the foremost reasons why enterprises have refrained from virtualizing their Tier-1 applications such as SQL Server, Oracle, SAP and Exchange. This comes across clearly in the recent Third Annual State of Virtualization Survey conducted by my company, which showed that 42% of respondents noted performance degradation or inability to meet performance expectations as an obstacle preventing them from virtualizing more of their workloads. Yet effective storage virtualization platforms are now successfully overcoming these issues by using device-independent adaptive caching and performance-boosting techniques to absorb wildly variable workloads, enabling applications to run faster virtualized.
To further increase Tier-1 application responsiveness, companies often spend excessively on flash memory-based SSDs. The Third Annual State of Virtualization Survey also reveals that 44% of respondents found disproportionate storage-related costs were an obstacle to virtualization. Again, effective storage virtualization platforms are now providing a solution with such features as auto-tiering. These enhancements optimize the use of these more premium-priced resources alongside more modestly priced, higher capacity disk drives.
Such an intelligent software platform constantly monitors I/O behavior and can intelligently auto-select between server memory caches, flash storage and traditional disk resources in real-time. This ensures that the most suitable class or tier of storage device is assigned to each workload based on priorities and urgency. As a result, a software defined data center can now deliver unmatched Tier-1 application performance with optimum cost efficiency and maximum ROI for existing storage.
Once I/O intensive Tier-1 applications are virtualized, the storage virtualization platform ensures high availability. It eliminates single points of failure and disruption through application-transparent physical separation, stretched across rooms or even off-site with full auto-recovery capabilities designed for the highest levels of business continuity. The right platform can effectively virtualize whatever storage is present, whether direct-attached or SAN-connected, to achieve a robust and responsive shared storage environment necessary to support highly dynamic, virtual IT environments.
Yes, the storage virtualization platform is a defining moment for the software-defined data center. The performance, speed and high availability required for mission-critical databases and applications in a virtualized environment have been realized. Barriers have been removed, and there is a clear, supported path for realizing greater cost efficiency. Still, selecting the right platform is critical to a data center. Technology that is full-featured and has been proven “in the field” is essential. Also, it’s important to go with an independent, pure software virtualization solution in order to avoid hardware lock-in and to take advantage of future storage developments that will undoubtedly occur.
George Teixeira - Chief Executive Officer & President, DataCore Software
George Teixeira is CEO and president of DataCore Software, the premier provider of storage virtualization software. The company’s software-as-infrastructure platform solves the big problem stalling virtualization initiatives by eliminating storage-related barriers that make virtualization too difficult and too expensive.

Paul Murphy Joins DataCore as Vice President of Worldwide Marketing to Build on Company’s Software-Defined Storage Momentum

Former VMware and NetApp Marketing and Sales Director to Spearhead Strategic Marketing and Demand Generation to Drive Company’s Growth and Market Leadership in Software-Defined Storage

“The timing is perfect. DataCore has just updated its SANsymphony-V storage virtualization platform and it is well positioned to take advantage of the paradigm shift and acceptance of software-defined storage infrastructures,” said Murphy. “After doing the market research and getting feedback from numerous customers, it is clear to me that there is a large degree of pent-up customer demand. Needless to say, I’m eager to spread the word on DataCore’s value proposition and make a difference in this exciting and critical role.”

Paul Murphy joins DataCore Software as the vice president of worldwide marketing. Murphy will oversee DataCore’s demand generation, inside sales and strategic marketing efforts needed to expand and accelerate the company’s growth and presence in the storage and virtualization sectors. He brings to DataCore a proven track record and a deep understanding of virtualization, storage technologies and the pivotal forces impacting customers in today’s ‘software-defined’ world. Murphy will drive the company’s marketing organization and programs to fuel sales for DataCore’s acclaimed storage virtualization software solution, SANsymphony-V.

“Our software solutions have been successfully deployed at thousands of sites around the world and now our priority is to reach out to a broader range of organizations that don’t yet realize the economic and productivity benefits they can achieve through the adoption of storage virtualization and SANsymphony-V,” said DataCore Software’s Chief Operating Officer, Steve Houck. “Murphy brings to the company a fresh strategic marketing perspective, the ability to simplify our messaging, new ways to energize our outbound marketing activities and the drive to expand our visibility and brand recognition around the world.”

With nearly 15 years of experience in the technology industry, Murphy possesses a diverse range of skills in areas including engineering, services, sales and marketing, which will be instrumental in overseeing DataCore’s marketing activities around the globe. He was previously Director of Americas SMB Sales and Worldwide Channel Development Manager at VMware, where he developed go-to-market strategies and oversaw direct and inside channel sales teams in both domestic and international markets.

Prior to that, Murphy was senior product marketing manager at NetApp, focusing on backup and recovery solutions and their Virtual Tape Library product line. In this role, Murphy led business development activities, sales training, compensation programs and joint-marketing campaigns. An excellent communicator, he has been a keynote speaker at numerous industry events, trade shows, end-user seminars, sales training events, partner/reseller events and webcasts. Before moving into sales and marketing, Murphy had a successful career in engineering.

Monday, 8 July 2013

DataCore Updates and Advances its Proven SANsymphony-V Storage Virtualization Platform

DataCore Software Builds on its Software-Defined Storage Lead with Enhancements to its Proven SANsymphony-V Storage Virtualization Platform
 
DataCore continues to advance and evolve its device-independent storage management and virtualization software, while maintaining focus on empowering IT users to take back control of their storage infrastructure. To that end, the company announced today significant enhancements to the comprehensive management capabilities within version R9 of its SANsymphony™-V storage virtualization platform.

 New advancements in SANsymphony-V include:
  • Wizards to provision multiple virtual disks from templates
  • Group commands to manage storage for multiple application hosts
  • Storage profiles for greater control and auto-tiering across multiple levels of flash, solid state (SSDs) and hard disk technologies
  • A new database repository option for recording and analyzing performance history and trends
  • Greater configurability and choices for incorporating high-performance “server-side” flash technology and cost-effective network attached storage (NAS) file serving capabilities
  • Preferred snapshot pools to simplify snapshot management and prevent snapshot activity from impacting production work
  • Improved remote replication and connectivity optimizations for faster and more efficient performance
  • Support for higher speed 16Gbit Fibre Channel networking and more.
For more details, please read: What's New in SANsymphony-V R9.0.3

“Storage is undergoing a sea-change today and traditional hardware manufacturers are suffering because they are in catch-up mode to meet the ‘new world order’ for software-defined storage where automation, fast flash technologies and hardware interchangeability are standard,” said George Teixeira, co-founder, president and CEO of DataCore Software. “We have listened to our customers and stayed true to our vision. With the latest release of SANsymphony-V, we are well-positioned to help organizations manage growth and leverage existing investments, while making it simple to incorporate current and future innovations. Our software’s features and flexibility empower CIOs and IT admins to overcome the many storage challenges faced in a dynamic virtual world.”

Real-World Software-Defined Storage: Customer-driven Enhancements Overcome Challenges
Many of the new features which extend the scope and breadth of storage management would not even occur to companies just developing a software-defined package. They are the product of DataCore’s 15 years of customer feedback and field-proven experience in broad scenarios across the globe.

The enhancements introduced in the latest version of SANsymphony-V take on major challenges faced by large scale IT organizations and more diverse mid-size data centers. Aside from confronting explosive storage growth (multi-petabyte disk farms), organizations are experiencing massive virtual machine (VM) sprawl where provisioning, partitioning and protecting disk space taxes both staff and budget. Problems are further aggravated by the insertion of flash technologies and SSDs used to speed up latency-sensitive workloads. The time and resource demands required to manage a broadening diversity of different storage models, disk devices and flash technologies – even when standardized with a single manufacturer – are a growing burden for organizations already struggling to meet application performance needs on limited budgets.

The bottom line is that companies are forced to confront many unknowns in terms of storage. With traditional storage systems, the conventional practice has been to oversize and overprovision storage with the hope that it will meet new and unpredictable demands, but this drives up costs and too often fails to meet performance objectives. As a result, companies have become smarter and have realized that it is no longer feasible or sensible to simply throw expensive, purpose-built hardware at the problem. Companies today are demanding a new level of software flexibility that endures over time and adds value over multiple generations and types of hardware devices. What organizations require is a strategic – rather than an ad hoc – approach to managing storage.

Notable Advances with SANsymphony-V Update 9.0.3
 

Thursday, 27 June 2013

Virtualization World: DataCore SANsymphony-V Brings Smooth Sailing, Performance and Cost Savings for United Arab Shipping Company

http://virtualizationworld365.info/news_full.php?id=27961

DataCore’s SANsymphony-V combines with the power of Solid State Disk to provide ultra high I/O and enterprise management capabilities.

DataCore Software announced that the world’s third-largest shipping organisation, United Arab Shipping Company (UASC), has dramatically increased performance using DataCore’s SANsymphony-V and RamSan SSDs.

Ashraf Jamal, UASC’s Data Centre Manager in Dubai, observes: “Whilst SSDs can be up to 100 times faster than SAS hard disk drives, there is a high price tag for this performance – up to 20 times higher cost per GB. However, what we have seen by using DataCore SANsymphony-V to auto-tier RamSan and Nexsan is that in reality we save by requiring significantly less storage hardware to house, manage and cool.”

UASC is an established colossus of the shipping world, covering over 200 destinations globally via an expanding fleet of containerized, conventional and temperature-controlled cargo vessels, including three recently launched ‘green’ super containerships - the largest and most advanced such vessels in the world. Back in 2010, in line with its continued plan for growth, UASC initiated a full overhaul of global IT systems, moving data centers from Singapore to Dubai and transitioning its entire network to the latest high-performance integrated container carrier information system, ‘TRUST’, to automate business operations and provide lightning-fast communication with the company’s fleet. Within the ‘TRUST’ system sit fleet management, company email, the HR system, the AMOS fleet maintenance system and accounts, together with over 22,000 outlets that allow users to book shipments, source fleet bills and obtain rates.

Ashraf Jamal reflects on user feedback at the time, which pointed towards variable and slow application performance: “Providing the infrastructure behind one of the world’s largest commercial shipping fleets requires lightning-fast performance to reduce processing time and retain competitive edge, whilst providing enhanced management and speed of recovery for our core applications. To achieve this, we consulted ProTechnology (ProTech), our trusted advisors of over four years.”

Combined SSD/SANsymphony-V solution provides an affordable ultra high speed appliance with auto tiering, mirroring, snapshotting and replication:


Ali Saadawi, Senior Account Manager at DataCore Gold Partner ProTech, was instrumental in the design phase at UASC: “Working with UASC, we devised a high-performance computing infrastructure using TMS’s RamSan-630 rackmount appliance working together with DataCore’s SANsymphony-V solution running on two Dell PowerEdge R710 servers. The resultant combination creates an ultra-high-speed appliance with auto-tiering, mirroring, snapshotting and replication in an affordable solution. From the outset, we projected that the combination of RamSan and DataCore would provide a 40% increase over the performance of RamSan alone - already one of the world’s fastest SSDs.”

Also within the storage area network, ProTech installed three Nexsan SATABeast 2s, all connected over the Fibre Channel network through QLogic HBAs.

In its rawest form in a test environment, the RamSan-630 proved capable of achieving a powerful 1,200,000 IOPS (I/O operations per second) from the SSDs, meaning data transfer rates were hundreds of times faster than traditional mechanical hard disks. The team at UASC were able to combine that raw performance with DataCore SANsymphony-V’s unique caching algorithms to harness further dramatic jumps in I/O. (The caching within SANsymphony-V essentially recognises I/O patterns to anticipate which blocks will be read next, so requests can be fulfilled quickly from memory at electronic speeds.) Of even greater importance to the team was the ability to provide storage efficiency by intelligently auto-tiering data to cope with performance demands. With DataCore’s SANsymphony-V in place, the software dynamically chooses between allocating data to the RamSan’s high-end SSDs and the lower-cost, higher-capacity Nexsan SATABeast 2 drives. It achieves this by monitoring I/O behaviour and determining frequency of use, then dynamically moving blocks of information to the most suitable class or tier of storage device. SANsymphony™-V software thus automatically promotes UASC’s most frequently used blocks to the fastest tier, whereas the least frequently used blocks get “demoted” to the slowest tier.
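The promote/demote cycle described here boils down to ranking blocks by access frequency over a monitoring window and fitting the hottest set into the fast tier. A minimal illustrative sketch in Python - invented names and capacities, not DataCore's implementation:

```python
from collections import Counter

def retier(access_counts: Counter, hot_capacity: int):
    """Assign the most frequently accessed blocks to the fast (SSD) tier
    and the remainder to the slow (SATA) tier."""
    ranked = [blk for blk, _ in access_counts.most_common()]
    hot = set(ranked[:hot_capacity])     # promoted to the RamSan SSD tier
    cold = set(ranked[hot_capacity:])    # demoted to the SATABeast tier
    return hot, cold

# Toy access counts gathered over a monitoring window.
counts = Counter({"blk_a": 9000, "blk_b": 4200, "blk_c": 15, "blk_d": 2})
hot, cold = retier(counts, hot_capacity=2)
print(hot)   # {'blk_a', 'blk_b'}
print(cold)  # {'blk_c', 'blk_d'}
```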

Now in a production environment, the team has increased the capacity licence for SANsymphony-V, with both snapshot and synchronous mirroring becoming favourite features. The Continuous Data Protection (CDP) feature allows UASC to return to an earlier point in time without taking explicit backups or interrupting applications, whilst logging and timestamping I/Os. This is helpful when UASC needs to quickly and seamlessly recover, in minutes, to a point in time to undo unintended data modifications, or to recover from an application bug. Previously, Ashraf notes, this was a laborious and time-consuming process that could take six or eight hours of locating and restoring tapes and recovering tables. Unsurprisingly for an organisation that takes high availability very seriously, SANsymphony-V’s real-time synchronous I/O replication eliminates single points of failure while making UASC’s mirrored virtual disks behave like one multi-ported shared drive.

Ashrif is pleased to endorse DataCore’s SANsymphony-V in a high performance computing environment. 
“Now our HPC environment is affordable, secure and easy to maintain and our TRUST and core applications certainly run faster and are more manageable through our unified storage management and virtualization layer, thanks to SANsymphony-V.”

And for the future, UASC plan to add replication to their Kuwait corporate head office, some 1000km away  for Disaster Recovery, but that’s a different chapter!

Thursday, 20 June 2013

United Arab Shipping Company uses DataCore’s SANsymphony-V storage virtualization software to increase performance and lower costs

DataCore’s SANsymphony-V combines with the power of Solid State Disk to provide ultra high I/O and enterprise management capabilities. 

DataCore Software announced that the world’s third-largest shipping organisation, United Arab Shipping Company (UASC), has dramatically increased performance using DataCore’s SANsymphony-V and RamSan SSDs.


Ashraf Jamal, UASC’s Data Centre Manager in Dubai, observes: “Whilst SSDs can be up to 100 times faster than SAS hard disk drives, there is a high price tag for this performance – up to 20 times higher cost per GB. However, what we have seen by using DataCore SANsymphony-V to auto-tier between RamSan and Nexsan is that in reality we save by requiring significantly less storage hardware to house, manage and cool.”

UASC is an established colossus of the shipping world, covering over 200 destinations globally via an expanding fleet of containerised, conventional and temperature-controlled cargo vessels, including three recently launched ‘green’ super containerships – the largest and most advanced such vessels in the world. Back in 2010, in line with its continued plan for growth, UASC initiated a full overhaul of its global IT systems, moving data centres from Singapore to Dubai and transitioning its entire network to the latest high-performance integrated container carrier information system, ‘TRUST’, to automate business operations and provide lightning-fast communication with the company’s fleet. Within the ‘TRUST’ system sit fleet management, company email, the HR system, the AMOS fleet maintenance system and accounts, together with over 22,000 outlets that allow users to book shipments, source fleet bills and obtain rates.

Ashraf Jamal reflects on the user feedback at the time, which pointed towards variable and slow application performance: “Providing the infrastructure behind one of the world’s largest commercial shipping fleets requires lightning fast performance to reduce processing time and retain competitive edge, whilst providing enhanced management and speed of recovery for our core applications. To achieve this, we consulted our trusted advisors of over four years, ProTechnology (ProTech).”

Combined SSD/SANsymphony-V solution provides an affordable ultra high speed appliance with auto tiering, mirroring, snapshotting and replication: 

Ali Saadawi, Senior Account Manager at DataCore Gold Partner ProTech, was instrumental in the design phase at UASC: “Working with UASC, we devised a high performance computing infrastructure using TMS’s RamSan-630 Rackmount appliance working together with DataCore’s SANsymphony-V solution running on two Dell PowerEdge R710 servers. The resultant combination creates an ultra high speed appliance with auto tiering, mirroring, snapshotting and replication in an affordable solution. From the outset, we projected that the combination of RamSan and DataCore would provide a 40% increase in performance over RamSan alone – already one of the world’s fastest SSDs.”

Also within the Storage Area Network, ProTech installed three Nexsan SATABeast 2s, all connected over the Fibre Channel network through QLogic HBAs.

In its rawest form in a test environment, the RamSan-630 proved capable of achieving a powerful 1,200,000 IOPS (input/output operations per second) from the SSDs, meaning data transfer rates hundreds of times faster than traditional mechanical hard disks. The team at UASC were able to combine that raw performance with SANsymphony-V’s unique caching algorithms to harness further dramatic jumps in I/O. (The caching within SANsymphony-V essentially recognises I/O patterns to anticipate which blocks will be read next, so that requests can be fulfilled quickly from memory at electronic speeds.) Of even greater importance to the team was the ability to improve storage efficiency by intelligently auto-tiering data to cope with performance demands. With SANsymphony-V in place, the software dynamically chooses between allocating data to the RamSan’s high-end SSDs and the lower-cost, higher-capacity Nexsan SATABeast 2 drives. It achieves this by monitoring I/O behaviour, determining frequency of use, and then dynamically moving blocks of information to the most suitable class or tier of storage device. SANsymphony-V therefore automatically promotes UASC’s most frequently used blocks to the fastest tier, while the least frequently used blocks are “demoted” to the slowest tier.
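
As a rough illustration of the promote/demote mechanism just described – and only an illustration, with invented tier names, thresholds and data structures rather than anything from SANsymphony-V’s internals – a heat-based tiering loop can be sketched in a few lines of Python:

```python
# Minimal sketch of heat-based auto-tiering, as described above.
# Tier names, thresholds and structures are hypothetical.
from collections import defaultdict

class AutoTierer:
    def __init__(self, tiers):
        self.tiers = tiers             # fastest -> slowest, e.g. ["ssd", "sas", "sata"]
        self.heat = defaultdict(int)   # block_id -> I/O count this period
        self.placement = {}            # block_id -> current tier index

    def record_io(self, block_id):
        """Called on every read/write to build the block's heat profile."""
        self.heat[block_id] += 1

    def rebalance(self, hot=100, cold=5):
        """Periodically promote hot blocks and demote cold ones by one tier."""
        for block_id, hits in self.heat.items():
            tier = self.placement.get(block_id, len(self.tiers) - 1)
            if hits >= hot and tier > 0:
                self.placement[block_id] = tier - 1   # promote towards SSD
            elif hits <= cold and tier < len(self.tiers) - 1:
                self.placement[block_id] = tier + 1   # demote towards SATA
        self.heat.clear()                             # start a fresh measuring period

tierer = AutoTierer(["ramsan_ssd", "sas", "sata"])
for _ in range(150):
    tierer.record_io("hot_block")     # heavily used block
tierer.rebalance()
print(tierer.placement["hot_block"])  # 1: moved one tier up from SATA
```

A production engine migrates at sub-LUN block granularity and also ages out blocks that were never touched in a period; the point here is only the promote/demote decision itself.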

Now in a production environment, the team has increased the capacity licence for SANsymphony-V, with both snapshot and synchronous mirroring becoming favourite features. The Continuous Data Protection (CDP) feature allows UASC to return to an earlier point in time without taking explicit backups or interrupting applications, by logging and timestamping I/Os. This is helpful when UASC needs to recover quickly and seamlessly, in minutes, to a point in time to undo unintended data modifications or to recover from an application bug. Previously, Ashraf notes, this was a laborious and time-consuming process that could take six or eight hours of locating and restoring tapes and recovering tables. Unsurprisingly for an organisation that takes high availability very seriously, SANsymphony-V’s real-time synchronous I/O replication eliminates single points of failure while making UASC’s mirrored virtual disks behave like one multi-ported shared drive.
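
Conceptually, CDP of this kind is a timestamped write journal that can be replayed up to any chosen moment. The toy sketch below illustrates that idea only – the journal format and names are invented, not DataCore’s actual implementation:

```python
# Toy sketch of Continuous Data Protection: every write is journalled
# with a timestamp, so a disk image can be rebuilt as of any moment.
import time

class CDPJournal:
    def __init__(self):
        self.entries = []   # (timestamp, block_id, data), in write order

    def write(self, block_id, data):
        """Log and timestamp each I/O as it happens; no explicit backup needed."""
        self.entries.append((time.time(), block_id, data))

    def restore_to(self, point_in_time):
        """Replay writes up to the chosen instant, ignoring later (unwanted) changes."""
        image = {}
        for ts, block_id, data in self.entries:
            if ts > point_in_time:
                break               # entries are chronological, so stop here
            image[block_id] = data
        return image

journal = CDPJournal()
journal.write("block-1", "good data")
checkpoint = time.time()
journal.write("block-1", "accidental overwrite")
print(journal.restore_to(checkpoint))   # {'block-1': 'good data'}
```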

Ashraf is pleased to endorse DataCore’s SANsymphony-V in a high performance computing environment. “Now our HPC environment is affordable, secure and easy to maintain, and our TRUST and core applications certainly run faster and are more manageable through our unified storage management and virtualization layer, thanks to SANsymphony-V.”

And for the future, UASC plans to add replication to its Kuwait corporate head office, some 1,000 km away, for disaster recovery – but that’s a different chapter!

Friday, 14 June 2013

DataCore Continues to Advance its Proven Software-defined Storage and Updates SANsymphony-V

“Storage is undergoing a sea-change today and traditional hardware manufacturers are suffering because they are in catch-up mode to meet the ‘new world order’ for software-defined storage where automation, fast flash technologies and hardware interchangeability are standard,” said George Teixeira, co-founder, president and CEO of DataCore Software. “We have listened to our customers and stayed true to our vision. With the latest release of SANsymphony-V, we are well-positioned to help organizations manage growth and leverage existing investments, while making it simple to incorporate current and future innovations. Our software features and flexibility empowers CIOs and IT admins to overcome the many storage challenges faced in a dynamic virtual world.”

Amid all the talk and future-looking promises of software-defined storage from hardware-biased manufacturers, DataCore Software has delivered real-world solutions to thousands of customers worldwide. DataCore continues to advance and evolve its device-independent storage management and virtualization software, while maintaining its focus on empowering IT users to take back control of their storage infrastructure. To that end, DataCore has just announced a number of significant enhancements to the comprehensive management capabilities within version R9 of its SANsymphony™-V storage virtualization platform.

New advancements in SANsymphony-V include:
  • Wizards to provision multiple virtual disks from templates 
  • Group commands to manage storage for multiple application hosts 
  • Storage profiles for greater control and auto-tiering across multiple levels of flash, solid state (SSDs) and hard disk technologies 
  • A new database repository option for recording and analyzing performance history and trends 
  • Greater configurability and choices for incorporating high-performance “server-side” flash technology and cost-effective network attached storage (NAS) file serving capabilities 
  • Preferred snapshot pools to simplify and segregate snapshots from impacting production work 
  • Improved remote replication and connectivity optimizations for faster and more efficient performance 
  • Support for higher speed 16Gbit Fibre Channel networking and more.

Real-World Software-Defined Storage: Customer-driven Enhancements Overcome Challenges
Many of the new features which extend the scope and breadth of storage management would not even occur to companies just developing a software-defined package. They are the product of DataCore’s 15 years of customer feedback and field-proven experience in broad scenarios across the globe.

The enhancements introduced in the latest version of SANsymphony-V take on major challenges faced by large scale IT organizations and more diverse mid-size data centers. Aside from confronting explosive storage growth (multi-petabyte disk farms), organizations are experiencing massive virtual machine (VM) sprawl where provisioning, partitioning and protecting disk space taxes both staff and budget. Problems are further aggravated by the insertion of flash technologies and SSDs used to speed up latency-sensitive workloads. The time and resource demands required to manage a broadening diversity of different storage models, disk devices and flash technologies – even when standardized with a single manufacturer – are a growing burden for organizations already struggling to meet application performance needs on limited budgets.

The bottom line is that companies are forced to confront many unknowns in terms of storage. With traditional storage systems, the conventional practice has been to oversize and overprovision storage with the hope that it will meet new and unpredictable demands, but this drives up costs and too often fails to meet performance objectives. As a result, companies have become smarter and have realized that it is no longer feasible or sensible to simply throw expensive, purpose-built hardware at the problem. Companies today are demanding a new level of software flexibility that endures over time and adds value over multiple generations and types of hardware devices. What organizations require is a strategic – rather than an ad hoc – approach to managing storage.

SANsymphony-V is a strategic productivity solution that works infrastructure-wide across many storage hardware brands and models. Its auto-tuning cache and auto-tiering software maximize the use of available CPU, memory and disk resources to dramatically increase overall storage performance, which translates into faster, more responsive applications...

Monday, 10 June 2013

Danish tunnel builder Femern opts for DataCore storage virtualisation

By Anthony Adshead, ComputerWeekly.com

The organisation overseeing construction of the 18km €5.5bn Fehmarnbelt tunnel between Denmark and Germany has deployed DataCore storage virtualisation software in place of its existing Dell EqualLogic array, cutting disk costs by around 75%.
Danish state-owned Femern’s IT systems will be used by up to 150 engineering staff directly, as well as numerous consultants and contractors who need access via VMware ESX to CAD drawings, specifications and workflow systems for the project, which will complete in 2021.
In preparation for commencement of construction work on the underwater tunnel and an expected expansion of data, Femern examined its systems and found its existing EqualLogic iSCSI SAN prone to I/O bottlenecks and reaching end of life.
Tim Olsson, IT manager at Femern, said the organisation did not consider upgrading the EqualLogic array because of its complexity and the cost of buying disk.
“The EqualLogic array was at the end of its life and came with an increasingly high price tag,” he said. “We didn’t buy another one because it was too complex for our needs, with features we did not use, and it locked us into buying only disks from EqualLogic.”
Femern consulted with its IT partner COMM2IG, which recommended DataCore’s SANsymphony-V storage software with Fusion-io PCIe server flash to boost performance for access to tier 1 applications.
Two DataCore SANsymphony-V nodes were implemented on two HP DL380 G7 servers. Each server has two CPUs and 96GB of RAM, plus a 320GB Fusion-io PCIe ioDrive2 card. Overall, there is 50TB of storage capacity based on HP direct-attached storage in SAS and SATA drives.
With the flash layer and the two spinning disk types there are three tiers of storage, between which DataCore automatically migrates data according to use characteristics to ensure data is matched as best as possible to the cost of storage it resides on.
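
Read one way, “matched to the cost of storage” means choosing the cheapest tier that still satisfies a workload’s performance need. A hypothetical sketch of that decision, with invented cost and IOPS figures rather than anything from Femern’s configuration:

```python
# Hypothetical sketch: pick the cheapest tier that can still sustain
# the workload's required I/O rate. All figures are illustrative.
TIERS = [
    # (name, cost per GB in arbitrary units, sustainable IOPS)
    ("fusion_io_flash", 20.0, 100_000),
    ("sas_disk",         2.0,   1_500),
    ("sata_disk",        1.0,     300),
]

def cheapest_adequate_tier(required_iops):
    """Scan tiers from cheapest upward; return the first that keeps up."""
    for name, cost, iops in sorted(TIERS, key=lambda t: t[1]):
        if iops >= required_iops:
            return name
    return TIERS[0][0]   # nothing cheap suffices: use the fastest tier

print(cheapest_adequate_tier(500))      # sas_disk
print(cheapest_adequate_tier(50_000))   # fusion_io_flash
```
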
Software-only storage adds flexibility
DataCore is a software-only storage product that customers can install on any suitable server hardware. It provides storage virtualisation functions that can pool disk on direct-attached storage or commodity or legacy arrays to create shared storage.
The market is dominated by suppliers that sell storage hardware bundled with their own controller software and operating systems (OS). Software products aim to break that link by offering storage software that can be deployed on commodity servers with standard disk drives to cut costs.
That was one of the chief draws of DataCore for the Femern IT department, according to Olsson.
“With DataCore, we can be more flexible in where we get our disks from," he said. "We can buy cheaper, slow disk or faster disk and make it available to our apps with DataCore. It’s a seamless way of scaling capacity. So, for example, when the HP arrays reach end of life, it will be a fluent process to migrate to the next physical media.”
Olsson estimated that buying DataCore plus HP direct-attached disk cost 50,000 Danish Kroner (about £6,000) instead of the DKr 300,000 (£35,000) a new EqualLogic array would have been.
Did Femern consider the possible extra work involved in a software storage product compared with a factory-produced array with in-built controller software?
“It took some time to set DataCore up with the switches and so on, but once set up it’s easy to add disk and make it available to the apps,” said Olsson.

Featured SlideShow: Virtualization's Top Challenges: Storage Performance, Costs

Storage costs have increased


Most IT organizations are familiar with virtualization, yet many struggle with storage performance and cost issues. A new survey of 477 IT professionals conducted by DataCore Software, a provider of storage virtualization software, finds that 42 percent have problems with storage performance and costs. Additionally, 51 percent report that their storage budgets remained the same year-over-year, while 20 percent said they were reduced. Only 30 percent said their storage budgets grew in 2013. At the same time, 52 percent said storage now accounts for more than 25 percent of their virtualization budget.

Putting all these numbers together highlights an opportunity for the channel in the form of helping IT organizations identify ways to manage storage more efficiently at a time when costs are rising and new initiatives involving big data are just getting off the ground. Among those opportunities are the emergence of flash storage and solid-state drives (SSDs), which the survey finds have seen limited adoption, and cloud storage, another much-hyped technology that has yet to see broad adoption. In both cases, adoption seems inevitable once channel partners show customers how it can be done cost-effectively. Here are key takeaways from the study.

Full story:
http://www.channelinsider.com/virtualization/slideshows/virtualizations-top-challenges-storage-performance-costs/

Friday, 7 June 2013

DataCore Software Named Among Top Finalists for the 2013 Microsoft Partner of the Year Award

DataCore Software, a premier provider of storage virtualization software, today announced it has been selected as a finalist for the Microsoft 2013 Partner of the Year Award in the server platform category.

“Since DataCore™ was founded 15 years ago, its storage virtualization solutions have been written to work solely on Microsoft-based platforms,” said Carlos M. Carreras, vice president of alliance & business development at DataCore Software. “Our SANsymphony-V storage virtualization platform helps Microsoft customers to gain the most from their storage investments, which is an absolutely critical component to any organization’s data center. This award validates the value that DataCore can provide to those who demand the highest performance from their IT infrastructure.”

The Microsoft Partner of the Year Awards recognize Microsoft partners that have developed and delivered exceptional Microsoft-based solutions during the past year.

Wednesday, 5 June 2013

Need Faster Enterprise Apps and Continuous Availability? Storage Virtualization Done Right Accelerates and Protects Tier 1 Applications

What’s holding you back from virtualizing business-critical applications like Oracle, VDI, SAP, Exchange, SQL Server and SharePoint?

Our guess is you’re concerned about decreased performance and increased downtime.

But have you asked yourself why these apps tend to run slowly and erratically once virtualized? Here’s a hint: It’s not virtualization. No – these performance and availability issues stem from competition for shared storage resources. The resulting inconsistencies in service levels cause users to become frustrated, productivity and organizational efficiency to plunge – and your business to take hits that it can’t afford.

DataCore SANsymphony-V allows you to take advantage of the benefits that come with virtualizing tier 1 applications – without taking on the risks. With SANsymphony-V, you get: faster virtualized tier-1 apps, continuous availability (and peace of mind), less time and money spent on upgrades, and a more responsive, agile IT environment. Listen to this on-demand webcast or follow the links below to learn more.

Microsoft SQL Server
DataCore optimizes storage efficiency, performance and availability for scalable Microsoft SQL Server environments.

Microsoft SharePoint
DataCore empowers Microsoft SharePoint environments to provide the highest levels of data availability, performance, & responsiveness of service.

SAP Business Applications
DataCore ensures performance & scalability at a lower ownership cost, higher service quality, & protection of your critical SAP ERP & App systems.

Microsoft Exchange
DataCore optimizes storage efficiency, performance and availability for scalable Microsoft Exchange Environments.

Oracle
DataCore maximizes the performance and return on your investment in Oracle databases, applications, & data warehouses for your information infrastructure.

VDI
DataCore makes the economics of storage work for VDI. The SANsymphony-V storage hypervisor is specifically designed to handle the unique challenges of VDI environments.

Wednesday, 29 May 2013

DataCore’s SANsymphony-V Storage Virtualization Technical Deep Dive Series – Focus on Replication

DataCore’s SANsymphony-V – Replication

In this post I want to introduce you to the “Replication” feature offered by the storage hypervisor SANsymphony-V. A replication solution offers you the ability to keep a “copy” of your production data at a remote site, usually for disaster recovery scenarios. But a replication solution does not simply copy the data occasionally; it continuously keeps the copy up to date, as close as possible to the production data, to offer a good recovery point objective (RPO).
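
A minimal sketch of what “continuously keeping the copy up to date” looks like in code: writes complete locally and are shipped to the remote site from a queue, and the RPO at any moment is bounded by the backlog still waiting in that queue. Names and structure below are illustrative only, not SANsymphony-V’s actual interfaces:

```python
# Sketch of continuous asynchronous replication: writes complete locally
# and are shipped to the remote site from a queue. The RPO at any moment
# is bounded by the backlog still waiting in that queue.
import queue
import threading
import time

class AsyncReplicator:
    def __init__(self, send_to_remote):
        self.pending = queue.Queue()          # unshipped writes
        self.send_to_remote = send_to_remote  # callable(block_id, data)
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block_id, data):
        """Local write acknowledges immediately; replication trails behind."""
        self.pending.put((time.time(), block_id, data))

    def _drain(self):
        while True:                           # ship writes in order, forever
            _, block_id, data = self.pending.get()
            self.send_to_remote(block_id, data)

    def backlog(self):
        """Rough RPO exposure: number of writes not yet at the remote site."""
        return self.pending.qsize()

remote_copy = {}
rep = AsyncReplicator(lambda b, d: remote_copy.update({b: d}))
rep.write("block-1", "payload")
time.sleep(0.1)                               # give the drain thread a moment
print(remote_copy)                            # {'block-1': 'payload'}
```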

Read More: http://vtricks.com/?p=744

To be able to replicate data you need to partner your server group with a replication group. [Screenshot: Partner with Replication Group]
Once the server groups are connected, your SVV console will look similar to this. [Screenshot: Server Group Overview]

Tuesday, 28 May 2013

DataCore Survey Finds Cause for Data Storage Pause

Article from IT BusinessEdge: DataCore Survey Findings

With the rise of Flash memory and cloud computing, there have never been more options for managing data storage effectively. Obviously, that’s a very good thing, given the amount of data that needs to be managed.

A new survey of 477 IT professionals conducted by DataCore Software, a provider of storage virtualization software, finds that the move to embrace new approaches to storage is growing at a slow but steady pace.

DataCore CEO George Teixeira says that despite some of the inherent performance benefits of Flash and the promise of reduced storage costs in the cloud, issues such as the cost of Flash memory and the fact that applications are not optimized for Flash mean that the broad transition to Flash memory has thus far been hampered. As for the cloud, Teixeira says IT organizations are still struggling with any number of compliance and performance issues.

Add the fact that most IT organizations seem predisposed to build their own private cloud and it becomes clear that a large number of cultural and process issues still have to be worked out.

As Flash memory continues to get less expensive, it will change the way primary storage is managed, and the cloud will increasingly be relied on for backup and archiving. What’s not as clear is to what degree storage administrators will lead this charge versus having it forced upon them by developers and senior IT managers.

In either case, for now it looks like these changes will take place at an evolutionary rather than revolutionary pace.

Tuesday, 21 May 2013

DataCore storage virtualization software boosts telecom's DR strategy

When Chris Jones, manager of IT services at Blair, Neb.-based Great Plains Communications Inc., sought to improve his disaster recovery strategy, he knew exactly what he needed: synchronous mirroring of data hosted at two locations approximately 10 miles from each other. He also knew that he wanted to keep his storage array, even if it didn't support synch mirroring.

The independent local exchange service communications company has about 50 TB of storage, nearly all of which is virtualized through 200 virtual servers and 200 virtual desktops. Jones set about investigating options for synchronous mirroring capabilities, and said he learned quickly that "only the large manufacturers had that ability."
Jones's shop was running an EqualLogic iSCSI SAN array. "It was a very nice storage box," he said. "We didn't want to pull it out of service early." With VMware ESX and vMotion running at both locations, he wanted a way to balance workloads dynamically between the two data centers without buying a new storage system.
"DataCore [Software] had this offering that you could layer on top [of our existing system]," Jones said. He purchased DataCore's SANsymphony-V storage, virtualization software billed by the vendor as having the ability to auto-tier and manage storage in enterprises using incompatible devices from multiple suppliers. SANsymphony-V's feature list includes synchronous mirroring, disk pooling, high-speed caching and RAID pooling, among others. "We saw some pretty good performance improvement through [DataCore's] caching technology," Jones said. "Our primary interest was in the mirroring."
According to Jones, "If I did have storage failure, which has occurred, everything would fail over [to the other DataCore node]. Should one of those storage systems fail, the VMs [virtual machines] immediately fail over to the remote storage system and retain their operational store. At that point, you would be looking at recovering from a snapshot or, heaven forbid, you actually go back to backup these days." That means "all our VMs reside in two storage systems at any given time," he said.
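
The zero-loss failover Jones describes rests on synchronous mirroring: an application write is acknowledged only once both storage nodes have persisted it, so the surviving node is always current. A simplified sketch under those assumptions, with invented class names rather than any real product interface:

```python
# Sketch of a synchronous mirror write path: the application's write is
# acknowledged only after BOTH nodes persist it, so either node can take
# over with zero data loss. Classes are invented for illustration.
class Node:
    def __init__(self):
        self.store = {}

    def persist(self, block_id, data):
        self.store[block_id] = data      # would be a real disk write

    def fetch(self, block_id):
        return self.store[block_id]

class SyncMirror:
    def __init__(self, local, remote):
        self.nodes = (local, remote)

    def write(self, block_id, data):
        """Write to both sides before acknowledging to the application."""
        for node in self.nodes:
            node.persist(block_id, data)  # any failure blocks the ack
        return "ack"                      # both copies are now identical

    def read(self, block_id, preferred=0):
        """Serve reads from either node; fall back to the mirror if needed."""
        try:
            return self.nodes[preferred].fetch(block_id)
        except KeyError:
            return self.nodes[1 - preferred].fetch(block_id)

mirror = SyncMirror(Node(), Node())
mirror.write("vm-disk-7", "state")
print(mirror.read("vm-disk-7"))           # "state", from either node
```
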
SANsymphony-V enabled Jones to "break out of a single-layer approach. We no longer have to buy big, complex systems. We can buy an x86 off the shelf and DataCore adds all the SAN technologies on top of that."
DataCore's tiering works similarly to that of Dell's Compellent Data Progression, EMC's FAST VP, Hewlett-Packard's 3PAR Adaptive Optimization, Hitachi Data Systems' Dynamic Tiering and IBM's System Storage Easy Tier. But among those vendors, only Hitachi supports arrays outside of its own. SANsymphony-V's tiering works across any storage device.
Jon Toigo, CEO and managing principal at Toigo Partners International, runs DataCore in his own environment and tells IT customers to look at DataCore to avoid "buying feature-encrusted gear that jacks up the price.
"DataCore can overlay on top of anything that connects to the server and manage it all as one pool," Toigo said. "It leverages all the load balance, receives all the writes into memory on the server and writes to non-volatile RAM. The system thinks your storage is four times faster because it's going down to RAM."