Friday, 30 August 2013

Vancouver School Board and Tiering; DataCore Automatically Triages Storage For Performance

The Vancouver School Board (VSB) in Canada has been using storage virtualization for over 10 years.

...As the start of a new school season looms, the Vancouver School Board (VSB) is looking to automated storage tiered technology to help its systems cope with spikes in workload and storage demand.

Serving a staff of 8,000 and more than 50,000 students, the VSB network infrastructure is subject to storage and workload demands that can peak rapidly at any given time during the school season. The IT department supports employee email accounts, human resources applications, financial systems, student records, the board’s Web site, and the applications needed for various subjects and school projects.

“Getting optimum use of our server network is critical in providing optimum system performance and avoiding system slowdowns or outages,” said Peter Powell, infrastructure manager at VSB.

Rapidly moving workload from a server that may be approaching full capacity to an unused server in order to prevent an overload is one of the IT team's crucial tasks, he said.

“The biggest single request we get is IOs for storage, getting data on and off disks,” said Powell. “We normally need one person to spend the whole day monitoring server activity and making sure workload is assigned to the right server. That’s one person taken away from other duties that the team has to do.”

Recently, however, the VSB IT team has been testing the new auto-tiering feature that DataCore Software introduced in its SANsymphony-V storage virtualization software.

The VSB has been working with DataCore for more than 12 years on other projects. The board based its storage strategy on SANsymphony-V back in 2002, when the IT department found it was getting harder to keep up with the storage demands of each application. SANsymphony-V’s virtualization technology allowed the IT team to move workload from servers that were running out of disk space to servers that had unused capacity.

“In recent test runs we found that the system could be configured to monitor set thresholds and automatically identify and move workloads when ascribed limits are reached,” Powell said. “There’s no longer any need for a person to monitor the system all the time.”
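For readers curious what that kind of threshold-driven automation looks like, here is a minimal sketch in Python. It is an illustration only, not DataCore's implementation; the pool structure, the 80 per cent limit and the migration rule are all assumptions made for the example.

```python
# Minimal sketch of threshold-driven workload placement (hypothetical names and
# structure, not DataCore's actual implementation). Each pool is checked against a
# utilization limit; the busiest volume on an overloaded pool is queued for
# migration to the pool with the most free space.

UTILIZATION_LIMIT = 0.80  # assumed threshold: act when a pool exceeds 80% used

def plan_migrations(pools):
    """pools: list of dicts like
       {'name': 'pool-A', 'used_tb': 30, 'capacity_tb': 36, 'volumes': [{'name': ..., 'size_tb': ...}]}"""
    overloaded = [p for p in pools if p['used_tb'] / p['capacity_tb'] > UTILIZATION_LIMIT]
    plan = []
    for pool in overloaded:
        # Pick the pool with the most free space as the migration target.
        target = max((p for p in pools if p is not pool),
                     key=lambda p: p['capacity_tb'] - p['used_tb'])
        # Move the largest volume first; a real system would weigh IOPS, not just size.
        volume = max(pool['volumes'], key=lambda v: v['size_tb'])
        plan.append((volume['name'], pool['name'], target['name']))
    return plan
```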

Automatic Triage for Storage: Optimize Performance and Lower Costs
George Teixeira, CEO of DataCore, separately stated that the auto-tiering feature acts “like doing triage for storage.”

“SANsymphony-V can help organizations optimize their cost of storage for performance trade-offs,” he said. “Rather than purchase more disks to keep up with performance demand, auto-tiering helps IT identify available performance and instantly moves the workload there. DataCore prioritizes the use of your storage to maximize performance within your available levels of storage.”

In terms of capacity utilization and cost, DataCore's ability to auto-provision and pool capacity from different vendors is key. He said some surveys indicate that most firms typically use only 30 per cent of their existing storage, but DataCore can help businesses use as much as 90 per cent of it.
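The arithmetic behind that utilization claim is simple enough to sketch. The figures below are illustrative only, using the 30 and 90 per cent numbers quoted above and an assumed 100 TB of installed capacity.

```python
# Back-of-the-envelope view of the utilization claim (illustrative numbers only).
# If a site owns 100 TB but typically uses 30% of it, raising utilization to 90%
# by pooling exposes roughly 60 TB of already-purchased capacity.

raw_tb = 100                      # assumed installed capacity
typical_utilization = 0.30        # "most firms use only 30 per cent"
pooled_utilization = 0.90         # "as much as 90 per cent" with pooling

usable_before = raw_tb * typical_utilization   # 30 TB
usable_after = raw_tb * pooled_utilization     # 90 TB
print(f"Capacity reclaimed without new purchases: {usable_after - usable_before:.0f} TB")
```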

Post based on the recent article in ITworld: Vancouver school board eyes storage auto-tiering

Wednesday, 28 August 2013

Financial services provider EOS benefits from a storage virtualization environment

EOS Group turned to Fujitsu and DataCore to modernize its storage infrastructure.
Also please see: the Fujitsu case study on EOS Group, posted in German.

The global EOS Group, one of Europe's leading providers of financial and authorization services for the banking and insurance sectors, employs about 5,000 people and has grown rapidly over the last few years. At its Hamburg headquarters, meeting increasing business demands posed a number of IT challenges: processing and verifying sensitive customer data, further automating the acquisition of new clients, tracking and maintaining accounting information, alerting clients to important follow-ups and reminders, processing payables and collecting receivables, and buying and selling portfolio items. All of these services are top-priority jobs for the headquarters IT team.

The Challenge:
To get control of their ever-growing amounts of data, EOS decided they needed a flexible, extensible and easy-to-manage solution to improve their storage and storage area network (SAN). They also needed a highly available solution, an aspect that would play a critical role in their decision, and they were obviously looking to save on operational costs as well, with a simultaneous reduction in energy and maintenance expenses.

They needed a massive expansion of their data centers to keep up with growth, and they knew they had to modernize their systems to achieve greater productivity. They decided a completely new solution was a necessity.

The Solution:
The financial services provider opted for the storage virtualization software solution from DataCore to virtualize, manage and work across its Fujitsu ETERNUS DX80 storage systems and PRIMERGY servers within the SAN.

The result: The EOS Group benefited from increased performance, the highest levels of data protection and reduced maintenance costs.



ETERNUS DX80

Eight Fujitsu ETERNUS DX80 storage systems, each with 36 terabytes of net disk space, and four Fujitsu PRIMERGY RX600 servers allowed the EOS Group to scale up well beyond its original 30 terabytes of capacity. After the expansion they now manage 288 terabytes of net capacity under DataCore storage virtualization control, and further data growth will no longer be a problem since they can easily scale the architecture. Thanks to the DataCore virtualization solution, they can now manage all their storage in a common way and scale independently of the underlying physical hardware.

What else did EOS want? Greater business continuity, resiliency against major failures and disaster recovery, of course. Fujitsu and DataCore therefore combined to deliver an even higher level of protection and business continuity and to cope with disaster events. The system was set up to mirror data to a remote site across town that is kept fully in sync with the main site. The setup takes advantage of DataCore's ability to stretch mirrors at high speed: the software automatically updates and mirrors the data to create identical virtual disks that reside on different drives and can be arbitrarily far apart. With DataCore, EOS has achieved its goal of greater business continuity and disaster recovery.
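The mirroring behaviour described above can be pictured as a synchronous write path: the host's write completes only once both sites hold the data. The sketch below is a generic illustration under that assumption, not the SANsymphony-V internals; the Site class is simply a stand-in for a storage pool.

```python
# Rough sketch of a synchronous mirror write path (generic illustration only).
# A write is acknowledged to the host only after both the local and the remote
# copy have committed it, which is what keeps the two sites identical and in sync.

class Site:
    def __init__(self, name):
        self.name, self.blocks, self.degraded = name, {}, set()
    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True

def mirrored_write(block_id, data, local, remote):
    local_ok = local.write(block_id, data)
    remote_ok = remote.write(block_id, data)      # synchronous round trip across town
    if local_ok and remote_ok:
        return "ack"                              # host sees the write as complete
    # If one side fails, mark it degraded and resynchronize it later.
    (remote if local_ok else local).degraded.add(block_id)
    return "ack-degraded"

print(mirrored_write("blk-42", b"payload", Site("main-site"), Site("remote-site")))
```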

Tuesday, 13 August 2013

Improving application performance and overcoming storage bottlenecks are the top business priorities for virtualized environments

Some interesting reports:

Enterprise Strategy Group recently commented on how organizations are resisting the move to virtualize their tier-1 applications due to poor performance:

"With the increasing demands on IT, users have less tolerance for poor application performance," said Mike Leone, ESG Lab engineer, Enterprise Strategy Group. "Many organizations resist moving tier-1 applications to virtual servers for fear that workload aggregation will slow performance. As a poor attempt to combat the problem, organizations add more hardware and software to the IT infrastructure, but with that comes higher costs and increased complexity. ESG research on virtualization revealed that after budget concerns and lack of legacy application support, performance issues were the key concern preventing organizations from expanding their virtualization deployments."

Storage and I/O bottlenecks appear to be the major obstacles. A survey recently published by Gridstore highlights the top priorities surrounding mid-market enterprises' virtual infrastructure requirements.



The industry survey revealed that improving application performance (51%) was the top business priority for virtualized environments, followed by the need to reduce I/O bottlenecks between VMs and storage (34%), the need for increased VM density (34%), the need to decrease storage costs (27%), and the need for improved manageability for virtualized systems (24%).



Respondents in the survey agreed that storage resources correlate directly with application performance and, as a result, with the ROI derived from virtualization projects. When asked about the top five factors they consider when choosing storage systems for virtualization projects, the highest-priority responses included the ability for storage to scale performance as needed (47%), the ability for storage to scale capacity as needed (47%), and the ability for storage to scale I/O as needed (37%).

The survey was conducted across 353 organizations representing multiple industry categories particularly in the areas of technology (14%), healthcare (13%), education (11%), government (8%), and finance (6%). The majority of responding companies had over 1,000 employees (93%) and more than 100 servers (59%).

http://www.storagenewsletter.com/news/marketreport/gridstore-application-performance

Friday, 9 August 2013

Virtualized Databases: How to Strike the Right Balance Between Solid State Technologies and Spinning Disks


Originally published in Database Trends and Applications magazine, written by Augie Gonzalez
http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=90460

If money were not an issue, we wouldn’t be having this conversation. But money is front and center in every major database roll-out and optimization project, and even more so in the age of server virtualization and consolidation. It often forces us to settle for good enough, when we first aspired to swift and non-stop.

The financial tradeoffs are never more apparent than they have become with the arrival of lightning fast solid state technologies. Whether solid state disks (SSDs) or flash memories, we lust for more of them in the quest for speed, only to be moderated by silly constraints like shrinking budgets.

You know too well the pressure to please those pushy people behind the spreadsheets. The ones keeping track of what we spent in the past, and eager to trim more expenses in the future. They drive us to squeeze more from what we have before spending a nickel on new stuff. But if we step out of our technical roles and take the broader business view, their requests are really not that unreasonable. To that end, let’s see how we can strike a balance between flashy new hardware and the proven gear already on the data center floor. By that, I mean arriving at a good mix between solid state technologies and the conventional spinning disks that have served us well in years gone by.

On the face of it, the problem can be rather intractable.  Even after spending tedious hours of fine tuning you’d never really be able to manually craft the ideal conditions where I/O intensive code sections are matched to flash, while hard disk drives (HDDs) serve the less demanding segments. Well, let me take that back - you could when databases ran on their own private servers.  The difficulty arises when the company opts to consolidate several database instances on the same physical server using server virtualization. And then wants the flexibility to move these virtualized databases between servers to load balance and circumvent outages.

Removing the Guesswork
When it was a single database instance on a dedicated machine, life was predictable. Guidelines for beefing up the spindle count and channels to handle additional transactions or users were well-documented. Not so when multiple instances collide in incalculable ways on the same server, made worse when multiple virtualized servers share the same storage resources. Under those circumstances you need little elves running alongside to figure out what’s best. And the elves have to know a lot about the behavioral and economic differences between SSDs and HDDs to do what’s right.

Turns out you can hire elves to help you do just that. They come shrink-wrapped in storage virtualization software packages. Look for the ones that can do automated storage tiering objectively - meaning, they don’t care who makes the hardware or where it resides.

On a more serious note, this new category of software really takes much of the guesswork, and the costs, out of the equation. Given a few hints on what should take priority, it makes all the right decisions in real time, keeping in mind all the competing I/O requests coming across the virtual wire. The software directs the most time-sensitive workloads to solid state devices and the least important ones to conventional drives or disk arrays. You can even override the algorithms to specifically pin some volumes on a preferred class of storage, say end-of-quarter jobs that must take precedence.
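As a rough picture of that decision logic, the sketch below assigns volumes to a solid state or spinning tier by a simple I/O-rate cutoff, with a pinning override for volumes that must always sit on a preferred class of storage. The names, the 5,000-IOPS cutoff and the volume structure are hypothetical; real auto-tiering works at sub-volume granularity and adapts continuously.

```python
# Illustrative tier-selection logic (hypothetical and simplified): hot or pinned
# volumes land on solid state, everything else on spinning disk.

def choose_tier(volume, pinned):
    if volume["name"] in pinned:          # explicit override, e.g. end-of-quarter jobs
        return pinned[volume["name"]]
    return "ssd" if volume["iops"] > 5000 else "hdd"   # assumed hot/cold cutoff

pinned = {"finance-q4": "ssd"}            # pin this volume regardless of measured load
volumes = [{"name": "finance-q4", "iops": 800},
           {"name": "oltp-main", "iops": 12000},
           {"name": "archive", "iops": 50}]
for v in volumes:
    print(v["name"], "->", choose_tier(v, pinned))
```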

Better Storage Virtualization Products
The better storage virtualization products go one better. They provide additional turbo charging of disk requests by caching them on DRAM. Not just reads, but writes as well. Aside from the faster response, write caching helps reduce the duty cycle on the solid state memories to prolong their lives. Think how happy that makes the accountants. The storage assets are also thin provisioned to avoid wasteful over-allocation of premium-priced hardware.
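A much-simplified picture of that write-caching behaviour: writes are absorbed in DRAM and flushed to the flash-backed pool in batches, so the host sees memory-speed acknowledgements and the flash cells see fewer, coalesced writes. This is an assumption-laden sketch, not any product's cache design; the backend is just any object with a write(block_id, data) method.

```python
# Simplified write-back cache sketch (illustrative only). Writes are held in DRAM
# and flushed to the flash tier in batches, reducing the write load on the cells.

class WriteCache:
    def __init__(self, backend, flush_threshold=64):
        self.backend = backend            # e.g. an SSD-backed pool object
        self.dirty = {}                   # block_id -> data held in DRAM
        self.flush_threshold = flush_threshold

    def write(self, block_id, data):
        self.dirty[block_id] = data       # acknowledged at memory speed
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for block_id, data in self.dirty.items():
            self.backend.write(block_id, data)   # one coalesced pass to flash
        self.dirty.clear()
```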

This brings us to the question of uptime. How do we maintain database access when some of this superfast equipment has to be taken out of service? Again, device-independent storage virtualization software has much to offer here. Particularly those products which can keep redundant copies of the databases and their associated files on separate storage devices, despite model and brand differences. What’s written to a pool of flash memory and HDDs in one room is automatically copied to another pool of flash/HDDs. The copies can be in an adjacent room or 100 kilometers away. The software effectively provides continuous availability using the secondary copy while the other piece of hardware is down for upgrades, expansion or replacement. Same goes if the room where the storage is housed loses air conditioning, suffers a plumbing accident, or is temporarily out of commission during construction/remodeling.

The products use a combination of synchronous mirroring between like or unlike devices, along with standard multi-path I/O drivers on the hosts to transparently maintain the mirror images. They automatically fail-over and fail-back without manual intervention. Speaking of money, no special database replication licenses are required either. The same mechanisms protecting the databases also protect other virtualized and physical workloads, helping to converge and standardize business continuity practices.
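The failover half of that arrangement amounts to path selection: while the preferred path to one mirror copy is healthy, I/O uses it, and when it is not, I/O silently moves to the surviving path. The snippet below illustrates that generic multipath behaviour with made-up path records; it is not a specific driver's API.

```python
# Minimal illustration of transparent path failover (generic multipath behavior).

def submit_io(io, paths):
    for path in sorted(paths, key=lambda p: p["priority"]):
        if path["healthy"]:
            return path["target"], io          # I/O continues without manual intervention
    raise RuntimeError("all paths down")       # both mirror copies unreachable

paths = [{"target": "mirror-A", "priority": 0, "healthy": False},   # primary offline
         {"target": "mirror-B", "priority": 1, "healthy": True}]
print(submit_io("write blk-7", paths))          # ('mirror-B', 'write blk-7')
```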

And for the especially paranoid, you can keep distant replicas at disaster recovery (DR) sites as well. For this, asynchronous replication occurs over standard IP WANs.

If you follow the research from industry analysts, you’ve already been alerted to the difficulties of introducing flash memories/SSDs into an existing database environment with an active disk farm. Storage virtualization software can overcome many of these complications and dramatically shorten the transition time. For example, the richer implementations allow solid state devices to be inserted non-disruptively into the virtualized storage pools alongside the spinning disks. In the process, you simply classify them as your fastest tier, and designate the other storage devices as slower tiers. The software then transparently migrates disk blocks from the slower drives to the speedy new cards without disturbing users. You can also decommission older spinning storage with equal ease or move it to a DR site for the added safeguard.
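Conceptually, the background migration resembles the sketch below: track how hot each block is, promote the hottest blocks to the flash tier you have classified as fastest, and leave the rest on spinning disk. The data layout and numbers are invented for the example; the real software does this transparently and incrementally.

```python
# Sketch of heat-based block promotion once a new flash tier is classified as
# "fastest" (hypothetical structure; illustration only).

def rebalance(blocks, flash_capacity):
    """blocks: dict of block_id -> access_count; returns (promote, keep_on_hdd)."""
    by_heat = sorted(blocks, key=blocks.get, reverse=True)
    promote = by_heat[:flash_capacity]           # hottest blocks fill the flash tier
    keep_on_hdd = by_heat[flash_capacity:]
    return promote, keep_on_hdd

heat = {"blk-1": 9200, "blk-2": 15, "blk-3": 4800, "blk-4": 3}
print(rebalance(heat, flash_capacity=2))         # 'blk-1' and 'blk-3' go to flash
```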

Need for Insight
Of course, you’d like to keep an eye on what’s going on behind the scenes. Built-in instrumentation in the more comprehensive packages provides that precious insight. Real-time charts reveal fine grain metrics on I/O response and relative capacity consumption. They also provide historical perspectives to help you understand how the system as a whole responds when additional demands are placed on it, and anticipate when peak periods of activity are most likely to occur. Heat maps display the relative distribution of blocks between flash, SSDs and other storage media, including cloud-based archives.
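A toy version of that heat-map view, assuming we simply know which class of storage each block currently occupies, might summarize the distribution like this:

```python
# Summarize what share of blocks currently lives on each class of storage
# (assumed data layout, for illustration only).

from collections import Counter

placements = {"blk-1": "flash", "blk-2": "hdd", "blk-3": "flash",
              "blk-4": "hdd", "blk-5": "cloud-archive"}

totals = Counter(placements.values())
for tier, count in totals.items():
    print(f"{tier}: {count / len(placements):.0%} of blocks")
```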

What can you take away from this? For one, solid state technologies offer an attractive way to accelerate the speed of your critical database workloads. No surprise there. Used in moderation to complement fast spinning disks and the high-capacity bulk storage already in place, SSDs help you strike a nice balance between excellent response time and responsible spending. To establish and maintain that equilibrium in virtualized scenarios, you should accompany the new hardware with storage virtualization software – the device-independent type. This gives you the optimal means to assimilate flash/SSDs into a high-performance, well-tuned, continuously available environment. In this way, you can please the financial overseers as well as the database subscribers, not to mention all those responsible for its care and feeding – you included.

About the author:
Augie Gonzalez is director of product marketing for DataCore Software and has more than 25 years of experience developing, marketing and managing advanced IT products.  Before joining DataCore, he led the Citrix team that introduced simple, secure, remote access solutions for SMBs. Prior to Citrix, Gonzalez headed Sun Microsystems Storage Division’s Disaster Recovery Group. He’s held marketing / product planning roles at Encore Computers and Gould Computer Systems, specializing in high-end platforms for vehicle simulation and data acquisition.