Tuesday, 29 May 2012

The Top 2 virtualization worries keeping IT admins up all night

System downtime and slow application performance related to storage in virtualized environments are the primary concerns of IT administrators surveyed in a new report on the state of private clouds.

IT administrators are more concerned about performance issues and less about the cost of storage than they were a year ago, according to the report, “2012 State of the Private Cloud Survey” by DataCore Software, a provider of storage virtualization software.
DataCore’s survey, which polled IT administrators in the private and public sectors worldwide, revealed that 63 percent of respondents consider system downtime and slow application performance to be their primary storage-related virtualization concerns, up from 36 percent in 2011.
IT administrators still consider the rising cost of storage to be a problem with virtualization initiatives, but overall it is declining as a major concern, with just over half (51 percent) describing increasing storage costs as one of their biggest problems (down from 66 percent in 2011), the report states.

While increasing storage costs may be less of an issue than last year, storage-related costs continue to comprise a significant portion of virtualization budgets, with 44 percent of respondents saying that storage costs represent more than a quarter of their total budget for virtualization, the report states.

Many companies are allocating more money for storage, with 37 percent saying their storage budgets have increased this year, while just 13 percent say they have been cut.

Despite this, more than one in three respondents (34 percent) admit they underestimated the impact server/desktop virtualization would have on their storage costs. For those deploying a private cloud, more than one in four (28 percent) underestimated how storage costs would be affected.

“More companies are virtualizing more servers than ever before, but a notable faction of users – about a third – are underestimating the storage costs associated with server and desktop virtualization projects as well as the storage costs associated with private clouds,” said Mark Peters, senior analyst with Enterprise Strategy Group.

Throwing more money at storage has not reduced performance concerns for companies that have embraced server and desktop virtualization. Even with an increase in the average storage budget, more companies reported significant problems with storage-related performance, bottlenecks, downtimes and business continuity in 2012, the report states.

“Rather than simply continuing to expand traditional storage solutions, IT managers would be well advised to consider addressing performance and downtime issues with a storage virtualization solution that enables them to apply a simplified management approach to manage their storage resources in a single, logical storage pool," Peters said.

“As virtualization moves from theory to practice, storage-related performance and availability are becoming of greater concern to businesses, but cost concerns haven’t gone away,” said George Teixeira, president and CEO of DataCore Software, who recommends the use of storage hypervisors.

Storage hypervisors ensure high performance and availability in the storage infrastructure through features such as auto-tiering, device interchangeability, thin provisioning and continuous data protection, Teixeira noted. “A storage hypervisor solves the cost issue by enabling enterprises to make greater use of existing storage infrastructure, while reducing the need for large-scale storage hardware,” he said.

Read full article at: http://gcn.com/articles/2012/05/01/datacore-storage-virtualization-survey-top-concerns.aspx

Tuesday, 22 May 2012

DataCore Software Announces Enterprise-Wide Storage Hypervisor Management Integration with VMware vCenter™ to Empower vSphere Admins to Manage VMs, Snapshots and Storage from a Single Console


Enterprises can control and schedule key SANsymphony-V storage virtualization services directly from their VMware vCenter™ Server management platform

"The new DataCore Storage Hypervisor Plug-in for VMware vCenter Server combines two industry-leading management solutions to offer customers the best of both worlds through a single view infrastructure management," said Parag Patel, vice president, Global Strategic Alliances, VMware. "Customers can better manage and realize the full benefits of their desktop, server and storage resources in a VMware virtual environment."

DataCore Software today announced that it has released new plug-in software to integrate the powerful enterprise-wide storage management capabilities of DataCore’s SANsymphony™-V Storage Hypervisor with VMware vCenter™ Server. The management plug-in software is available immediately for download to all SANsymphony-V customers.

VMware vCenter is the de facto standard for managing VMware virtual infrastructures and the SANsymphony-V Storage Hypervisor combines powerful storage virtualization and enterprise-wide storage management. Together, these capabilities are seamlessly integrated to allow a VMware administrator to efficiently manage all their virtual machines (VMs) and storage resources from a single console.

DataCore and VMware – A Combination Delivering Compelling Benefits
VMware administrators are now able to provision, monitor and manage their storage and server resources from a single pane of glass. They can perform common storage tasks and complex scheduling workflows to clone, snapshot and restore data stores and VMs without having to become storage experts.

The vCenter Plug-In helps VMware administrators increase productivity and respond faster to user needs in a number of ways, including:


• Enabling an end-to-end view from VMware to storage, from a virtual machine, host or virtual disk perspective, with central monitoring and management of storage resources

• Rapidly provisioning virtual disks to VMware vSphere®/ESX® hosts

• Creating and restoring snapshots of data stores where VMs are housed

• Scheduling tasks to coordinate workflows between vSphere and SANsymphony-V servers

• Taking consistent snapshots of a data store for validation or before making changes to the environment

• Scheduling recovery clones (full copies) or efficient differential snapshots to rapidly restore VMs

• Applying high-availability mirror protection to virtual disks

• Visualizing current conditions in easy-to-understand terms to simplify troubleshooting and root cause analysis for virtual machines and their storage

• Lowering OpEx and enhancing administrator productivity through automated workflows, task wizards and less need for storage training and specialized skill sets

Simplify Difficult Storage Tasks: Snapshots, Clones and Restorations
The software’s built-in wizards and simple command menus take the complexity out of creating, scheduling and capturing snapshots of VMs. Administrators simply select the data store housing the specific VMs of interest. The plug-in signals the VMware vCenter Server to quiesce those VMs and then triggers SANsymphony-V to take snapshots of the corresponding data store. Both full clone and differential (lightweight) snapshots are supported. The known good state captured in those online snapshots can be used to set up new VMs or to quickly recover from malware and user errors that have damaged a VM's integrity.
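For readers who want a feel for the sequence, here is a minimal sketch of that quiesce-then-snapshot coordination in Python. The vcenter and sansymphony client objects and their methods are hypothetical stand-ins invented for this illustration; the actual plug-in performs this orchestration internally behind its wizards.

```python
# Rough sketch of the quiesce-then-snapshot sequence described above.
# "vcenter" and "sansymphony" are hypothetical client objects standing in for
# the two management layers; the real plug-in drives both behind its wizards.

from dataclasses import dataclass
from typing import List


@dataclass
class SnapshotResult:
    datastore: str
    snapshot_id: str
    differential: bool


def snapshot_datastore(vcenter, sansymphony, datastore: str,
                       differential: bool = True) -> SnapshotResult:
    """Quiesce the VMs on a data store, snapshot it, then resume them."""
    vms: List[str] = vcenter.vms_on_datastore(datastore)

    # 1. Ask vCenter to quiesce guest I/O so the snapshot is consistent.
    for vm in vms:
        vcenter.quiesce(vm)

    try:
        # 2. Trigger the storage-side snapshot (full clone or differential).
        snapshot_id = sansymphony.snapshot(datastore, differential=differential)
    finally:
        # 3. Resume the VMs whether or not the snapshot succeeded.
        for vm in vms:
            vcenter.resume(vm)

    return SnapshotResult(datastore, snapshot_id, differential)
```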

With a click of the mouse, snapshots can be scheduled at regular intervals, for instance Saturday mornings at 3:00 a.m., when they will not disrupt daily operations. The workflow can also define how long the snapshots should be retained before being deleted.
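The retention part of that workflow amounts to a simple pruning rule. Below is a small, hypothetical illustration of such a rule in Python; the snapshot names and the 28-day window are invented for the example, and in practice the settings live in the plug-in's scheduling wizard.

```python
# Hypothetical illustration of snapshot retention: keep snapshots for a fixed
# number of days and report anything older for deletion.

from datetime import datetime, timedelta
from typing import List, Tuple

Snapshot = Tuple[str, datetime]  # (snapshot id, creation time)


def expired_snapshots(snapshots: List[Snapshot], retain_days: int,
                      now: datetime) -> List[str]:
    """Return the ids of snapshots that fall outside the retention window."""
    cutoff = now - timedelta(days=retain_days)
    return [snap_id for snap_id, created in snapshots if created < cutoff]


if __name__ == "__main__":
    weekly = [("weekly-2012-04-01", datetime(2012, 4, 1, 3, 0)),
              ("weekly-2012-04-28", datetime(2012, 4, 28, 3, 0))]
    # With a 28-day window measured on 5 May 2012, only the 1 April copy expires.
    print(expired_snapshots(weekly, retain_days=28, now=datetime(2012, 5, 5)))
```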

VMware administrators want to focus on VMs, not storage complexity
Transitioning to a virtualized environment creates new requirements for effectively managing the underlying storage resources that have been virtualized. VMware administrators face requests to provision new VMs, set up data stores and manage cloning and snapshot copy operations, yet they often lack in-depth knowledge of the storage systems involved and have no end-to-end management, which leaves them with an incomplete view of how physical resources in the virtual environment are being used. This, in turn, increases the time and effort required to manage the environment. VMware administrators therefore need the right management tools to take on these tasks, and they already rely on VMware vCenter to manage their servers.

The SANsymphony-V Plug-in for VMware vCenter provides a rich set of management capabilities for storage. From the plug-in, vSphere administrators can fulfill the dynamic storage needs of their vSphere/ESX clusters, hosts and virtual machines without having to become storage experts or leave their familiar central console.

DataCore and VMware: The Fully Virtualized Data Center
“VMware customers around the globe running VMware vSphere rely on DataCore’s solutions to deliver the critical 'third dimension' of virtualization – storage. DataCore has architected its software to cost-effectively shape the shared storage infrastructure required by virtualization,” states Carlos Carreras, vice president of alliances and business development for DataCore Software. “The SANsymphony-V VMware vCenter Plug-in makes the storage management job significantly easier for VMware administrators. They now have a ‘one-stop’ administrative console to monitor and manage the full set of server and storage resources across their entire virtual data center.”

For more information:
Please see the SANsymphony™-V Plug-in for VMware vCenter or email DataCore at info@datacore.com.

Data sheet: http://pages.datacore.com/SANsymphony-VforVMwarevCenter.html

Wednesday, 16 May 2012

Storage hypervisor makes cloud storage fast, affordable and reliable


Cloud providers like to tout how quickly and easily they can scale their environments to meet customer demands, but nothing puts that to the test like cloud storage growth. Even outside of the cloud, storage demands can easily spiral out of control, and there comes a point when it's no longer economical for a provider to keep adding boxes. Anticipating that challenge before launching its cloud services in 2009, Host.net deployed DataCore Software's storage hypervisor, SANsymphony-V, a platform that Host.net credits with providing its customers with 900 days of consecutive, uninterrupted uptime.

In this case-study podcast, SearchCloudProvider.com site editor Jessica Scarpati gets a crash course on storage hypervisors from George Teixeira, CEO of DataCore Software, before diving into the details of Host.net's deployment with Jeffrey Slapp, vice president of virtualization services at the Florida-based cloud and managed services provider.

Friday, 11 May 2012

Can Storage Hypervisors Enable BYOD for Data Storage?

By George Teixeira, CEO & President, DataCore Software

I’m sure you have seen the wider use of the term BYOD (Bring Your Own Device) in our industry. While I know it mainly refers to mobile devices, it got me thinking about why this is an inevitable trend, fueled by customers wanting greater buying power and flexibility. So why shouldn’t the same trend extend to their storage devices?

In reality, with storage hypervisors it already has. DataCore’s SANsymphony-V software empowers users with hardware interchangeability (another way to say BYOD) and a powerful, enterprise-wide feature set that works with whatever the existing or latest storage devices happen to be. For the skeptics out there, try it out yourself: Download a Storage Hypervisor or give it a Test Drive on our hosted virtual environment.

Why do I say it’s inevitable? Because it has already happened with server hypervisors, and they make the point dramatically. Think how the world has changed in terms of server platforms. We now take for granted that, with VMware and Microsoft Hyper-V software in place, the hardware they run on has become largely irrelevant and, for the most part, a matter of personal preference.

Think about it. When you deploy VMware vSphere or Hyper-V, do you really care whether it is running on a Dell, HP, IBM or Intel server platform? Sure, you may have a preference, but it is clearly a secondary consideration. Instead, other factors like company buying practices, best price or the personal vendor choice of your boss often drive the decision. After the advent of server virtualization, we can take it for granted that software architecture is what truly matters and that the underlying hardware will continuously change. The software-based hypervisor architecture is what endures and enables new technologies to be incorporated over time; specific hardware devices (or brands) are like fads that will continue to “come and go.” The faster pace of change and the need to keep driving costs down make this trend inevitable. BYOD in a corporate sense means greater purchasing power and greater freedom to incorporate “best value” storage solutions when and where they are needed.
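To make the hardware-interchangeability argument concrete, here is a small illustrative sketch, not DataCore's actual design: applications talk to one storage interface, and the device behind it can be swapped without touching the code that uses it. All class and function names are invented for the example.

```python
# Illustrative only (not DataCore's implementation): callers program against an
# abstract storage interface, so the hardware behind it is interchangeable.

from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """The contract the virtualization layer expects from any device."""

    @abstractmethod
    def read(self, block: int) -> bytes: ...

    @abstractmethod
    def write(self, block: int, data: bytes) -> None: ...


class InMemoryArray(StorageBackend):
    """Stand-in for any vendor's array; only the interface matters here."""

    def __init__(self):
        self._blocks = {}

    def read(self, block: int) -> bytes:
        return self._blocks.get(block, b"")

    def write(self, block: int, data: bytes) -> None:
        self._blocks[block] = data


def migrate_block(src: StorageBackend, dst: StorageBackend, block: int) -> None:
    """Copy a block between devices of any make or model."""
    dst.write(block, src.read(block))


old_array, new_array = InMemoryArray(), InMemoryArray()
old_array.write(0, b"payload")
migrate_block(old_array, new_array, 0)   # the vendor of either side is irrelevant
print(new_array.read(0))                 # -> b'payload'
```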

Don’t take my word for it. Thousands of customers have already deployed DataCore SANsymphony-V and are realizing the compelling benefits of hardware independence. Find out more by checking out the SANsymphony-V analyst lab reports and the latest storage hypervisor capabilities like automated storage tiering that let you not only bring new devices into your infrastructure but get the most out of them.

Saturday, 5 May 2012

Big Smart Storage for a Big Data World - Major Issues to Address: Virtualization and Auto-tiering

http://www.storagereview.com/big_smart_storage_for_a_big_data_world

In 2012, the phrase “big data” is on everyone’s lips, just like virtualization was five years ago. And just as with virtualization and cloud computing, there is a lot of hype obscuring the more important discussions around IT challenges, the skill sets needed and how to actually build an infrastructure that can support effective big data technologies.

When people waxed poetic about the wonders of virtualization, they failed to recognize the new performance demands and enormous increase in storage capacity required to support server and desktop initiatives. The need for higher performance and continuous availability came at a high price, and clearly, the budget-busting impact subsequently delayed or scuttled many projects.

These problems are now being solved through the adoption of storage virtualization that harnesses industry innovations in performance, drives greater commoditization of expensive equipment, and automates storage management with new technologies like enterprise-wide auto-tiering software.

That same dynamic is again at work, only this time with big data.

Business intelligence firms would have you believe that you simply slap their analytics platforms on top of a Hadoop framework and BINGO – you have a big data application. The reality is that Hadoop production deployments are practically non-existent in large enterprises. There are simply not many people who understand and can take advantage of these new technologies. In addition, the ecosystem of supporting technologies and methodologies is still too immature, which means the industry is going through a painful process of trial and error, just as it did with virtualization five years ago. Again, underestimating storage requirements is emerging as a major barrier to adoption as the age-old practice of “throwing hardware at the problem” creates often-untenable cost and complexity.

To put it bluntly, big data does not simply require big storage – it requires big smart storage. Unfortunately, the industry is threatening to repeat major mistakes of the past!

Before I turn to the technology, let me point out the biggest challenge we face as we enter the world of big data: we need to develop the people who will drive and implement useful big data systems. A recent Forbes article indicated that our schools have not kept pace with the educational demands brought about by big data. I think the following passage sums up the problem well: “We are facing a huge deficit in people to not only handle big data, but more importantly to have the knowledge and skills to generate value from data — dealing with the non-stop tsunami. How do you aggregate and filter data, how do you present the data, how do you analyze them to gain insights, how do you use the insights to aid decision-making, and then how do you integrate this from an industry point of view into your business process?”

Now, back to the product technologies that will make a difference - especially those related to managing storage for this new paradigm.

As we all know, there has been an explosion in the growth of data, and traditional approaches to scaling storage and processing have begun to reach computational, operational and economic limits. A new tack is needed to intelligently manage and meet the performance and availability needs of rapidly growing data sets. Just a few short years ago, it was practically unheard of for organizations to be talking about scaling beyond several hundred terabytes; now discussions deal with several petabytes. In the past, companies also chose to archive a large percentage of their content on tape. However, in today’s on-demand, social-media-centric world, that’s no longer feasible. Users now demand instantaneous access and will not tolerate delays while data is restored from slow or remote tape vaults.

Instant gratification is the word of the day, and performance is critical.

New storage systems must automatically and non-disruptively migrate data from one generation of a system to another to effectively address long-term archiving. Also adding to this problem is the need to distribute storage among multiple data centers, either for disaster recovery or to place content closer to the requesting users in order to keep latency at a minimum and improve response times.

This is especially important for the performance of applications critical to the day-to-day running of a business, such as databases, enterprise resource planning and other transaction-oriented workloads. In addition to the data these resource-heavy applications produce, IT departments must deal with the massive scale of new media such as Facebook and YouTube. Traditional storage systems are not well suited to solve these problems, leaving IT architects with little choice but to attempt to develop solutions on their own or suffer with complex, low-performing systems.

Storage systems are especially challenged in big data and cloud environments because they were designed to be deployed as larger and larger storage arrays, typically confined to a single location. These high-end proprietary systems have come at a high price in terms of capital and operational costs, as well as agility and vendor lock-in. They provide inadequate mechanisms to create a common, centrally managed resource pool. This leaves an IT architect to solve the problem with a multitude of individual storage systems, usually composed of varied brands, makes and models, each requiring a different approach to replication across sites and some form of custom management.
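As a rough illustration of what a common, centrally managed resource pool looks like compared with many individual boxes, the sketch below aggregates capacity from dissimilar devices into one pool and provisions virtual disks from the whole; the class and device names are invented for the example.

```python
# Invented example: capacity from several dissimilar devices is aggregated into
# a single logical pool, and virtual disks are carved out of the whole rather
# than out of any one box.

class StoragePool:
    def __init__(self):
        self._devices = {}      # device name -> capacity in GB
        self._allocated_gb = 0  # capacity already handed out as virtual disks

    def add_device(self, name, capacity_gb):
        """Any brand, make or model contributes raw capacity to the same pool."""
        self._devices[name] = capacity_gb

    @property
    def total_gb(self):
        return sum(self._devices.values())

    def provision(self, name, size_gb):
        """Create a virtual disk backed by the pool as a whole."""
        if self._allocated_gb + size_gb > self.total_gb:
            raise RuntimeError("pool exhausted; add another device of any kind")
        self._allocated_gb += size_gb
        return f"{name}: {size_gb} GB from a {self.total_gb} GB pool"


pool = StoragePool()
pool.add_device("existing_midrange_array", 2000)  # gear already on the floor
pool.add_device("commodity_jbod", 8000)           # cheaper bulk capacity
print(pool.provision("exchange_datastore", 500))
```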



With this in mind, there are major issues to address:
  • Virtual Resource Management and Flexibility: It takes smart software to transition the current landscape of monolithic, independent solutions into a virtual storage resource pool that incorporates and harnesses the large variety of storage hardware and devices available today.
  • High Capital and Operational Costs: To reduce operational costs, achieving greater productivity through automation and centralized management is essential. Just as importantly, empowering companies to commoditize their purchases and interchange hardware is critical to cost-effective storage scalability. No one can afford “big bucks” to throw more expensive “big box” solutions at a big data world. Hypervisors such as VMware and Microsoft Hyper-V have opened up purchasing power for users; today, the choice of a Dell, HP, IBM or Intel server has become largely a matter of company preference. The same approach is required for storage, and it is driving the need for storage hypervisors.
  • Scalability Demands Performance and Continuous Availability: As data storage grows and more applications, users and systems need to be serviced, the requirements for higher performance and availability rise in parallel. It is no longer about how much can be contained in a large box, but about how new technologies can be harnessed to make storage faster and more responsive. Likewise, no matter how reliable any one storage box may be, it does not compare to many systems working together across different locations to ensure the highest level of business continuity, regardless of the underlying hardware.
  • Dynamic Versus Static Storage: It is also obvious that administrators can no longer manually keep up with optimizing where enterprise data lives for the best performance and cost trade-offs. Software automation is now critical. Technologies such as thin provisioning that span multiple storage devices are necessary for greater efficiency and better storage asset utilization. Powerful enterprise-wide capabilities that work across a diversity of storage classes and device types are becoming a must-have, and automated storage tiering is required to migrate data from one class of storage to another. As a result, sophisticated storage hypervisors are critical, regardless of whether data and I/O traffic are best served by high-speed dynamic RAM, solid state drives, disks or even cloud storage providers (a minimal sketch of the tiering idea follows this list).
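Here is the promised minimal auto-tiering sketch. The tier names and thresholds are invented for illustration; the point is simply that access statistics, rather than administrators, decide where a block of data lives.

```python
# Minimal auto-tiering sketch (tier names and thresholds invented for
# illustration): hot blocks are promoted toward faster tiers, cold blocks are
# demoted toward cheaper ones, based on recent access counts.

TIERS = ["dram_cache", "ssd", "sas_disk", "cloud_archive"]  # fastest -> cheapest
PROMOTE_AT = 100   # accesses per interval that justify a faster tier
DEMOTE_AT = 5      # accesses per interval below which a cheaper tier suffices


def retier(block_tier, access_counts):
    """Return new tier assignments for each block id."""
    new_assignment = {}
    for block, tier in block_tier.items():
        idx = TIERS.index(tier)
        hits = access_counts.get(block, 0)
        if hits >= PROMOTE_AT and idx > 0:
            idx -= 1            # hot block: move one tier faster
        elif hits <= DEMOTE_AT and idx < len(TIERS) - 1:
            idx += 1            # cold block: move one tier cheaper
        new_assignment[block] = TIERS[idx]
    return new_assignment


print(retier({"db_index": "ssd", "old_logs": "sas_disk"},
             {"db_index": 500, "old_logs": 0}))
# -> {'db_index': 'dram_cache', 'old_logs': 'cloud_archive'}
```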

Intensive applications now need a storage system that can start small yet scale easily to many petabytes. It must serve content rapidly and handle the growing workload of thousands of users each day. It must also be exceedingly easy to use, automate as much as possible and not demand a high degree of specialized expertise to deploy or tune. Perhaps most importantly, it must change the paradigm of storage from being cemented to a single array or location to being a dispersed, yet centrally managed, system that can actively store, retrieve and distribute content anywhere it is needed. The good news is that storage hypervisors like DataCore’s SANsymphony-V deliver an easy-to-use, scalable solution for the IT architect faced with large amounts of data running through a modern-day company.

Yes, big data offers big potential. Let’s just make sure we get storage right this time to avoid repeating the big mistakes of the past.

Wednesday, 2 May 2012

DataCore Software 2012 Private Cloud and Storage Virtualization Survey: Performance Bottlenecks and Downtime Top Storage-Related User Concerns


A Perfect Storm of Possibilities for Storage Virtualization

http://vmblog.com/archive/2012/05/01/datacore-software-2012-private-cloud-and-storage-virtualization-survey-performance-bottlenecks-and-downtime-top-storage-related-user-concerns.aspx
According to Mark Peters, senior analyst, Enterprise Strategy Group, “If the DataCore survey shows anything, it's that the time is ripe for storage virtualization, both to meet the business objectives associated with virtualization projects and to reduce the risks associated with those initiatives. More companies are virtualizing more servers than ever before, but a notable faction of users – about a third – are underestimating the storage costs associated with server and desktop virtualization projects as well as the storage costs associated with private clouds.

“Rather than simply continuing to expand traditional storage solutions, IT managers would be well advised to consider addressing performance and downtime issues with a storage virtualization solution that enables them to apply a simplified management approach to manage their storage resources in a single, logical storage pool. For the respondents in DataCore's survey – and others – who are not using storage virtualization, I would say that logic, availability, and need are all aligned to say it's time to take a serious look.”


Government Computer News: 2 virtualization worries that keep admins up at night


System downtime and slow application performance related to storage in virtualized environments are the primary concerns of IT administrators surveyed in a new report on the state of private clouds.

IT administrators are more concerned about performance issues and less about the cost of storage than they were a year ago, according to the report, “2012 State of the Private Cloud Survey” by DataCore Software, a provider of storage virtualization software.

DataCore’s survey, which polled 289 IT administrators in the private and public sectors worldwide, revealed that 63 percent of respondents consider system downtime and slow application performance to be their primary storage-related virtualization concerns, up from 36 percent in 2011. About 10.8 percent of the respondents state they work with government.

IT administrators still consider the rising cost of storage to be a problem with virtualization initiatives, but overall it is declining as a major concern, with just over half (51 percent) describing increasing storage costs as one of their biggest problems (down from 66 percent in 2011), the report states.

While increasing storage costs may be less of an issue than last year, storage-related costs continue to comprise a significant portion of virtualization budgets, with 44 percent of respondents saying that storage costs represent more than a quarter of their total budget for virtualization, the report states.

Many companies are allocating more money for storage, with 37 percent saying their storage budgets have increased this year, while just 13 percent say they have been cut.

Despite this, more than one in three respondents (34 percent) admit they underestimated the impact server/desktop virtualization would have on their storage costs. For those deploying a private cloud, more than one in four (28 percent) underestimated how storage costs would be affected.

...“As virtualization moves from theory to practice, storage-related performance and availability are becoming of greater concern to businesses, but cost concerns haven’t gone away,” said George Teixeira, president and CEO of DataCore Software, who recommends the use of storage hypervisors.

Storage hypervisors ensure high performance and availability in the storage infrastructure through features such as auto-tiering, device interchangeability, thin provisioning and continuous data protection, Teixeira noted. “A storage hypervisor solves the cost issue by enabling enterprises to make greater use of existing storage infrastructure, while reducing the need for large-scale storage hardware,” he said.

The online survey was conducted in March 2012. The survey asked a series of questions about virtualization and its impact on storage.