Tuesday, 31 January 2012

Enterprise Systems: From Clouds Comes Clarity in 2012

http://esj.com/articles/2012/01/16/From-Clouds-Comes-Clarity.aspx

By George Teixeira, CEO and President, DataCore Software

What can bridge the state of virtualization today and the unfettered reality we seek from the cloud? A true Storage Hypervisor may hold the key.

Some people read palms or tea leaves to predict the future. I'm looking at clouds.

Clouds are what all sorts of people are talking about these days. I was at dinner the other evening and overheard a conversation at the next table between two senior citizens and what was likely a more tech-savvy 20-something grandchild and his companion. When one asked the grandmother where she keeps the photographs she was showing them, she confidently answered, "They're in the cloud." Now, I'll bet she wouldn't have been comfortable entrusting her precious photos to me, to you, or to a company that might not be in business in the future. That unease would likely be exacerbated by the complexity of any assurances as to how these photos are stored, made available, or protected.

Yet, she is quite comfortable with "the cloud."

That's how powerful the cloud metaphor has become. It uses a level of abstraction (the picture of a cloud) to represent, in a simple, nonthreatening, "I don't need to think about that" way, the complex hardware and software components and internal network connections actually required to provide the services delivered. When people refer to cloud computing, what they are really talking about is the ability to simplify IT by abstracting the complexity of the data center from a bunch of individually managed elements into a service that is offered as part of a holistic "cloud."

This simplification through abstraction is also the cornerstone of virtualization. In fact, the clamor for the cloud is both a compliment to the attributes of virtualization and a criticism of its progress to date. Virtualization is the key to cloud computing because it is the enabling technology allowing the creation of an intelligent abstraction layer that hides the complexity of underlying hardware or software. In the call for clouds, I hear an industry being challenged: "OK, we see what is possible through virtualization, so fill in the missing pieces and deliver on the promise already."

Software is the key to making clouds work because, in the cloud, resources (e.g., server computers, network connections, and desktops) must be dynamic. Simply put, only software can take a collection of static, inflexible hardware devices and create from them flexible resource pools that can be allocated dynamically. Hypervisor solutions, like those from VMware and Microsoft, demonstrated the benefits of treating devices as software abstractions at the server level (and, to a lesser degree, the desktop) and the importance of the interchangeable servers that have now become the norm. It really does not matter whether a Dell, HP, IBM or Intel server is the resource involved; that is a secondary consideration subject to price or particular vendor preference. From this experience, the market has become familiar with what is possible with virtualization.

Virtualization gives us greater productivity and faster responses to changing needs because software-abstracted resources are not static and can be deployed flexibly and dynamically. It also gives us better economies of scale because these resources are pooled and can be easily changed and supplemented "behind the curtain" to keep up with growing and changing user demands. Yes, with a hypervisor we have freed our servers and desktops from their physical bonds.

Still, it's just a taste of freedom; a removal of the handcuffs, but not the ankle chains; merely a lengthening of the leash, because eventually you hit the end and get yanked back to reality by the confines of your storage. Even amidst the great, industry-wide, liberation-through-virtualization movement of recent years, the answer when it comes to storage, unfortunately, has been to continue building traditional physical architectures that severely limit and, in fact, contradict the virtual infrastructures they are intended to support. They propagate locked-in, vendor-specific hardware "silos" instead of decoupling storage resources from physical devices with a software abstraction layer.

In my view, that is the big hole in the ground that we keep falling into while desperately scanning the skies for clouds and a user experience free from physical architecture and hardware. Just what is it that can bridge the state of virtualization today and that unfettered reality we seek from clouds?

The answer: a Storage Hypervisor. In 2012, this critical piece of the fluffy puzzle will fill that gap in virtualization's march forward, clarifying how to bring that cloud future home to our storage today.

A Storage Hypervisor enables a new level of agility and storage hardware interchangeability. It creates an abstraction layer between applications running on servers and the physical storage used for data. Virtualizing storage and incorporating intelligence for provisioning and protection at the virtualization layer makes it possible to create a new and common level of storage management that works across the spectrum of assets, including server-attached disks, SSDs, disk systems, and storage in the cloud. Because it abstracts physical storage into virtual pools, any storage can work with any other storage, avoiding vendor hardware lock-in, ensuring maximum ROI on existing resources, and providing greater purchasing power in the future.
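
To make the pooling idea concrete, here is a minimal sketch in Python of how a virtualization layer might aggregate dissimilar devices behind one allocation interface. The class, method and device names are hypothetical illustrations, not DataCore's actual implementation:

    # Illustrative only: capacity from dissimilar devices is aggregated
    # into one virtual pool, and virtual disks are carved from the pool
    # without the caller knowing which physical device backs them.

    class PhysicalDevice:
        def __init__(self, name, capacity_gb):
            self.name = name
            self.capacity_gb = capacity_gb
            self.used_gb = 0

        def free_gb(self):
            return self.capacity_gb - self.used_gb

    class StoragePool:
        def __init__(self, devices):
            self.devices = devices  # any mix of vendors and models

        def provision(self, size_gb):
            """Allocate a virtual disk, spreading extents across devices."""
            extents, remaining = [], size_gb
            for dev in sorted(self.devices, key=lambda d: d.free_gb(), reverse=True):
                take = min(remaining, dev.free_gb())
                if take > 0:
                    dev.used_gb += take
                    extents.append((dev.name, take))
                    remaining -= take
                if remaining == 0:
                    return extents  # caller sees one disk; backing is pooled
            raise RuntimeError("pool exhausted")

    # Any storage can back any virtual disk, vendor regardless:
    pool = StoragePool([PhysicalDevice("dell-array", 500),
                        PhysicalDevice("hp-array", 300),
                        PhysicalDevice("cloud-tier", 1000)])
    print(pool.provision(600))

The point of the sketch is that the caller asks the pool, not a particular array, for capacity; which vendor's hardware ends up backing the virtual disk becomes an internal detail.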

What is a "true" Storage Hypervisor? It's a portable, centrally managed software package that enhances the value of multiple and dissimilar disk storage systems. It supplements these systems' individual capabilities with extended provisioning, replication, and performance acceleration services. Its comprehensive set of storage control and monitoring functions operates as a transparent virtual layer across consolidated disk pools to improve their availability and optimize speed and utilization.

A true Storage Hypervisor also provides important advanced storage management and intelligent features. For example, a critical Storage Hypervisor feature is automated tiering, which migrates data so that application workloads are optimally matched to the most cost-effective or performance-oriented hardware resources. Through this automated management capability, less-critical and infrequently accessed data is automatically stored on lower-cost disks, while more mission-critical data is migrated to faster, higher-performance storage and solid-state disks, whether those disks are located on premises or in the cloud. This enables organizations to keep demanding workloads operating at peak speeds while taking advantage of low-priced local storage assets or pay-as-you-go cloud storage. The Storage Hypervisor management layer makes it easy to incorporate new disk devices into existing data centers, providing enterprises a fast and easy on-ramp to cloud resources, among other benefits.
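
As a rough illustration of the tiering decision itself, the sketch below uses made-up thresholds and names; real products use far richer heuristics and migrate data non-disruptively. Frequently accessed blocks are promoted toward SSD, and cold blocks are demoted toward low-cost disk or cloud storage:

    # Illustrative auto-tiering sketch: tiers ordered fast to cheap.
    TIERS = ["ssd", "fast-disk", "capacity-disk", "cloud"]

    def choose_tier(accesses_per_day):
        # Hypothetical thresholds mapping access frequency to a tier.
        if accesses_per_day > 1000:
            return "ssd"
        if accesses_per_day > 100:
            return "fast-disk"
        if accesses_per_day > 1:
            return "capacity-disk"
        return "cloud"

    def rebalance(blocks):
        """blocks: dict of block_id -> (current_tier, accesses_per_day)."""
        moves = []
        for block_id, (tier, rate) in blocks.items():
            target = choose_tier(rate)
            if target != tier:
                moves.append((block_id, tier, target))  # migration job
        return moves

    # A hot block gets promoted; a cold one is pushed to the cloud tier.
    print(rebalance({"blk-7": ("capacity-disk", 5000),
                     "blk-9": ("ssd", 0.2)}))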

It's clearly time for storage to acquire these cloud-like characteristics. Clouds are, after all, supposed to be pliant and nimble -- and that is what we need our storage to be. The whole point of cloud computing is delivering cost-effective services to users. This requires the highest degree of flexibility and openness, as opposed to being boxed in by specific hardware that cannot adapt to change over time. That's the goal, and it is what is driving such interest in clouds. Hypervisors for virtual servers and desktops have mapped the way -- illustrating how portable software solutions can virtualize away complexity, constraint, and hardware-vendor lock-in. Only a Storage Hypervisor can do likewise for storage.

That's why 2012 will be the year storage goes virtual and the market learns Storage Hypervisors are the next level in flexible storage management. Already, they are being widely deployed and are enterprise-proven. A true Storage Hypervisor turns multiple, dissimilar, and static disk storage systems into a "what I want, where I need it, when I need it, without complexity" storage infrastructure. It gives our storage today what we've been looking for in the clouds of the future: a highly scalable, flexible infrastructure with real hardware interchangeability and the next level of virtual resource management. This is what is required to create virtual data centers or so-called "private clouds" and make practical the incorporation of external cloud services.

DataCore Software and Hitachi Team Together to Make IT Management Easier; Operations Analyzer for both physical and virtual IT assets

http://www.storagereview.com/datacore_and_hitachi_team_together_make_it_management_easier

The Hitachi IT Operations Analyzer, now integrated with DataCore's storage hypervisor, enables detailed monitoring of IT infrastructures, including those under the control of DataCore SANsymphony-V software. Through this tight integration, enterprises gain a holistic, single-pane view of their entire data center, including virtualized storage resources, servers, networks and applications. This gives IT administrators the ability to easily maintain high levels of infrastructure availability and performance, as well as to conduct fast problem diagnosis and resolution.

By unifying advanced services like provisioning, auto-tiering, replication and performance acceleration through a common storage virtualization layer, the DataCore SANsymphony-V storage hypervisor improves the value of diverse storage resources. Hitachi IT Operations Analyzer is an integrated availability and performance monitoring tool that reduces complexity, streamlines operations, and increases IT staff efficiency by monitoring the availability and performance of heterogeneous physical and virtual server stacks, applications, networks and storage devices through a single screen.

SANsymphony-V plugs into IT Operations Analyzer, which regularly requests inventory data and status updates from the storage hypervisor. The single monitoring screen of IT Operations Analyzer is then populated with the latest information, keeping IT administrators up to date with all relevant changes in resources. It also alerts them to potential problems that require proactive resolution.
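
The flow described above can be pictured as a simple polling loop. The following sketch is purely illustrative; the function names, fields and thresholds are assumptions, not Hitachi's or DataCore's actual interfaces:

    # Hypothetical monitoring loop: pull inventory and status from the
    # storage hypervisor, refresh the single screen, raise alerts.
    import time

    def fetch_inventory():
        # Stand-in for a management-API call to the storage hypervisor.
        return [{"resource": "pool-1", "state": "healthy", "used_pct": 62},
                {"resource": "pool-2", "state": "degraded", "used_pct": 91}]

    def poll(interval_seconds=60, cycles=1):
        for _ in range(cycles):
            for item in fetch_inventory():
                print(f"{item['resource']}: {item['state']}, "
                      f"{item['used_pct']}% used")  # console refresh
                if item["state"] != "healthy" or item["used_pct"] > 85:
                    print(f"ALERT: {item['resource']} needs attention")
            time.sleep(interval_seconds)

    poll(interval_seconds=0)  # one cycle, for demonstration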

DataCore’s SANsymphony-V storage hypervisor overcomes many existing issues by insulating users and applications from the constant upheaval in the underlying storage devices and physical plant where they reside. IT Operations Analyzer monitors all components that make up a dynamic, virtualized data center, while maintaining an inventory and collecting vital status and alerts.

With both SANsymphony-V and IT Operations Analyzer in place, administrators are now able to monitor every device in their data center infrastructure on a single screen, enabling them to quickly identify and resolve issues before they impact productivity.

The following are a few of the key benefits offered through the integration of SANsymphony-V and IT Operations Analyzer:

  • Fast Problem Diagnosis & Resolution: the dashboard of IT Operations Analyzer features proactive alerting and a root cause analysis (RCA) engine, enabling IT administrators to reduce the mean time to diagnose problems by up to 90 percent and to process large numbers of event alerts in minutes.
  • Isolating Bottlenecks, Elevating Performance: IT Operations Analyzer allows thresholds to be set on devices to prevent bottlenecks. Congestion points can also be isolated; for example, when a set of servers shows reduced performance, the single-pane console might reveal too many of them competing for the same resource. Administrators can then easily fix the issue by using SANsymphony-V to redistribute tasks (a sketch of this kind of threshold check follows the list).
  • Ease-of-Use: The integration of SANsymphony-V into IT Operations Analyzer provides visibility into storage assets that enables staff to identify issues without a high level of storage expertise and then resolve them before productivity suffers.
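
Here is a minimal sketch of the threshold idea from the second bullet, with hypothetical data and limits: when several servers sharing one resource all cross a latency threshold, that shared resource is flagged as the likely congestion point.

    # Illustrative threshold-based bottleneck detection (made-up data).
    from collections import Counter

    LATENCY_LIMIT_MS = 20  # hypothetical threshold set by the admin

    servers = [
        {"name": "srv-a", "backing_pool": "pool-1", "latency_ms": 35},
        {"name": "srv-b", "backing_pool": "pool-1", "latency_ms": 31},
        {"name": "srv-c", "backing_pool": "pool-2", "latency_ms": 4},
    ]

    # Count how many slow servers sit on each shared resource.
    slow_pools = Counter(s["backing_pool"] for s in servers
                         if s["latency_ms"] > LATENCY_LIMIT_MS)
    for pool, count in slow_pools.items():
        if count > 1:  # several servers contending for the same resource
            print(f"Bottleneck suspected on {pool}: {count} slow servers; "
                  f"consider redistributing load")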

Friday, 27 January 2012

Storage Magazine/SearchStorage.com Storage System Software: 2011 Products of the Year Finalists

http://searchstorage.techtarget.com/Storage-system-software-2011-Products-of-the-Year-finalists

DataCore Software Corp. SANsymphony-V R8.0 Storage Hypervisor

DataCore’s SANsymphony-V storage virtualization product added automated storage tiering, enhanced continuous data protection (CDP), faster asynchronous replication and an improved wizard-driven central console. The software runs on physical or virtual Windows servers and allows users to maintain, upgrade and expand their storage without disrupting applications.

Thursday, 26 January 2012

Prediction #6: Storage Hypervisors will make storage virtualization and Cloud storage practical.

Enterprise Strategy Group (ESG) recently authored a Market Report that I believe addresses the industry focus for the year ahead. In "The Relevance and Value of a ‘Storage Hypervisor'," it states "...buying and deploying servers is a pretty easy process, while buying and deploying storage is not. It's a mismatch of virtual capabilities on the server side and primarily physical capabilities on the storage side. Storage can be a ball and chain keeping IT shops in the 20th century instead of accommodating the 21st century."

While enterprises strive to get all they can from their hardware investments in servers, desktops and storage devices, a major problem persists - a data storage bottleneck. Ironically, even as vendors promote and sell software-based, end-to-end virtualization and Cloud solutions, too often the reaction to handling storage is to throw another costly hunk of hardware at the problem in the form of a new storage array or device.

The time has come to resolve the storage crisis, to remove the last bastion of hardware dependency and to allow the final piece of the virtualization puzzle to fall into place. Server hypervisors like VMware and Hyper-V have gone beyond the basics of creating virtual machines and have created an entire platform and management layer to make virtualization practical for servers, desktops and Clouds.

Likewise, it's time to become familiar with a component quickly gaining traction and proving itself in the field: the storage hypervisor.

A storage hypervisor is unique in its ability to provide an architecture that manages, optimizes and spans all the different price-points and performance levels of storage. Only a storage hypervisor enables full hardware interchangeability. It provides important, advanced features such as automated tiering, which relocates disk blocks of data among pools of different storage devices (even into the Cloud) - thereby keeping demanding workloads operating cost-efficiently and at peak speeds. In this way, applications requiring speed and business-critical data protection can get what they need, while less critical, infrequently accessed data blocks gravitate towards lower-cost disks or are transparently pushed to the Cloud for "pay as you go" storage.

I think ESG's Market Report stated it well:

"The concept of a storage hypervisor is not just semantics. It is not just another way to market something that already exists or to ride the wave of a currently trendy IT term...Organizations have now experienced a good taste of the benefits of server virtualization with its hypervisor-based architecture and, in many cases, the results have been truly impressive: dramatic savings in both CAPEX and OPEX, vastly improved flexibility and mobility, faster provisioning of resources and ultimately of services delivered to the business, and advances in data protection.

"The storage hypervisor is a natural next step and it can provide a similar leap forward."


DataCore announced the world's first storage hypervisor in 2011. We built it with feedback gained in the real world over the last decade from thousands of customers. We saw this advance as a natural, but necessary, step forward for an industry that has been fixated on storage hardware solutions for far too long. 2012 will be the year that true hardware interchangeability and auto-tiering move from "wish list" to "to-do list" for many companies ready to break the grip that storage hardware vendors have long held on them.

Monday, 23 January 2012

Prediction #5: "Big Data" will get the Hype, but "Small Data" will continue to be where the action is in 2012.

Yes, Big Data is getting all the attention, and yes, Big Data needs "Big Storage," so every storage vendor will make it the buzz for 2012. Big Data has many definitions, but what is obvious is that the amount of data being stored, and its rate of growth, continue to climb, and this data requires better solutions if it is to be cost-effectively managed. Analyst firm IDC believes the world's information is doubling every two years. By the end of 2011, according to IDC, the world will have created a staggering 1.8 zettabytes of data. By 2020, the world will generate 50 times that amount of data, and IT departments will have to manage 75 times the number of "information containers" housing this data.

Clearly, the largest companies managing petabytes, exabytes and beyond are the main focus of the talk, but the small and mid-size businesses that deal in terabytes of data comprise the vast majority of the real world. And it is these small-to-midsize companies that need practical data storage and management solutions TODAY. These business consumers can't afford to wait until tomorrow, nor can they afford to throw out their existing storage investments. Rather, they need solutions that build on the devices they currently have installed and make those more efficient, highly available and easier to manage.

Software technologies such as thin provisioning, auto-tiering, storage virtualization and storage hypervisors that empower users to easily manage all the storage assets they require - whether located on-premises, at a remote site or in the Cloud - will be key enablers in 2012. Big Data will be the buzz and will drive many new innovations. Big Data will also benefit greatly from these same software-based enablers, but the Small Data opportunity is extremely large, and that is where I'm betting the real action will be in 2012.
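
Thin provisioning, the first technology in that list, is easy to picture in a few lines. In this minimal sketch (illustrative names, with a 4 MB block size assumed), a volume advertises its full logical size up front but consumes physical capacity only as blocks are actually written:

    # Illustrative thin provisioning: promise big, allocate on write.
    class ThinVolume:
        BLOCK_MB = 4  # assumed allocation unit

        def __init__(self, logical_size_gb):
            self.logical_size_gb = logical_size_gb  # what the host sees
            self.allocated = set()                  # blocks actually backed

        def write(self, block_index):
            self.allocated.add(block_index)         # allocate on first write

        def physical_usage_mb(self):
            return len(self.allocated) * self.BLOCK_MB

    vol = ThinVolume(logical_size_gb=1000)          # promise 1 TB...
    for block in range(10):
        vol.write(block)
    print(vol.physical_usage_mb(), "MB actually consumed")  # ...use 40 MB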

Thursday, 19 January 2012

Prediction #4: Software will take center stage for storage in 2012 empowering users to a new level of hardware interchangeability and commodity-based "buying power."

"Throw more hardware at the problem" is still the storage vendor mantra; however, the growth rate and the complexity of managing storage are changing the model. The economics of "more hardware" doesn't work. Virtualization and Clouds are all about software. With the high-growth rates in storage, virtualization and Cloud computing, it is becoming increasingly clear that a hardware-bound scale-up model of storage is impractical. The "hardware mindset" restrains efficiency and productivity, while software that enables hardware interchangeability advances these critical characteristics. The hardware model goes against the IT trends of commoditization, openness and resource pooling, which have driven the IT industry over the last decade. Software is the key to automating and to increasing management productivity, while adding the flexibility and intelligence to harness, pool and leverage the full use of hardware investments.

As the world moves to Cloud-based, to virtualization-based and to "Big Data" environments, software models that allow for hardware interchangeability, open market purchasing and better resource management for storage will be the big winners.

Wednesday, 18 January 2012

Prediction #3: The real world is not 100% virtual. Going virtual is a "hot topic," but the real world requires software that transcends both the virtual and physical worlds.

Even VMware, the leading virtualization company in the world, along with the most ardent virtualization supporters in the analyst community, predicts that in 2012 over 50% of x86-architecture server workloads will be virtual. Yet different reports point out that only 25% of small businesses have adopted virtualization, and others highlight the many challenges of virtualizing mission-critical applications, new special-purpose devices and legacy systems. The key point is that the world is not 100% virtual and the physical world cannot be ignored.

I find it interesting to note the large number of new vendors that have jumped squarely on the virtualization trend and have designed their solutions solely to address the virtual world. Most do not, therefore, deal with managing physical devices or support migrating from one device type to another, or support migrating back and forth between physical and virtual environments. Some nouveau virtual vendors go further and make simplifying assumptions akin to those of theoretical physicists - disregarding real-world capabilities like Fibre Channel and assuming the world is tidy because all IT infrastructures operate in a virtual world using virtual IP traffic. These virtualization-only vendors tend to speak about an IT nirvana in which everyone and everything that is connected to this world is virtual, open and tidy - devoid of the messy details of the physical world. Does this sound anything like your IT shop?

Most IT organizations have, and will have for many years to come, a major share of their storage, desktops and a good portion of their server infrastructure running on physical systems or on applications that are not virtualized. This new "virtual is all you need" breed of vendors clearly does not want you to think about your existing base of systems or those strange Fibre Channel-connected, UNIX or NetWare systems running in the shadows. All the virtual upstarts have a simple solution - buy all new and go totally virtual. But this is not the real world most of us live in.

Virtualization solutions must work and deliver a unified user experience across both virtual and physical environments. Solutions that can't deal with the physical device world do not work in the real world where flexibility, constant change, and migrations are the norm. While those solutions that "do" virtual will be "hot," I predict those that can encompass the broad range of physical and virtual worlds will be even "hotter."

Prediction #2: Hybrid data storage environments will need storage virtualization and auto-tiering to combine Cloud storage with existing storage.

Cloud storage has already become a viable option for businesses that don't have the room to add storage devices or are looking for "pay-as-you-go" storage for less critical storage needs, backups and archiving. Industry analysts have already proclaimed that Cloud gateways and heterogeneous storage virtualization solutions combined with auto-tiering functionality can provide a seamless transition path to the Cloud that preserves existing storage investments.

For most companies, the notion of moving all of their data to the Cloud is inconceivable. However, continuously expanding data storage requirements are fueling a need for more capacity. One way to address this growth is to include Cloud storage in the mix. The benefits of Cloud storage are numerous: it can provide virtually limitless access to storage capacity, obviate the need for device upgrades or equipment replacements, and reduce capital expenses. Look for continued advances in auto-tiering and storage virtualization technologies to seamlessly combine hybrid Cloud and on-premises environments in a way that operates with existing applications.
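
One way to picture such a hybrid arrangement: a single virtual volume fronts both an on-premises tier and a cloud tier, so existing applications keep a single target while capacity overflows transparently to pay-as-you-go cloud storage. The sketch below is a hypothetical illustration, not any vendor's gateway implementation:

    # Illustrative hybrid volume: local disk first, cloud for overflow.
    class LocalTier:
        def __init__(self, capacity_blocks):
            self.capacity_blocks = capacity_blocks
            self.data = {}

        def has_room(self):
            return len(self.data) < self.capacity_blocks

    class CloudTier:
        def __init__(self):
            self.data = {}  # stand-in for an object-storage bucket

    class HybridVolume:
        def __init__(self, local, cloud):
            self.local, self.cloud = local, cloud

        def write(self, block, payload):
            # Prefer local disk; overflow transparently to the cloud tier.
            tier = self.local if self.local.has_room() else self.cloud
            tier.data[block] = payload

        def read(self, block):
            # The application never knows which tier answered.
            return self.local.data.get(block) or self.cloud.data.get(block)

    vol = HybridVolume(LocalTier(capacity_blocks=2), CloudTier())
    for i in range(4):
        vol.write(i, f"payload-{i}")
    print(vol.read(0), vol.read(3))  # one from disk, one from the cloud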

Tuesday, 17 January 2012

Prediction #1: SSD cost trade-offs will drive software advances. Auto-tiering software that spans all storage including flash memory devices and SSDs will become a "must-have" in 2012.

The drop in cost of SSDs (solid state drives) and flash memories has already had a sizable impact on IT organizations, and this will continue in 2012. The latest models of storage systems incorporate these innovations and tout high performance and high availability, but they remain beyond an acceptable price point for most companies. In addition to price, the useful lifetimes of SSDs and flash memory are affected by the amount of write traffic, so these devices need to be monitored and protected to avoid potential data loss.

The big driver for SSDs is the need for greater performance, but performance needs do not apply equally to all data. In fact, the majority of data, on average 90%, can reside on low-cost archives or mid-tier storage. Meanwhile, the major storage vendors continue to implore us to throw new hardware systems and more SSDs at the problem because they want to sell higher-priced systems. These innovations are great, but they must be applied wisely. To get the most value out of these expensive devices, software that can protect them, optimally manage their utilization and minimize write traffic is now a "must-have."
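
To see why software can stretch SSD lifetimes, consider write coalescing, one common technique (sketched here with illustrative names; this is not any particular vendor's algorithm): repeated writes to the same hot block are absorbed in RAM so that only the final version reaches the flash device.

    # Illustrative write coalescing: absorb rewrites in RAM, flush once.
    class CoalescingWriteCache:
        def __init__(self):
            self.pending = {}      # block -> latest payload (in RAM)
            self.flash_writes = 0  # writes that actually hit the SSD

        def write(self, block, payload):
            self.pending[block] = payload  # overwrite in RAM, no SSD wear

        def flush(self):
            # One flash write per dirty block, no matter how many times
            # each block was updated in RAM.
            self.flash_writes += len(self.pending)
            self.pending.clear()

    cache = CoalescingWriteCache()
    for _ in range(100):           # 100 updates to one hot block...
        cache.write("blk-0", "new value")
    cache.flush()
    print(cache.flash_writes, "write(s) reached the SSD")  # ...1 flash write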

In 2012, businesses will gain a better understanding of why software is needed to expand the range of practical use cases for SSDs and to automate when, where and how best to deploy and manage these devices. Auto-tiering and storage virtualization software is critical to cost-effectively optimizing the full utilization of flash memory and SSD-based technologies – an important element within the larger spectrum of storage devices that need to be fully integrated and managed within today's dynamic storage infrastructures.

Saturday, 14 January 2012

DataCore Software 2012 Predictions: Storage Hypervisors, Virtualization, SSDs, "Big Data," Clouds and Their Real World Impacts

Storage has been slow to adopt change and still remains one of the most static elements of today’s IT infrastructures. I believe this is due to a history of storage being driven from a hardware mindset. The storage industry, for the most part, has been controlled by a few major vendors who have been resistant to disruptive technologies, especially those that can impact their high profit margins. However, the time is ripe for a major shift and some disruption. An understanding of how software liberates resources from devices and a perfect storm of forces – success of server virtualization, new Cloud models, increasing data growth, greater complexity and unsustainable buying practices – are driving a mind-shift and forcing real changes to happen at a much faster pace. Therefore, it is a great time to be at the helm of a storage software company, and I am pleased to share my personal observations and six predictions for the New Year in a series of blog posts; the first directly follows this post.

Tuesday, 10 January 2012

DataCore’s SANsymphony-V 8.1 Chosen by Virtualization Review’s Readers as a 2012 Readers Choice Preferred Product Award Winner

http://virtualizationreview.com/articles/2012/01/03/2012-buyers-guide.aspx#StorageVirt

Storage Virtualization
Based on the strength of its storage hypervisor approach, DataCore SANsymphony-V 8.1 was chosen as a 2012 Virtualization Review Readers Choice Preferred Product Award winner for storage virtualization. EMC, NetApp and DataCore Software were the top three chosen by the readership.

Tuesday, 3 January 2012

DataCore Software Celebrates New Year By Offering Free Storage Hypervisor Software To Microsoft Certified And System Center Professionals

DataCore Software is welcoming the new year by offering free license keys for its SANsymphony™-V storage hypervisor to Microsoft Certified and System Center professionals. The not-for-resale (NFR) license keys – which may be used for non-production purposes such as course development, proofs of concept, training, lab testing and demonstrations – are intended to support virtualization consultants, instructors and architects involved in managing and optimizing storage within private clouds, virtual server and VDI deployments. DataCore is making it easy for these certified professionals to benefit directly and learn for themselves the power of this innovative technology – the storage hypervisor – and its ability to redefine storage management and efficiency.

To receive a free license key, please sign up at: http://pages.datacore.com/nfr-for-experts.html.

The SANsymphony-V storage hypervisor is a portable, centrally managed software suite capable of enhancing the combined value of multiple disk storage systems, including the many purpose-built storage appliances and solid-state disk (SSD) devices arriving on the market daily. The storage hypervisor supplements the individual capabilities of specialized equipment with a broad range of device-independent, integrated services. The SANsymphony-V software executes on physical and virtual servers or can co-reside with server hypervisors. Other major features include, but are not limited to, "quick serve" storage provisioning, automated tiering across SSDs, disk devices and different cloud storage providers, and continuous data protection (CDP).

The free NFR license keys for SANsymphony-V are available for the new year; the offer expires on January 31, 2012. The licenses may be used for non-production purposes only, and the software can be installed on both physical servers and Hyper-V virtual machines as a virtual storage appliance.

Proof of certification as a Microsoft Certified Architect (MCA), Microsoft Certified Master (MCM), Microsoft Certified IT Professional (MCITP), Microsoft Certified Professional Developer (MCPD) or Microsoft Certified Technology Specialist (MCTS) is required.