Information, commentary and updates from Australia / New Zealand on virtualization, business continuity solutions, FC SAN, iSCSI, high availability, remote replication, disaster recovery, storage virtualization and SAN management solutions.
Monday, 14 December 2015
DataCore Certified for SAP HANA, first Software-defined Storage certified to operate across multiple vendors
We are pleased to announce the certification of SANsymphony™-V with the SAP HANA® platform. DataCore™ SANsymphony-V is storage infrastructure software that operates across multiple vendors’ storage systems to deliver the performance and availability required by demanding enterprise-class applications such as SAP HANA.
What is SAP HANA?
The SAP HANA in-memory database lets organizations process vast amounts of transactional, analytical and application data in real-time using a computer’s main memory. Its platform provides libraries for predictive, planning, text processing, spatial and business analytics.
Key Challenges for SAP HANA implementation:
SAP HANA demands a storage infrastructure that can process data at unprecedented speed and has zero tolerance for downtime. Most organizations store entire SAP HANA multi-terabyte production systems on high-performance Tier 1 storage to meet the performance required during peak processing cycles, such as "period end" or seasonal demand spikes. This practice presents the following challenges to IT departments:
- Tier 1 storage is expensive to deploy and significantly impacts the IT budget.
- Tier 1 storage is bound by its physical constraints when it comes to data availability, staging, reporting, and test and development.
- Managing multiple storage systems (existing and new) can add considerable cost and complexity; routine tasks like test/dev and reporting are difficult to manage.
Benefits of DataCore
DataCore SANsymphony-V is the first Software-defined Storage solution that is certified to operate across multiple vendors’ SAP HANA-certified storage systems to deliver the performance and availability required. DataCore SANsymphony-V software provides the essential enterprise-class storage functionality needed to support the real-time applications offered by the SAP HANA® platform.
With DataCore, SAP HANA customers gain:
- Choice: Companies have the choice of using existing and/or new SAP HANA certified storage systems, with the ability to seamlessly manage and scale their data storage architectures as well as giving them more purchasing power (no vendor lock-in)
- Performance: Accelerate I/O with the DataCore™ Adaptive Parallel I/O architecture as well as caching to take full advantage of SAP HANA in-memory capabilities to transform transactions, analytics, text analysis, predictive and spatial processing.
- Cost-efficiency: DataCore reduces the amount of Tier 1 storage space needed, and makes the best use of lower cost persistent HANA-certified storage.
DataCore SANsymphony-V infrastructure software is the only SAP HANA-certified SDS solution that can be used together with an SAP-certified storage solution from Fujitsu, Huawei, IBM, Dell, NEC, Nimble Storage, Pure Storage, Fusion-io, Violin Memory, EMC, NetApp, HP and Hitachi.
Wednesday, 9 December 2015
Making Data Highly Available on Flash and DRAM
George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software discuss how DataCore's Software-Defined Storage solution takes advantage of flash and DRAM technologies to provide high availability and the right performance for your applications.
How Software-Defined Storage Enhances Hyper-converged Storage
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
The following white paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of software-defined storage as a solution to provide reliable application performance and a highly available infrastructure.
Read the white paper here: http://info.datacore.com/How-Software-Defined-Storage-Enhances-Hyper-converged-Solutions
Friday, 4 December 2015
Software Defined Storage meets Parallel I/O; The impact on Hyperconvergence
In terms of storage performance, the drive itself is no longer the bottleneck. Thanks to flash storage, attention has turned to the hardware and software that surround it, especially the capabilities of the CPU that drives the storage software. The importance of CPU power is evident in the increase in overall storage system performance when an all-flash array vendor releases a new storage system: the flash media in that system doesn't change, but overall performance does increase. Yet that increase in performance is not as great as it could be, because the storage software does not take advantage of the parallel nature of the modern CPU.
Moore’s Law Becomes Moore’s Suggestion
Moore's Law is an observation by Intel co-founder Gordon Moore. The simplified version of this law states that the number of transistors on a chip will double every two years. IT professionals assumed that meant the CPU they buy would get significantly faster every two years or so. Traditionally, this meant that the clock speed of the processor would increase, but recently Intel has hit a wall because increasing clock speeds also led to increased power consumption and heat problems. Instead of increasing clock speed, Intel has focused on adding more cores per processor. The modern data center server has essentially become a parallel computer.
Multiple cores per processor are certainly an acceptable method of increasing performance and continuing to advance Moore's Law. Software, however, does need to be re-written to take advantage of this new parallel computing environment. Operating systems, application software and of course storage software all require parallelization. Re-coding software to make it parallel is challenging. The key is to manage I/O timing and locking, which makes multi-threading a storage application more difficult than, say, a video rendering project. As a result, it has taken time to get to the point where the majority of operating systems and application software have some flavor of parallelism.
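To make the timing and locking concern concrete, here is a minimal sketch in Python (purely illustrative, not drawn from any vendor's code) of a storage service that fans incoming I/O requests out across a pool of worker threads while a lock protects the shared block map:

# Minimal sketch (illustrative only, not any vendor's code): incoming I/O
# requests are spread across a pool of worker threads, while a lock protects
# the shared block map -- the timing/locking concern described above.
import threading
from concurrent.futures import ThreadPoolExecutor

block_map = {}                      # shared metadata: logical block -> data
block_map_lock = threading.Lock()

def handle_io(request):
    op, lba, data = request
    if op == "write":
        with block_map_lock:        # serialize only the metadata update
            block_map[lba] = data
        return ("ok", lba)
    with block_map_lock:            # reads also see a consistent map
        return ("ok", block_map.get(lba))

requests = [("write", i, b"x" * 512) for i in range(1000)] + \
           [("read", i, None) for i in range(1000)]

with ThreadPoolExecutor(max_workers=8) as pool:   # e.g. one worker per core
    results = list(pool.map(handle_io, requests))

print(len(results), "I/O requests completed")

The hard part in real storage software is keeping the locked regions small enough that the workers rarely wait on each other, which is exactly why parallelizing I/O code is harder than parallelizing an embarrassingly parallel job like video rendering.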
Lagging far behind in the effort to take full advantage of the modern processor is storage software. Most storage software, whether built into the array or part of the new crop of software-defined storage (SDS) solutions, is unable to exploit the wide availability of processing cores. It is primarily single core: at worst it uses only one core per processor, at best one core per function. If cores are thought of as workers, it is best to have all the workers available for all the tasks, rather than each worker focused on a single task.
Why Cores Matter
Using cores efficiently has only recently become important. Most legacy storage systems were hard drive based, lacking advanced caching or flash media to drive performance. As a result, the need to exploit the multi-core environment efficiently was not as obvious as it is now that systems hold a higher percentage of flash storage: the lack of multi-core performance was overshadowed by the latency of the hard disk drive. Flash and storage response time is just one side of the I/O equation. On the other side, the data center is now populated with highly dense virtual environments or, even more contentious, hyper-converged architectures. Both of these environments generate a massive amount of random I/O that, thanks to flash, the storage system should be able to handle very quickly. The storage software is the interconnect between the I/O requester and the I/O deliverer, and if it cannot efficiently use all the cores at its disposal, it becomes the bottleneck.
All storage systems that leverage Intel CPUs face the same challenge: how to leverage CPUs that are increasing in cores, but not in raw speed. In other words, they don't perform a single process faster, but they do perform multiple processes simultaneously at the same speed, netting a faster overall completion time if the cores are used efficiently. Storage software needs to adapt and become multi-threaded so it can distribute I/O across all available cores and take full advantage of them.
For most vendors this may mean a complete re-write of their software, which takes time and effort and risks incompatibility with their legacy storage systems.
How Vendors Fake Parallel I/O
Vendors have used several techniques to leverage the reality of multiple cores without specifically "parallelizing" their code. Some storage system vendors tie specific cores to specific storage processing tasks. For example, one core may handle raw inbound I/O while another handles RAID calculations. Other vendors distribute storage processing tasks in a round-robin fashion. If cores are thought of as workers, this technique treats cores as individuals instead of a team: as each task comes in it is assigned to a core, but only that core can work on that task. If it is a big task, it gets no help from the other cores. While this technique does distribute the load, it doesn't allow multiple workers to work on the same task at the same time. Each core has to do its own heavy lifting.
Scale-out storage systems are similar in that they leverage multiple processors within each node of the storage cluster, but they are not granular enough to assign multiple cores to the same task. They, like the systems described above, typically have a primary node that acts as a task delegator and assigns each I/O to a specific node, and that node alone handles storing the data and managing data protection.
These designs count on I/O coming from multiple sources so that each discrete I/O stream can be processed by one of the available cores. Such systems will claim very high IOPS numbers, but they require multiple applications to get there. They work best in an environment that needs a million IOPS because it runs ten workloads each generating 100,000 IOPS, rather than an environment with one workload generating 1 million IOPS and no other workload above 5,000. To some extent vendors also "game" the benchmark by varying I/O size and patterns (random vs. sequential) to achieve a desired result. The problem is that this I/O is not the same as what customers will see in their data centers.
The Impact of True Parallel I/O
True parallel I/O utilizes all the available cores across all the available processors. Instead of siloing a task to a specific core, it assigns all the available cores to all the tasks. In other words, it treats the cores as members of a team. Parallel I/O storage software works well in either type of workload environment: ten workloads each generating 100,000 IOPS, or one workload generating 1 million IOPS.
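The difference between the two models can be sketched in a few lines of Python (a hypothetical toy, not DataCore's software): in the "individual worker" model a heavy stream is pinned to one thread, while in the "team" model every worker drains the same shared queue, so even a single workload benefits from all the cores.

# Toy comparison (hypothetical code, not DataCore's software): pinning each
# workload to one worker versus letting every worker drain a shared queue.
import queue
import threading
import time

NUM_WORKERS = 4

def process(io):
    time.sleep(0.001)              # stand-in for servicing one I/O request

def pinned_model(streams):
    # Each stream gets exactly one worker: a single busy stream can never
    # use more than one core.
    threads = [threading.Thread(target=lambda s=s: [process(io) for io in s])
               for s in streams]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def shared_queue_model(streams):
    # All workers pull from one shared queue: even a single heavy stream
    # is spread across every available core.
    q = queue.Queue()
    for s in streams:
        for io in s:
            q.put(io)
    def worker():
        while True:
            try:
                process(q.get_nowait())
            except queue.Empty:
                return
    threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

one_heavy_stream = [list(range(400))]          # one workload, 400 I/Os
start = time.time()
pinned_model(one_heavy_stream)
print("pinned to one core: %.2fs" % (time.time() - start))
start = time.time()
shared_queue_model(one_heavy_stream)
print("shared work queue:  %.2fs" % (time.time() - start))

Running the toy shows the pinned model taking roughly four times longer on the single heavy stream, which is the "one workload generating 1 million IOPS" case the article describes.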
Parallel I/O is a key element in powering the next generation data center because the storage processing footprint can be dramatically reduced and can match the reduced footprint of solid-state storage and server virtualization. Parallel I/O provides many benefits to the data center:
- Full Flash Performance
As stated earlier, most flash systems show improved performance when more processing power is applied to the system. Correctly leveraging cores with multi-threading delivers the same benefit without having to upgrade processing power. If the storage software is truly parallel, it can deliver better performance with less processing power, which drives costs down while increasing scalability.
- Predictable Hyper-Converged Architectures
Hyper-converged architectures are increasing in popularity thanks to available processing power at the compute tier. Hypervisors do a good job of utilizing multi-core processors. The problem is that a single-threaded storage software component becomes the bottleneck. Often the key element of hyper-convergence, the storage software, is isolated to one core per hyper-converged node. That core can be overwhelmed by a performance spike, leading to inconsistent performance that could impact the user experience. Also, to service many VMs and critical business applications, these architectures typically need to throw more and more nodes at the problem, undermining the productivity and cost-saving benefits of consolidating more workload on fewer servers. Storage software that is parallel can leverage and share multiple cores in each node. The result is more virtual machines per host, fewer nodes to manage and more consistent storage I/O performance even under load.
- Scale Up Databases
While they don't get the hype of modern NoSQL databases, traditional scale-up databases (e.g., Oracle, Microsoft SQL Server) are still at the heart of most organizations. Because the I/O stream comes from a single application, these databases don't generate enough independent parallel I/O to be distributed across specific cores. The parallel I/O software's ability to make multiple cores act as one is critical for this type of environment. It allows scale-up environments to scale further than ever.
Conclusion
The data center is becoming increasingly dense; more virtual machines are stacked on virtual hosts, legacy applications are expected to support more users per server, and more IOPS are expected from the storage infrastructure. While the storage infrastructure now has the right storage media (flash) in place to support the consolidation of the data center, the storage software needs to exploit the available compute power. The problem is that compute power is now delivered via multiple cores per processor instead of a single faster processor. Storage software with parallel I/O will be able to take full advantage of this processor reality and support these dense architectures with a storage infrastructure that is equally dense.
Wednesday, 2 December 2015
Continuous Data Protection is a Key Component of Data Availability Architecture
George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software highlight how Continuous Data Protection works and how it’s making a difference for businesses by protecting them from hardware and logical failures.
Continuous Data Protection (CDP) and Recovery: An Undo Button for Your Data
Threats to data abound in today's electronic world. Whatever the cause of the damage, the modification to the data is undesirable and needs to be undone. Continuous Data Protection (CDP) delivers one-second granularity on rollbacks and provides the best Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of any data protection solution.
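As a rough illustration of the idea (a toy model only, not DataCore's journal format), CDP can be thought of as an append-only, time-stamped write journal, where a rollback simply replays the journal up to the chosen restore point:

# Toy model of continuous data protection (illustrative only): every write
# is journaled with a timestamp, and a rollback rebuilds the volume by
# replaying the journal up to the chosen restore point.
import time

journal = []                              # list of (timestamp, lba, data)

def cdp_write(lba, data):
    journal.append((time.time(), lba, data))

def rollback(restore_point):
    """Return the volume image as it existed at restore_point."""
    volume = {}
    for ts, lba, data in journal:         # journal is in chronological order
        if ts > restore_point:
            break                         # ignore everything after the damage
        volume[lba] = data
    return volume

cdp_write(0, b"good data")
restore_point = time.time()               # last known-good moment
time.sleep(0.01)
cdp_write(0, b"corrupted by ransomware")  # the undesirable modification

print(rollback(restore_point)[0])         # -> b'good data'

The finer the journal's time granularity, the closer the recovery point can sit to the moment just before the damage, which is what the one-second rollback granularity above refers to.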
Click here for more information on Continuous Data Protection and Recovery.
Friday, 13 November 2015
DataCore's Storage Virtualization Enables Any-to-Any Data Mirroring
George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software discuss how DataCore's Software-Defined Storage solution allows businesses to mirror their data across heterogeneous storage.
Synchronous Mirroring: Zero Downtime, Zero Touch High Availability
DataCore offers an enterprise-class high availability solution that is designed to avoid equipment and site outages from interrupting access to critical information flow. The goal is to deliver “zero downtime, zero touch” failover to maximize business continuity. The cost-disruptive software, proven in over 25,000 global deployments, constantly mirrors data at high speeds between geographically separate locations. It keeps active-active copies synchronized even when using different types of storage devices at each end. Local and stretched/metro clusters perceive the independent, mirrored copies as a single set of data, simultaneously reachable from either location over redundant paths.
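The core guarantee can be illustrated with a simplified sketch (hypothetical Python, not DataCore's actual mirroring protocol): a write is acknowledged only after both copies have committed it, so an acknowledged write survives the loss of either side.

# Simplified sketch of synchronous mirroring (illustrative only): a write is
# acknowledged only after BOTH copies commit, so either side can fail
# without losing acknowledged data.
class MirroredVolume:
    def __init__(self, site_a, site_b):
        self.copies = [site_a, site_b]    # heterogeneous back-ends are fine

    def write(self, lba, data):
        for copy in self.copies:          # commit to both sides first
            copy[lba] = data
        return "ack"                      # acknowledge only after both commit

    def read(self, lba):
        # Either copy can serve the read over redundant paths.
        return self.copies[0].get(lba, self.copies[1].get(lba))

site_a, site_b = {}, {}                   # stand-ins for two storage arrays
vol = MirroredVolume(site_a, site_b)
vol.write(7, b"payroll record")
site_a.clear()                            # simulate losing one site
print(vol.read(7))                        # -> b'payroll record'

The trade-off of waiting for both commits is added write latency over distance, which is why synchronous mirroring is typically used within metro distances.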
For more information on DataCore's Synchronous Mirroring, see http://datacore.com/products/features/Sync-Mirroring-High-Availablility.aspx
Monday, 2 November 2015
DataCore Certifies Universal VVols - Brings VVols Benefits to Existing Storage and to Non-VVol Certified Storage
Universal Virtual Volumes: Extends VMware’s VVOL Benefits to Storage Systems that do not Support it
Many VMware administrators crave the power and fine-grain control promised by vSphere Virtual Volumes (VVOLs). However, most current storage arrays and systems do not support it. Manufacturers simply cannot afford to retrofit equipment with the new VM-aware interface.
DataCore offers these customers the chance to benefit from VVOLs on EMC, IBM, HDS, NetApp and other popular storage systems and all flash arrays simply by layering SANsymphony™-V storage virtualization software in front of them. The same is true for direct-attached storage (DAS) pooled by the DataCore™ Hyper-converged Virtual SAN.
Now, vSphere administrators can self-provision virtual volumes from virtual storage pools -- they specify the capacity and class of service without having to know anything about the hardware.
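As a purely hypothetical illustration of what "capacity plus class of service" provisioning means (this is not the vSphere or SANsymphony-V API), the request an administrator makes could look something like this:

# Hypothetical illustration only (not the vSphere or DataCore API): the
# administrator states capacity and class of service; the storage layer
# picks the hardware from its virtual pools.
storage_pools = [
    {"name": "pool-flash", "class": "gold",   "free_gb": 2048},
    {"name": "pool-disk",  "class": "bronze", "free_gb": 8192},
]

def provision_virtual_volume(capacity_gb, service_class):
    for pool in storage_pools:
        if pool["class"] == service_class and pool["free_gb"] >= capacity_gb:
            pool["free_gb"] -= capacity_gb
            return {"volume": f"vvol-{pool['name']}-{capacity_gb}gb",
                    "backing_pool": pool["name"]}
    raise RuntimeError("no pool satisfies the requested class of service")

# The vSphere admin never names an array, LUN or RAID group:
print(provision_virtual_volume(capacity_gb=100, service_class="gold"))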
DataCore SANsymphony-V wins Reader’s Choice Award for Best Software Defined Storage Solution
Augsburg/Unterföhring, 30 October 2015. The readers of Storage-Insider.de and IT-BUSINESS have decided: DataCore wins the Reader's Choice Award in the Software Defined Storage category. In the big readers' poll run by Vogel IT Medien, DataCore and its SANsymphony-V software prevailed in the final against solutions from Dell, IBM and FalconStor. The Platinum award was presented at a gala ceremony in Augsburg on 29 October.
"We are particularly pleased and proud to receive this award because this choice was made by the readers of some of the most important publications in our industry. IT professionals have selected SANsymphony-V as the best Software Defined Storage solution," says Stefan von Dreusche, Director Central Europe at DataCore Software.
Thursday, 29 October 2015
Real-World Customers Validate DataCore's Data Availability Strategy
George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software share examples of customers who are using DataCore's Software-Defined Storage solution as an integral part of their data availability strategy as well as some of the software's unique capabilities to achieve High Availability.
Maimonides Medical Center, based in Brooklyn, N.Y., is the third-largest independent teaching hospital in the U.S. The hospital has more than 800 physicians relying on its information systems to care for patients around-the-clock. Click the link below to see DataCore Software interview Gabriel Sandu and Walter Fahey of Maimonides Medical Center about why they chose DataCore SANsymphony-V as their software-defined storage solution.
http://www.datacore.com/testimonials/video-testimonials/maimonides
Thursday, 22 October 2015
Virtualization Review: Back to the Future in Virtualization and Storage – A Real Comeback, Parallel I/O by DataCore
"It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back."
What's old is new again. Marty McFly would get it. https://virtualizationreview.com/articles/2015/10/21/back-to-the-future-in-virtualization-and-storage.aspx
If you're on social media this week, you've probably had your fill of references to Back to the Future, the 1980s sci-fi comedy much beloved by those of us who are now in our 50s, and the many generations of video watchers who have rented, downloaded or streamed the film since. The nerds point out that the future depicted in the movie, as signified by the date on the time machine clock in the dashboard of a DeLorean, is Oct. 21, 2015. That's today, as I write this piece…
Legacy Storage Is Not the Problem
If you stick with x86 and virtualization, you may be concerned about the challenges of achieving decent throughput and application performance, which your hypervisor vendor has lately been blaming on legacy storage. That is usually a groundless accusation. The problem is typically located above the storage infrastructure in the I/O path; somewhere at the hypervisor and application software operations layer.
To put it simply, hypervisor-based computing is the last expression of sequentially-executing workload optimized for the unicore processors introduced by Intel and others in the late 70s and early 80s. Unicore processors, with their transistor counts doubling every 24 months (Moore's Law) and their clock speeds doubling every 18 months (House's Hypothesis), created the PC revolution and defined the architecture of the servers we use today. All applications were written to execute sequentially, with some interesting time slicing created to give the appearance of concurrency and multi-threading.
This model is now reaching end of life. We ran out of clock speed improvements in the early 2000s and unicore chips became multicore chips with no real clock speed improvements. Basically, we're back to a situation that confronted us way back in the 70s and 80s, when everyone was working on parallel computing architectures to gang together many low performance CPUs for faster execution.
A Parallel Comeback
Those efforts ground to a halt with unicore's success, but now, with innovations from oldsters who remember parallel, they're making a comeback. As soon as Storage Performance Council audits some results, I'll have a story to tell you about parallel I/O and the dramatic improvements in performance and cost that it brings to storage in virtual server environments. It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.
Friday, 16 October 2015
Achieving Universal Management and Control Across Heterogeneous Storage Infrastructure
In this video, George Teixeira, CEO & President at DataCore Software, highlights the key criteria decision makers should use to make an intelligent storage infrastructure choice.
SANsymphony™-V10 is DataCore's flagship 10th generation storage virtualization solution. In use at over 10,000 customer sites, it maximizes IT infrastructure performance, availability and utilization by virtualizing storage hardware. For more details on SANsymphony-V10, click here.
Thursday, 15 October 2015
VMworld Europe 2015: DataCore Showcases Universal VMware VVOL Support and Revolutionary Parallel I/O Software
New Hyper-Converged Reference Architectures, VM-aware VVOL Provisioning, Rapid vSphere Deployment Wizards and Virtual Server Performance Breakthroughs Also on Display
In Barcelona, Spain, DataCore Software, a leader in Software-Defined Storage and Hyper-converged Virtual SANs, is showcasing its adaptive parallel I/O software to VMware customers and partners at VMworld Europe 2015. The revolutionary software technology uniquely harnesses today's multi-core processing systems to maximize server consolidation, cost savings and application productivity by eliminating the major bottleneck holding back the IT industry – I/O performance. DataCore will also use the backdrop of VMworld Europe 2015 to debut "Proven Design" reference architectures, a powerful vSphere deployment wizard for hyper-converged virtual SANs and the next update of SANsymphony™-V Software-Defined Storage, which extends vSphere Virtual Volumes (VVOLs) support to new and already installed flash and disk-based storage systems lacking this powerful capability.
"The combination of ever-denser multi-core processors with
efficient CPU/memory designs and DataCore’s adaptive parallel I/O software
creates a new class of storage servers and hyper-converged systems that change
the math of storage performance...and not by just a fraction,” said DataCore
Chairman Ziya Aral. “As we begin to publish ongoing real-world performance
benchmarks in the very near future, the impact of this breakthrough will become
very clear."
At booth #S118, DataCore's technical staff will discuss the state-of-the-art techniques used to accelerate performance and achieve the much greater VM densities needed to respond to the demanding I/O needs of enterprise-class, tier-1 applications. DataCore will highlight performance optimizations for intense data processing and I/O workloads found in online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses. These breakthroughs have proven most valuable in the mission-critical line-of-business applications based on Microsoft SQL Server, SAP and Oracle databases that are at the heart of every major enterprise.
Universal Virtual Volumes: Extends VMware's VVOL Benefits to Storage Systems that do not Support it
Many VMware administrators crave the power and fine-grain control promised by vSphere Virtual Volumes (VVOLs). However, most current storage arrays and systems do not support it. Manufacturers simply cannot afford to retrofit equipment with the new VM-aware interface.
DataCore offers these customers the chance to benefit from VVOLs on EMC, IBM, HDS, NetApp and other popular storage systems and all flash arrays simply by layering SANsymphony™-V storage virtualization software in front of them. The same is true for direct-attached storage (DAS) pooled by the DataCore™ Hyper-converged Virtual SAN.
Now, vSphere administrators can self-provision virtual volumes from virtual storage pools -- they specify the capacity and class of service without having to know anything about the hardware.
Other announcements and innovations important to VMware customers and partners will also be featured by DataCore at VMworld Europe. These include:
- Hyper-converged software solutions for enterprise applications and high-end OLTP workloads utilizing DataCore™ Adaptive Parallel I/O software
- New "Proven Design" reference architectures for Lenovo, Dell, Huawei, Fujitsu and Cisco servers spanning high-end, midrange and smaller configurations
- A worldwide partnership with Curvature to provide users a novel procurement and lifecycle model for storage products, data services and centralized management that is cost-disruptive
- vSphere Deployment Wizard to quickly roll out DataCore™ Hyper-converged Virtual SAN software on ESXi clusters
- Stretch cluster capabilities ideal for splitting hyper-converged systems over metro distances
- Breakout Session: DataCore will discuss the topics of Software-Defined Storage and application virtualization in the Solutions Exchange Theatre.
Wednesday, 7 October 2015
1 minute Video Snapshot: DataCore CEO Describes Parallel I/O Software
DataCore Software, a leader in Software-Defined Storage, recently used the backdrop of VMworld 2015 to showcase its hyper-converged 'less is more' architecture. More importantly, VMware customers and partners got the chance to see first-hand DataCore's adaptive parallel I/O harnessing today's multi-core processing systems to eliminate the major bottleneck holding back the IT industry – I/O performance.
Here is DataCore's CEO, George Teixeira, providing a quick 1 minute overview of parallel I/O…
Friday, 2 October 2015
Enterprise-Class Data Protection at an Affordable Price
George Teixeira, CEO & President at DataCore Software, points out how to use Software-defined Storage to protect your data and maintain continuous business operations without breaking the bank.
For further information, view DataCore's Infographic: The Industry's Take on Software-Defined Storage
Monday, 21 September 2015
Virtualization Review: Storage Virtualization and the Question of Balance - Parallel I/O to the Rescue
Dan's Take: It's Time to Consider Parallel I/O
"DataCore has been working for quite some time on parallel storage processing technology that can utilize excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released."
Focusing too much on processors leads to problems.
- By Dan Kusnetzky
The storage virtualization industry is repeating an error it made long ago in the early days of industry standard x86 systems: a focus on processing performance to the exclusion of other factors of balanced system design.
Let's take a stroll down memory lane and then look at the problems storage virtualization is revealing in today's industry standard systems.
Balanced Architectures
Balanced system design is where system resources such as processing, memory, networking and storage utilization are consumed at about the same rate. That is, there are enough resources in each category so that when the target workload is imposed upon the system, one resource doesn't run out while others still have capacity to do more work.
The type of workload, of course, has a great deal to do with how system architectures should be balanced. A technical application might use a great deal of processing and memory, but may not use networking and storage at an equal level. A database application, on the other hand, might use less processing but more memory and storage. A service oriented architecture application might use a great deal of processing and networking power, but less storage and memory than the other types of workloads.
A properly designed system can do more work at less cost than unbalanced systems. In short, systems having an excess of processing capability when compared to other system resources might do quite a bit less work at a higher overall system price than a system that's better balanced.
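A quick back-of-the-envelope calculation shows why balance matters; the numbers below are invented purely for illustration, but the arithmetic is the point: whichever resource saturates first caps the throughput of the whole system, and capacity left over elsewhere is money spent for no additional work.

# Illustrative balance check with made-up numbers: the resource that
# saturates first limits the transactions/s the whole system can deliver.
capacity = {"cpu_ops": 2_000_000, "memory_mb": 256_000,
            "storage_iops": 50_000, "network_mbps": 10_000}

per_transaction = {"cpu_ops": 20, "memory_mb": 2,
                   "storage_iops": 1.0, "network_mbps": 0.05}

limits = {res: capacity[res] / per_transaction[res] for res in capacity}
bottleneck = min(limits, key=limits.get)

for res, tps in sorted(limits.items(), key=lambda kv: kv[1]):
    print(f"{res:13s} supports {tps:>10,.0f} transactions/s")
print(f"Bottleneck: {bottleneck} -- extra capacity elsewhere is wasted spend")

In this made-up example the storage subsystem tops out at 50,000 transactions per second while the CPU could drive 100,000, which is exactly the "under-utilized processing power behind an unbalanced storage architecture" pattern the article goes on to describe.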
Mainframes to x86 Systems
Mainframe and midrange system designers worked very hard to design systems for each type of workload. Some systems offered large amounts of processing and memory capacity. Others offered more networking or storage capacity.
Eventually, Intel and its partners and competitors broke through the door of the enterprise data center with systems based on high-performance microprocessors. The processor benchmark data for these systems was impressive. The rest of the system, however, often was built using the lowest cost, off-the-shelf components.
Enterprise IT decision makers often selected systems based upon a low initial price without considering balanced design or overall cost of operation. We've seen the impact this thinking has had on the market. Systems designed with expensive error correcting memory, parallel networking and storage interconnects often lose out to low cost systems having none of those "mainframe-like" enhancements.
This means that if we walked down a row of systems in a typical datacenter, we'd see systems having under-utilized processing power trying to drive work through configurations having insufficient memory and/or networking and storage bandwidth.
To address performance problems, enterprise IT decision makers often just purchase larger systems, even though the original systems have enough processing power; an unbalanced storage architecture is the problem.
Enter Storage and Networking Virtualization
As industry standard systems become virtualized environments, the industry is seeing system utilization and balance come to the forefront again. Virtualization technology takes advantage of excess processing, memory, storage and networking capability to create artificial environments; environments that offer important benefits.
While virtual processing technology is making more use of industry standard systems' excess capacity to create benefits, other forms of virtualization are stressing systems in unexpected ways.
Storage virtualization technology often uses system processing and memory to create benefits such as deduplication, compression, and highly available, replicated storage environments. Rather than put this storage-focused processing load on the main systems, some suppliers push this work onto their own proprietary storage servers.
While this approach offers benefits, it also means that the data center becomes multiple islands of proprietary storage. It can also mean that scaling up or down is complicated or costly.
Another point is that many industry standard operating systems do their best to serialize I/O; that is, do one storage task at a time. This means that only a small amount of a system's processing capability is devoted to processing storage and networking requests, even if sufficient capacity exists to do more work.
Parallel I/O to the Rescue
If we look back to successful mainframe workloads, it's easy to see that the system architects made it possible to add storage and networking capability as needed. Multiple storage processors could be installed so that storage I/O could expand as needed to support the work. The same was true of network processors; many industry standard system designs have a great deal of processing power, but the software they're hosting doesn't assign excess capacity to storage or network tasks, due to the design of the operating systems.
DataCore has been working for quite some time on parallel storage processing technology that can utilize excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released.
Dan's Take: It's Time to Consider Parallel I/O
In my article "The Limitations of Appliance Servers," I pointed out that we've just about reached the end of deploying a special-purpose appliance for each and every function. The "herd-o'-servers" approach to computing has become too complex and too costly to manage. I would point to the emergence of "hyperconverged" systems in which functions are being brought back into the system as a case in point.
In my article "The Limitations of Appliance Servers," I pointed out that we've just about reached the end of deploying a special-purpose appliance for each and every function. The "herd-o'-servers" approach to computing has become too complex and too costly to manage. I would point to the emergence of "hyperconverged" systems in which functions are being brought back into the system as a case in point.
Virtual systems need virtual storage. Virtual storage needs access to processing, memory and networking capability to be effective. DataCore appears to have the technology to make this all work.
About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.
Thursday, 17 September 2015
Matching Data Protection Capabilities to Data Availability Requirements
In this video, George Teixeira, CEO & President at DataCore Software, discusses how to take advantage of Software-defined Storage to protect data and meet specific data availability requirements.
Building a Highly Available Data Infrastructure
"Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if this data infrastructure is not designed for high availability, then the data it stores is not highly available. By extension, application availability is at risk – regardless of server clustering."
This white paper from the Data Management Institute paper outlines best practices for improving overall business application availability by building a highly available data infrastructure. Read the full white paper at: http://info.datacore.com/building-a-highly-available-data-infrastructure
The Unrealized Dream of Data Availability and What You Can Do About It
Join industry expert Jon Toigo, Chairman of the Data Management Institute, as he discusses the problem of data availability and what can be done to build a highly available data infrastructure for your modern data center.
http://datacore.com/resources/webcasts.aspx?commid=157207
Thursday, 10 September 2015
How to Make Highly Available Data a Reality
George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software discuss the fundamentals for achieving High Availability and DataCore's role in making this a reality for enterprises.
In Jon Toigo’s recent article “High-Availability Features over Disaster Recovery? Not So Fast,” he discusses why, despite their benefits, high-availability features are not a full replacement for a good disaster recovery plan.
“I say that applications, and by extension their data, deserve HA only when they are mission-critical.”
Toigo also identifies some questions that need to be investigated when considering the high-availability features of software-defined and hyper-converged storage proposed for use behind virtual servers.
For further information, read the full article here: http://searchstorage.techtarget.com/opinion/Dump-DR-in-favor-of-high-availability-features-Not-so-fast
Tuesday, 8 September 2015
VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software; Proven Designs, "Less is More" Hyperconverged...
This week virtualization giant VMware (VMW) held its annual VMworld customer conference in San Francisco, and as always there was no shortage of virtualization-centric news from partner companies. Since reading pages and pages of press releases is no fun for anyone, we decided to compile some of the biggest announcements going on at this year's show.
Software-defined storage vendor DataCore Software unveiled its new parallel I/O software at VMworld, which was designed to help users eliminate bottlenecks associated with running multi-core processing systems. The company also announced a new worldwide partnership with Curvature to provide users with a procurement and lifecycle model for their storage products, data services and centralized management.
Read more here.
VMworld Cube Interview: https://www.youtube.com/watch?v=wH6Um_wUxZE
Why Parallel I/O Software and Moore's Law Enable Virtualization and Software-Defined Data Centers to Achieve Their Potential
Read DataCore's new whitepaper: Why Parallel I/O Software and Moore's Law Enable Virtualization and Software-Defined Data Centers to Achieve Their Potential
VirtualizationReview - Hyperconvergence: Hype and Promise
The field is evolving as lower-cost options start to emerge.
…Plus, the latest innovation from DataCore -- something called Parallel I/O that I'll be writing about in greater detail in my next column -- promises to convert that Curvature gear (or any other hardware platform with which DataCore's SDS is used) into the fastest storage infrastructure on the planet -- bar none. Whether this new technology from DataCore is used with new gear, used gear, or to build HCI appliances, it rocks. More later.
SiliconAngle: Back to basics: Why we need hardware-agnostic storage | #VMworld
In a world full of hyper this and flash that, George Teixeira, president and CEO of DataCore Software Corp., explained how going back to the basics will improve enterprise-level storage solutions.
Teixeira and Dustin Fennell, VP and CIO of EPIC Management, LP, sat down with Dave Vellante on theCUBE from the SiliconANGLE Media team at VMworld 2015 to discuss the evolution of architecture and the need to move toward hardware-agnostic storage solutions.
VMworld the Cube: Video Interview on DataCore and Parallel I/O: https://www.youtube.com/watch?t=16&v=wH6Um_wUxZE
IT-Director on VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software
DataCore shows its hyper-converged 'less is more' architecture
DataCore Launches Proven Design Reference Architecture Blueprints for Server Vendors
Lenovo, Dell, Cisco and HP:
Fujitsu: http://datacore.com/international/english/fujitsuappliance & http://www.datacore.com/Company/news-press-center/Press-Releases/2014/11/19/software-defined-storage-and-virtual-san-solutions-powered-by-fujitsu-primergy-servers-achieve-datacore-ready-certification
More Tweets from the show:
Make any storage or Flash #VVOL compatible with our #Software-defined Storage Stack #SSD #virtualization #VMworld http://www.datacore.com
#ParallelIO software will be the next 'killer app' that will allow the industry to build on #virtualization http://datacore.com/sf-docs/default-source/whitepapers/english/why-parallel-io-software-and-moore-s-law-enable-virtualization-and-software-defined-data-centers-to-achieve-their-potential.pdf
#VMworld2015 was amazing! Everyone was so excited about our revolutionary software. http://datacore.com/Company/news-press-center/Press-Releases/2015/08/31/vmworld-2015-datacore-unveils-revolutionary-parallel-i-o-software
Check out our latest pictures from the show and tweets live from VMworld at: https://twitter.com/datacore
VMworld Cube Interview: https://www.youtube.com/watch?v=wH6Um_wUxZE
#VMworld DataCore Parallel IO Software is the 'Killer App' for #virtualization & #Hyperconverged systems...stop by booth 835 pic.twitter.com/IbcTaTmfpv
Great to see the crowds at #VMworld learning more about DataCore's Parallel IO, #VSAN, Hyperconverged & Software-defined Storage pic.twitter.com/chCZZ7H4x3
New Hyper-Converged Reference Architectures, Real World VMware User Case Studies and Virtual Server Performance Breakthroughs Also on Display
SAN FRANCISCO, CA – August 31, 2015 – DataCore Software, a leader in Software-Defined Storage, will use the backdrop of VMworld 2015 to show its hyper-converged 'less is more' architecture. Most significantly, VMware customers and partners will see first-hand DataCore's adaptive parallel I/O harnessing today's multi-core processing systems to eliminate the major bottleneck holding back the IT industry – I/O performance.
"It really is a perfect storm," said DataCore Chairman Ziya Aral. "The combination of ever-denser multi-core processors with efficient CPU/memory designs and DataCore's parallel I/O software creates a new class of storage servers and hyper-converged systems that change the math of storage performance in our industry...and not by just a little bit. As we publish an ever-wider array of benchmarks and real-world performance results, the real impact of this storm will become clear."
At booth #835, DataCore's staff of technical consultants will discuss the state-of-the-art techniques used to achieve the much greater VM densities needed to respond to the demanding I/O needs of enterprise-class, tier-1 applications. DataCore will highlight performance optimizations for intense data processing and I/O workloads found in mainstream online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses. These breakthroughs have proven most valuable in the mission-critical line-of-business applications based on Microsoft SQL Server, SAP and Oracle databases that are at the heart of every major enterprise.
Other announcements and innovations important to VMware customers and partners will also be featured by DataCore at VMworld. These include:
- Hyper-converged software solutions for enterprise applications and high-end OLTP workloads utilizing DataCore™ Adaptive Parallel I/O software
- New 'Proven Design' reference architectures for Lenovo, Dell, Huawei, Fujitsu and Cisco servers spanning high-end, midrange and smaller configurations
- A new worldwide partnership with Curvature to provide users a novel procurement and lifecycle model for storage products, data services and centralized management that is cost-disruptive
- Preview of DataCore's upcoming VVOL capabilities
- Stretch cluster capabilities ideal for splitting hyper-converged systems over metro distances
Breakout Sessions
- DataCore and VMware customer case study featuring Mission Community Hospital: "Virtualizing an Application When the Vendor Says 'No'" in the Software-Defined Data Center track -- Monday, August 31, 2015 at 12:00 p.m.
- "Lenovo Servers in Hyper-Converged and SAN Storage Roles": Learn how Lenovo servers are being used in place of traditional storage arrays to handle enterprise-class storage requirements in hyper-converged clusters as well as external SANs. Uncover the agility and cost savings you can realize simply by adding DataCore Software to Lenovo systems. Two theater presentations -- Tuesday, September 1, and Wednesday, September 2 at 3:30 p.m. in the Lenovo Booth #1537
- "Less is More with Hyper-Converged: When is 2>3" and "Efficiently Scaling Hyper-Converged: How to Avoid Buyers' Remorse" -- daily in the DataCore Booth #835
About VMworld
VMworld 2015 U.S. takes place at San Francisco's Moscone Center from August 30 through September 3, 2015. It is the industry's largest virtualization and cloud computing event, with more than 400 unique breakout sessions and labs, and more than 240 sponsors and exhibitors. To learn more about VMworld, please visit: www.vmworld.com