Tuesday 12 January 2016

Tick-Tock: Multicore processors mark next era of storage



Multicore processor technology represents not only the next era in storage, but also proof that everything old is new again.

From time to time, in presentations by tech vendors, one hears reference to a "tick-tock." Tick-tock is jargon describing a perceived pattern in the events that occur over a designated time frame. By recognizing such a pattern, the tick-tock narrative imposes an orderly perspective on the seemingly great disorder of technological advancement while also providing a framework for predicting the future. Both make the future feel less scary.

As we'll discuss in this article, multicore processors could very well be the next tick-tock for storage…

…Multicore the new tick-tock
Multicore processors have been the basis of the new tick-tock for some time now. Year after year, we are presented with CPUs offering double the number of processor cores on the same die, even though chip speeds have not increased significantly or at all…

…Unleash the power of multicore, multithreading chips
To really unlock the potential power of multicore processors and multithreading chips, we need to get back to multiprocessing and parallel computing design.

DataCore Software is the first to revisit these concepts, which co-founder and Chairman of the Board Ziya Aral helped to pioneer in the 1980s. The company has found a way to take a user-designated portion of the logical cores available on a server and to allocate them specifically for storage I/O handling.

The technique they are using is becoming increasingly granular and will eventually enable very specific processor resources to be allocated to the I/O processing of discrete workloads. Best of all, once set, it is adaptive and self-tunes the number of processor cores being used to handle I/O workloads.
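As a rough illustration of that general pattern (reserving a share of a server's logical cores as a dedicated pool of I/O workers), here is a minimal Go sketch. It is a conceptual stand-in, not DataCore's implementation; the names (ioShare, IORequest, handleIO), the 25% share and the simulated request handling are assumptions made purely for the example.

```go
// Minimal sketch (not DataCore's implementation): reserve a user-chosen
// share of the logical cores as a dedicated pool of I/O workers.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// IORequest stands in for a block read or write handed to the storage layer.
type IORequest struct{ ID int }

// handleIO fakes the cost of servicing one request.
func handleIO(r IORequest) {
	time.Sleep(time.Millisecond)
}

func main() {
	ioShare := 0.25 // user-designated fraction of logical cores for I/O (assumed value)
	workers := int(float64(runtime.NumCPU()) * ioShare)
	if workers < 1 {
		workers = 1
	}

	requests := make(chan IORequest, 1024)
	var wg sync.WaitGroup

	// Each worker loops on the shared queue, so I/O is serviced by a
	// bounded pool of cores instead of a single thread.
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range requests {
				handleIO(r)
			}
		}()
	}

	for i := 0; i < 100; i++ {
		requests <- IORequest{ID: i}
	}
	close(requests)
	wg.Wait()
	fmt.Printf("served 100 I/Os using %d of %d logical cores\n", workers, runtime.NumCPU())
}
```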

DataCore's Storage Performance Council SPC-1 benchmark numbers are telling: they have blown the socks off the hardware guys in terms of storage performance while driving the cost per I/O well below the current low-cost leader -- using any and all off-the-shelf interconnects and storage devices.

We are about to enter a whole new era with a completely new tick-tock for storage -- and perhaps for the full server-network-storage stack -- based on multiprocessor architecture and engineering applied to multicore processor-driven systems.

Everything old is new again.


Monday 11 January 2016

2016: The Parallel I/O Revolution Is Underway, Servers Are the New Storage

By George Teixeira, CEO and President, DataCore Software

Key Points Shaping DataCore’s Views in 2016

Parallel I/O software and multicore technology will transform IT productivity in 2016

The melding of the server and storage worlds, along with advances in parallel I/O software, will revolutionize business productivity and transform our industry. As with server virtualization, the impact will be dramatic. Here are some of the key points shaping DataCore's views in 2016:

1. Servers are the new storage
A major transformation is underway as traditional storage systems are replaced by commodity servers and software-defined solutions that harness their power to solve the growing storage problem. Simply put, storage and data services will inevitably become yet another 'application workload' running on cost-efficient server platforms. This new wave of server-based storage systems is already having an impact. They are marketed as server SANs, virtual SANs, web-scale, scale-out and hyper-converged systems. However, when you look underneath the fancy marketing, they are pretty much a collection of standard off-the-shelf servers, flash cards and disk drives; the software defines their value differentiation.


Why has this change happened? Traditional storage vendors with specialized systems can no longer keep up with Moore's Law and the pace of cost savings and innovation that generic server platforms deliver. Dell's acquisition of EMC is indicative of the change and of the need to merge the server and storage worlds to remain competitive. Parallel I/O software and the ability to harness multicore server technology will be a major game-changer in 2016. In combination with software-defined storage, it will lead to a productivity revolution and establish 'servers as the new storage.'

2. Parallel I/O software and multicore technology will revolutionize the IT world in 2016

The modern microprocessor universe started in the 1970s, and it, along with Moore's Law, drove two major paths of technology advances. The first resulted in faster, more efficient uniprocessors, which led directly to the PC revolution and to today's pervasive use of microprocessors in everything from smartphones to intelligent devices. The second path was parallel computing, which set out to harness the power of many microprocessors. While parallel computing started with a flurry, the pace of advances was ultimately stifled by a lack of commodity parallel computing hardware, by the overshadowing and rapid gains in uniprocessor clock speeds that resulted from Moore's Law and, most importantly, by the lack of software able to do parallel work. As a result, parallel computing for the most part remained an exotic discipline that required too much specialization for general business use.


While faster clock speeds drove the PC revolution, what went unnoticed was that the silicon vendors began to put many cores on the same die (more transistors became more cores), and the result is that multicore chips are everywhere. In effect, parallel processing power is now readily available, but there is still a lack of software to fully use it. Bottom line: the promised parallel computing revolution as a generic capability was put on hold, awaiting software to catch up. We are now at that critical turning point with software.
The parallel processing revolution is happening right now. DataCore recently set a new world record for price-performance, and did it on a hyper-converged platform (on the Storage Performance Council's peer-reviewed SPC-1 benchmark). DataCore also reported the best performance per footprint and the fastest response times ever. Bottom line: today's multicore servers and software can 'do far more with less' and dramatically change the economics and productivity one can achieve.

Parallel I/O software will overcome the I/O bottleneck holding back our industry. It harnesses the power of multicores to dramatically increase productivity - and as a result it will revolutionize the industry.

3. Dramatic performance and productivity gains will transform hyper-converged and software-defined storage; get ready for a giant leap forward in 2016 

Finally, the hype around hyper-converged systems has continued to grow. From the marketing, one would believe they are the panacea for all problems. However, consumers and enterprises are realizing that these systems create new silos to manage, and that the current offerings have multiple limitations, particularly in the scale and performance needed to handle enterprise-class workloads effectively. As 2016 progresses, many customers will find themselves looking for solutions that bring the ease-of-use benefits but can also be easily integrated into company infrastructures, alongside both existing investments and future technologies. Users are looking forward to the next stage of hyper-converged deployments, where they don't have to sacrifice performance and interoperability with the rest of their investments.

Only a software-defined storage layer combined with parallel I/O software can effectively manage the power of multicore servers, migrate and manage data across the entire storage infrastructure, incorporate flash and hyper-converged systems without adding extra silos, and effectively utilize data stored anywhere in the enterprise or in the cloud. By tapping the power within standard multicore servers, data infrastructures will realize tremendous consolidation and productivity benefits from parallel I/O technologies.

The impact is dramatic: it translates into much greater cost savings and productivity by allowing a new level of consolidation far beyond server virtualization alone and enabling systems to truly 'do more with less.' Application performance, enterprise workloads and greater consolidation densities on virtual platforms won't have to be held back by the growing gap between compute and I/O.

This combination of powerful software and servers will drive greater functionality, more automation, and comprehensive services to productively manage and store data across the entire data infrastructure. It will lead to a new era where "servers are the new storage" and the benefits of multicore parallel processing can be applied universally. These advances, which are already before us, are key to solving the problems caused by slow I/O and inadequate response times, problems that have held back application workload performance and the cost savings from consolidation. Collectively, these advances (multicore processing, parallel I/O and software-defined storage) are fundamental to achieving the next giant leap forward in business productivity.

Better Use of Multi-Core Chips and More: Virtualization Review's Jon Toigo Crystal Ball Predictions for 2016



Virtualization Review’s The Infrastruggle Blog: 4 Possibly Correct Predictions for 2016

Excerpt:
…Better Use of Multi-Core Chips 
With the release of DataCore Software Parallel I/O technology, I expect to see a flood of parallel I/O woo enter the market. Parallel I/O involves the use of spare logical CPU cores ganged together into a very fast I/O processing engine to deliver phenomenal throughput improvements without much cost (you already own the multi-core processor). DataCore has paved the way to an extremely low-cost, high-performance storage tier by combining its P-I/O algorithm with its storage virtualization capabilities that include adaptive caching and interconnect load balancing. I suspect that many vendors will seek to pursue a comparable strategy, though most lack the experience in multiprocessor architecture that DataCore still has on staff.

Read Toigo’s full post and predictions at 4 Possibly Correct Predictions for 2016

1. The Zettabyte Apocalypse Will Not Come in 2016
2. Better Use of Multi-Core Chips
3. Tape Will Continue Its Comeback
4. Mainframes Are Cool Again



Wednesday 6 January 2016

Star Wars, the Force and the Power of Parallel Multicore Processing: Getting More Out of Virtualized Workloads

By George Teixeira, President & CEO, DataCore Software

During the ’80s, the original Star Wars movies featured amazing future technology and were all about “the power of the Force.” The latest movie has now broken all box office records, and it got me thinking about how much IT and computing technology has progressed over the years, and yet how much still remains untapped.

Yes, several of the envisioned gains have come true, many of them driven by Moore’s Law and the growing force of the microprocessor revolution. For example, server virtualization software such as VMware radically redefined consolidation savings and productivity, CPU clock speeds got faster, and microprocessors became commodities used everywhere, powering PCs, laptops, smartphones and intelligent devices of all types. But the full force and promise of using many microprocessors in parallel, what we now call ‘multicore,’ still remains largely untapped, and I/O continues to be the major bottleneck holding the IT industry back from the next revolution in consolidation, performance and productivity.

Virtual computing is still bottlenecked by I/O. Just as city drivers can only dream about flying vehicles as gridlock haunts their morning commute, IT is left wondering if they will ever see the day when application workloads will reach light speed.

How can it be that with multi-core processing, virtualized apps, abundant RAM and large amounts of flash, you still have to deal with I/O-starved virtual machines (VMs) while many processor cores remain idle? Yes, you can run several independent workloads at once on the same server using separate CPU and memory resources, but that’s where everything begins to break down. The many workloads in operation generate concurrent I/O requests, yet only one core is charged with I/O processing. This architectural limitation strangles the life out of application performance. Instead of one server doing vast quantities of work, IT is forced to add more servers and racks to deal with I/O bottlenecks; this sprawl goes against the ‘consolidation and productivity savings’ that are the basic premise and driver of virtualization.

All it takes, then, is a few VMs running simultaneously on multi-core processors, churning out almost inconceivable volumes of work, and you quickly overwhelm the one processor tasked with serial I/O. Instead of a flood of accomplished computing, a trickle of I/O emerges. IT is left feeling like the kids who grew up watching Star Wars, asking: where are our flying starships and when can we travel at light speed?!
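To see in miniature why that single serial I/O path hurts, the Go sketch below (a toy model, not DataCore code) pushes the same batch of simulated I/O requests first through a single consumer and then through one consumer per logical core; the request count and the one-millisecond per-request cost are invented for the example.

```go
// Toy model (not DataCore code): the same batch of simulated I/O requests
// is drained first by a single consumer, then by one consumer per core.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// feed returns a closed, pre-filled queue of n simulated I/O requests.
func feed(n int) <-chan int {
	ch := make(chan int, n)
	for i := 0; i < n; i++ {
		ch <- i
	}
	close(ch)
	return ch
}

// drain services every request in the queue with the given number of consumers.
func drain(requests <-chan int, consumers int) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < consumers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range requests {
				time.Sleep(time.Millisecond) // simulated per-I/O cost
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	const ios = 512
	serial := drain(feed(ios), 1)                  // one core charged with all I/O
	parallel := drain(feed(ios), runtime.NumCPU()) // a consumer per logical core
	fmt.Printf("serial: %v  parallel: %v\n", serial, parallel)
}
```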

The good news is that all is not lost. DataCore has a number of bright minds hard at work bringing a revolutionary breakthrough for I/O to prime time: DataCore Parallel I/O technology lets virtualized traffic flow through without slowdown. Its unique software-defined parallel I/O architecture capitalizes on today’s powerful multi-core, parallel processing infrastructure. By enlisting software to drive I/O processing across many different cores simultaneously, it eradicates I/O bottlenecks and drives a higher level of consolidation savings and productivity. The better news is that this technology is already on the market today.

Just as Star Wars has shattered box office records, DataCore recently set a new world record for price-performance, and on a hyperconverged system (on the Storage Performance Council’s peer-reviewed SPC-1 benchmark). DataCore also reported the best performance per footprint and the fastest response times ever. While the numbers do not actually reach light speed, DataCore has lapped the field not once but multiple times. See for yourself the latest benchmark results in this article that appeared in Forbes: The Rebirth of Parallel I/O.

How? DataCore’s software actively senses the I/O load being generated by concurrent VMs. It adapts and responds dynamically by assigning the appropriate number of cores to process the input and output traffic. As a result, VMs no longer sit idle waiting for a serial I/O thread to become available. Should the I/O load lighten, however, CPU cores are freed to do more computational work.
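A rough sketch of that adaptive behavior might look like the following; it is written in Go with invented thresholds purely to illustrate the idea, not to describe anything DataCore ships. A supervisor samples the depth of a shared I/O queue, enlists another worker when a backlog builds, and releases workers back to compute duty when the queue empties.

```go
// Illustrative sketch with invented thresholds (not DataCore's algorithm):
// a supervisor watches queue depth and grows or shrinks the I/O worker pool.
package main

import (
	"fmt"
	"runtime"
	"time"
)

// worker services simulated I/O requests until told to stop.
func worker(queue <-chan int, stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		case <-queue:
			time.Sleep(time.Millisecond) // simulated I/O service time
		}
	}
}

func main() {
	queue := make(chan int, 4096)
	maxWorkers := runtime.NumCPU()
	var stops []chan struct{}

	// Supervisor: sample the backlog, enlist or release cores accordingly.
	go func() {
		for range time.Tick(10 * time.Millisecond) {
			depth := len(queue)
			if depth > 64 && len(stops) < maxWorkers { // backlog: add an I/O worker
				stop := make(chan struct{})
				stops = append(stops, stop)
				go worker(queue, stop)
			} else if depth == 0 && len(stops) > 1 { // idle: free a core for compute
				close(stops[len(stops)-1])
				stops = stops[:len(stops)-1]
			}
		}
	}()

	for i := 0; i < 2000; i++ { // a bursty load that triggers scale-up
		queue <- i
	}
	for len(queue) > 0 { // wait until the pool has drained the burst
		time.Sleep(50 * time.Millisecond)
	}
	fmt.Println("burst drained")
}
```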

This not only solves the immediate performance problem facing multi-core virtualized environments, but also significantly increases the VM density possible per physical server. It allows IT to do ‘far more with less.’ That means fewer servers or racks, and less space, power and cooling needed to get the work done. In effect, it achieves remarkable cost reductions through maximum utilization of CPU cores, memory and storage, while fulfilling the productivity promise of virtualization.

You can read more about this in DataCore’s white paper, “Waiting on I/O: The Straw that Broke Virtualization’s Back.”