Monday, 25 April 2016

DataCore Reports the Fastest Response Time and Best Price-Performance Among Top 10 SPC-1 Leaders

Parallel I/O Technology Drives More than 1.5M SPC-1 IOPS™ at a 100 Microsecond Response Time While Simultaneously Running Enterprise-class Database Workloads; Delivers SPC-1 Price-Performance™ of 9 Cents per SPC-1 IOPS™

DataCore announced that its second SPC-1 result has catapulted the company into third place on the SPC's Top 10 list of absolute performance leaders, while achieving the best price-performance and fastest response times among those Top 10. DataCore again leapfrogged the field and now holds the top two positions in the SPC-1 Price-Performance™ category [1]. The DataCore™ Parallel Server software at the heart of the hyper-converged configuration delivered 1,510,090.52 SPC-1 IOPS™ [2]. Notably, the number one and two systems [3] in the category are very large-footprint, multimillion-dollar systems that are 14 times more costly than the compact 4U-sized DataCore-based solution.
"There is no magic in what we are doing," states Ziya Aral, Chairman of DataCore Software. "Yes, we use a standard 2U server but it is a server with 36 cores and 72 logical CPUs. At 2.5 GHz clock speed that multiplies out to the equivalent of 180 GHz, provided only that we use those CPUs concurrently. Even if the CPUs don't scale perfectly, we have an 'embarrassment of riches' in compute power. If they scaled at only 60% - and they do much better than that - we effectively have access to over 100 GHz of CPU power. Frankly, we would have been disappointed if we hadn't been able to put up these kinds of I/O numbers with a 100 GHz CPU."
DataCore's initial results showcasing the power of parallel I/O were published in late 2015. The new results, which tripled the previous performance achievement, were attained on the same server platform hardware to demonstrate the potential and pace of advancement possible from the company's new software and parallel I/O architecture. And there is more to come.
Record-Breaking Performance
To illustrate the system's I/O power in demanding database environments, DataCore chose the Storage Performance Council's SPC-1 benchmark – the Gold Standard used by all major storage manufacturers to measure top end I/O performance, price-performance and response time. For the benchmark, DataCore used an off-the-shelf Intel-based Lenovo System x3650 M5 server.
The 1,510,090.52 SPC-1 IOPS™ were attained with the total cost for hardware, software and three years of support totaling $136,758.88. This yielded an SPC-1 Price-Performance™ result of $0.09 per SPC-1 IOPS™, less than one-eighth the price-performance figure of every other top-performing high-end system that has surpassed one million SPC-1 IOPS™ [4].
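The price-performance figure follows directly from the two published numbers; a quick check:

```python
# SPC-1 Price-Performance = total 3-year cost divided by SPC-1 IOPS.
total_cost_usd = 136_758.88
spc1_iops = 1_510_090.52

price_per_iops = total_cost_usd / spc1_iops
print(f"${price_per_iops:.4f} per SPC-1 IOPS")  # about $0.0906, reported as $0.09
```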
The DataCore Parallel Server configuration placed third overall in SPC-1 IOPS™ behind two systems costing over $2 million. Only the Huawei OceanStor 18800V3 at a total price of $2,370,760 and the Hitachi VSP G1000 system at $2,003,803 had higher SPC-1 IOPS™ numbers than the $136,759 solution from DataCore. Unlike those two storage systems which only provide external SAN functions, the DataCore Parallel Server also ran the computational enterprise-class database and OLTP workloads inside the same compact package.
Most remarkably, the DataCore configuration delivered the fastest SPC-1 response time ever recorded (100 microseconds at 100% load), besting all systems, including multimillion-dollar systems and all-flash arrays, by seven times or more. From a real estate standpoint, the entire system takes up only 4U of standard 19" rack space (seven vertical inches: a 2U server plus 2U of disks). In stark contrast, other systems reaching the million SPC-1 IOPS™ mark occupy multiple 42U cabinets, consuming considerably more data center space, power and cooling.
DataCore now holds the two top positions in the SPC-1 Price-Performance™ category [5] (the previous DataCore™ SANsymphony™ system running on a hyper-converged configuration using a similar Lenovo System x server attained an SPC-1 Price-Performance™ record of $0.08/SPC-1 IOPS™ [6]). "Essentially the only major difference between our first and second SPC-1 results was our software," notes Ziya Aral, who continued by answering the obvious question: how is that possible? "The truth is that the hardware platform matters, multiprocessing matters, and I/O craft matters, but what matters most of all is software architecture. DataCore was designed from the outset for parallel architectures...but the definition of 'parallel' at the time was 4, 8, maybe 12 CPUs. Today, we are running in standard platforms with 72, 144 or even 288 logical CPU cores, and that will double with the next few ticks of the clock - because Moore's law now advances in multiples."
Aral explains further, "Parallel Server is designed to take advantage of that evolution in computer architectures - not just for the present but into the future. This software inverts our previous understanding: what was once a precious commodity now exists in surplus and the software must take advantage of it."
Tested Product: DataCore™ Parallel Server for Hyper-Converged and Server Systems
DataCore certified its results using DataCore Parallel Server software on a compact 2U Lenovo System x3650 M5 multi-core server featuring Intel® Xeon® E5-2600 v3 series processors with a mix of flash SSD and disk storage.
DataCore Parallel Server is a software product that transforms standard servers into parallel servers targeted for applications where extremely high IOPS and low latency are the primary requirements. DataCore's parallel I/O technology executes many independent I/O streams simultaneously across multiple CPU cores, significantly reducing the latency to service and process I/Os. This technology removes the serialized I/O limitations and bottlenecks that restrict the number of virtual machines (VMs), virtual desktops (VDI) and application workloads that can be consolidated on a server or a hyper-converged platform – and instead enables them to process far more work per server and significantly accelerate I/O-intensive applications.
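The core idea of issuing many independent I/O streams concurrently, instead of draining them one at a time through a single serialized path, can be sketched with standard Python concurrency. This is purely illustrative and is not DataCore's implementation; the stream count, block sizes and file names are invented:

```python
import concurrent.futures
import os
import tempfile

def write_stream(path: str, blocks: int, block_size: int = 4096) -> int:
    """One independent I/O stream: write `blocks` blocks to its own file."""
    payload = b"\x00" * block_size
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(payload)
    return blocks * block_size

# Fan the streams out across a pool of workers, one worker per stream,
# rather than servicing them sequentially on a single thread.
with tempfile.TemporaryDirectory() as d:
    paths = [os.path.join(d, f"stream{i}.bin") for i in range(8)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        written = list(pool.map(lambda p: write_stream(p, blocks=64), paths))

print(f"{len(written)} streams wrote {sum(written)} bytes concurrently")
```

Each stream here is independent, so none of them waits behind another; that independence is what lets multiple cores service I/O at the same time.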
DataCore Parallel Server software is now available to DataCore OEM partners and is currently being evaluated by server and system vendors. General availability is planned for Q2 2016.
Hyper-Consolidation and Next Generation Productivity with DataCore Parallel I/O Technology
The practical significance and business advantages of DataCore Parallel Server's record-breaking results can be appreciated from several perspectives:
  • Servers are the new storage: I/O-intensive workloads which had previously required enormous investments in exotic SAN hardware or enterprise-class external arrays can now be addressed with relatively inexpensive, compact, off-the-shelf hardware equipped with DataCore Parallel Server software.
  • One machine is simpler than many: Organizations no longer need to split I/O-intensive problems across hundreds of servers to reduce their dependency on exotic equipment. They can run these programs unaltered inside a few low-cost servers without undue complexity, delay and expense.
  • Hyper-consolidation versus server sprawl: Several years into virtualization initiatives, serial I/O processing inside servers remains singularly responsible for poor virtual machine densities. By putting multiple CPU cores to work on I/O, DataCore helps customers do the work of 10 servers on one or two.
DataCore's Parallel Server software enables industry-standard x86 servers to fully harness their untapped parallel computation power and gain the essential I/O functionality needed to drive today's demanding tier-1 business application requirements. In this way, companies benefit from dramatically higher productivity and huge server consolidation savings. To learn more visit:
About the Storage Performance Council
The Storage Performance Council (SPC) is a vendor-neutral standards body focused on the storage industry. The SPC created the first industry-standard performance benchmark targeted at the needs and concerns of the storage industry. From component level evaluation to the measurement of complete distributed storage systems, the SPC benchmark portfolio provides independently audited, rigorous and reliable measures of performance, price-performance and power consumption. For more information about the SPC and its benchmarks, please visit:

Wednesday, 13 April 2016

Research on nearly 2,000 DataCore Customers and Software-Defined Storage Confirms Performance, High Availability and Lower Cost of Ownership are the Primary Business Drivers

Data from Nearly 2,000 DataCore Customers Uncovers Impact of Software-Defined Flexibility on Acquisitions, Refreshes and Migrations; Productivity Gains from Increasing Performance Up to 10x and Reducing Total Cost of Ownership
DataCore has announced the results of a new research study conducted by TechValidate. The study primarily focused on the experience of DataCore customers in terms of performance, availability/reliability and total cost of ownership (TCO). Overall, participants reported faster applications with up to 10x performance increases; higher availability with a 90% or greater reduction in storage-related downtime; substantial reductions in cost; and greater productivity, with the majority of respondents reporting a 50-90% decrease in time spent on routine tasks.
Highlights from the findings include:
  • 47% of customers reported a 50% or more reduction in storage-related spending; over 80% of customers reported at least 25% savings.
  • The majority of customers reported that they were able to defer or skip multiple refresh cycles, and over 60% saved by deferring storage hardware acquisitions by using DataCore to extend the life and enhance the productivity of current investments.
  • 79% of customers reported improvements of at least 3x, and nearly half of the DataCore customers surveyed reported performance improvements between 5x-10x.
  • 60% reduced storage-related downtime with DataCore by 90% or more; the majority of customers who had systems deployed for two years or more reported no storage-related downtime whatsoever.
  • 72% of respondents reported a 50% or more decrease in time spent on managing routine storage tasks, with some noting a reduction as high as 90%.
  • All respondents reported a positive ROI with DataCore in the first year; 50% reported a positive ROI in six months or less.
These findings complement data recently published by the Storage Performance Council (SPC), further supporting DataCore's claim to industry-leading performance and TCO. In a series of recently released SPC-1 benchmark results, DataCore's current SANsymphony and Hyper-converged Virtual SAN software achieved the industry's best price-performance, coming in at just $0.08 per SPC-1 IOPS™. The results also recorded remarkably fast response times of just 0.32 milliseconds [1], achieved while running the full load of the demanding enterprise-class application and database benchmark. At 0.32 milliseconds, the results are 3x-10x better than all other reported results, including those from all-flash arrays and million-dollar-plus systems.

To highlight the full impact of parallel I/O on performance, DataCore recently announced the results of new software that will enable servers to utilize multicores to multiply performance. The software, available in Q2, has demonstrated an incredible result of more than 1.5 million SPC-1 IOPS™ with a new world record response time of just 0.10 milliseconds at 100 percent load [2].
“This is a game changer that is ahead of the trend and a key element in helping organizations truly achieve a software-defined data center,” DataCore customer Irvin Nio, IT Architect at Capgemini, noted during the survey.
The new level of enterprise-class high availability and reliability proven by the TechValidate research can be attributed to DataCore’s features, including hardware interoperability, hardware-independent storage services, data migration capabilities, and more. DataCore’s latest addition to its technology portfolio, parallel I/O, uniquely takes advantage of today’s advanced multi-core server platforms to execute many independent I/O streams simultaneously across multiple CPU cores – supporting the I/O needed to run more VMs and application workloads faster and at a much lower cost. It significantly reduces the latency to service and process I/Os while enabling companies to benefit from dramatically higher productivity and huge server consolidation savings.
A total of 1,984 responses were recorded from DataCore customers globally. TechValidate research data is sourced directly from verified business and technology professionals. The full findings of the study can be viewed at:

Tuesday, 12 April 2016

Software-defined data centers and the need for parallel I/O hyper-consolidation and performance

"Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes. It avoids the bottleneck of waiting on a single toll booth and the wait time. It opens up the other cores (all the “lanes” in this analogy) for I/O distribution so that data can continue to flow back and forth between the application and the storage media at top speed."
By George Teixeira, President and CEO, DataCore Software
In the software-defined data center (SDDC), all elements of the infrastructure, such as networking, compute, servers and storage, are virtualized and delivered as a service. Virtualization at the server and storage levels is a critical component of the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.
Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. The problem is that in a world that requires near instant response times and increasingly faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle and SAP databases have been largely unmet. For most data centers the number one cause of these delays is the data storage infrastructure.
Why? The major bottleneck has been I/O performance. Despite the fact that most commodity servers already provide a wealth of powerful multiprocessor capabilities cost-effectively, most of those processors sit parked in idle mode, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack the software intelligence to fully harness these more powerful server systems with multicore architectures.
While the virtual server revolution became the “killer app” that improved CPU utilization and, to some degree, exploited multicore capabilities, the downside is that virtualization and the move to greater server consolidation created a workload blender effect in which more and more application I/O workloads were concentrated and had to be scheduled on the same system. All of those VMs and their applications easily become bottlenecked going through a serialized “I/O straw.” As processors and memory have dramatically increased in speed, this I/O straw continues to throttle performance — especially when it comes to the critical business applications driving databases and on-line transaction workloads.
Many have tried to address the performance problem at the device level by adding solid-state storage (flash) to meet the increasing demands of enterprise applications, or by hard-wiring these fast devices to virtual machines (VMs) in hyper-converged systems. However, improving the performance of the storage media—which replacing spinning disks with flash attempts to do—only addresses one aspect of the I/O stack. Hard-wiring flash to VMs also contradicts the concept of virtualization, in which technology is elevated to a software-defined level above the hard-wired, physically aware level, and it adds complexity and vendor-specific lock-in between the hypervisor and device levels.
Multi-core processors are up to the challenge. The primary element that is missing is software that can take advantage of the multicore/parallel processing infrastructure. Parallel I/O technology enables the I/O processing to be done separately from computation and in parallel to improve I/O performance by building on virtualization’s ability to decouple software advances from hardware innovations. This method uses software to drive parallel I/O across all of those CPU cores.
Parallel I/O technology can schedule I/O from virtualization and application workloads effectively across readily available multicore server platforms. It can overcome the I/O bottleneck by harnessing the power of multicores to dramatically increase productivity, consolidate more workloads and reduce inefficient server sprawl. This will allow much greater cost savings and productivity by taking consolidation to the next level and allowing systems to do far more with less.
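The multi-lane analogy can be mimicked with a toy simulation. This is only an illustration of the scheduling idea, not DataCore's software; the worker count and the 10 ms simulated device latency are invented for the example:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def service_io(request_id: int, latency_s: float = 0.01) -> int:
    """Stand-in for servicing one I/O request (latency is simulated)."""
    time.sleep(latency_s)  # the device round-trip the CPU would otherwise wait on
    return request_id

requests = range(32)

# Single "toll booth": every request waits behind the one before it.
t0 = time.perf_counter()
serial = [service_io(r) for r in requests]
serial_s = time.perf_counter() - t0

# Multi-lane: requests are serviced concurrently across worker "lanes".
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(service_io, requests))
parallel_s = time.perf_counter() - t0

print(f"serial:   {serial_s:.2f}s")   # roughly 0.32 s for 32 x 10 ms
print(f"parallel: {parallel_s:.2f}s") # roughly 0.04 s with 8 lanes
```

The work done is identical in both runs; only the scheduling changes, which is exactly the point of spreading I/O across cores instead of serializing it.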
Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes. It avoids the bottleneck of waiting on a single toll booth and the wait time. It opens up the other cores (all the “lanes” in this analogy) for I/O distribution so that data can continue to flow back and forth between the application and the storage media at top speed.
The effect is that more data flows through the same hardware infrastructure in a given amount of time than legacy storage systems can deliver. The traditional three-tier infrastructure of servers, network and storage benefits from storage systems that respond to and service I/O requests faster, and can therefore support significantly more applications and workloads on the same platforms. The efficiency of a low-latency parallel architecture is potentially even more critical in hyper-converged architectures, which are a “shared-everything” infrastructure: if the storage software is more efficient in its use of computing resources, it returns more processing power to the other processes running alongside it.
By taking full advantage of the processing power offered by multicore servers, parallel I/O technology acts as a key enabler for a true software-defined data center. This is due to the fact that it avoids any special hardwiring that impedes achieving the benefits of virtualization while it unlocks the underlying hardware power to achieve a dramatic acceleration in I/O and storage performance – solving the I/O bottleneck problem and making the realization of software-defined data centers possible.

Sunday, 10 April 2016

ESG’s Senior Lab Analyst shares his hands-on experiences with DataCore Hyper-converged Virtual SAN software and SANsymphony Software-defined Storage platform

ESG’s Senior Lab Analyst, Tony Palmer, shares his hands-on experiences with DataCore™ Hyper-converged Virtual SAN software and SANsymphony™ Software-defined Storage platform.  See how the products fared under a comprehensive battery of tests, exercising many of their enterprise-class features. Learn why these capabilities matter, especially to IT organizations tasked with non-stop operations, latency-sensitive workloads and cost-reduction mandates.

Get a glimpse of self-provisioning storage with the desired SLAs during virtual machine creation, thanks to DataCore’s deep integration with VMware VVols. Observe the performance acceleration and savings that cross-array auto-tiering brings. And witness active-active high availability in action for failover clusters.

Tony also puts into perspective the significance of DataCore Parallel I/O technology – key to the company’s record-shattering results for I/O response and price-performance under heavy transactional database processing.

Download the complete Lab Validation Report.