Wednesday 28 December 2016

The 2016 Virtualization Review Editor's Choice Awards

Our picks for the best of the best.
As we close the book on 2016 and start writing a new one for 2017, it's a good time to reflect on the products we've liked best over the past year. In these pages, you'll find old friends, stalwart standbys and newcomers you may not have even thought about.
Our contributors are experts in the fields of virtualization and cloud computing. They work with and study this stuff on a daily basis, so a product has to be top-notch to make their lists. But note that this isn't a "best of" type of list; it's merely an account of the technologies they rely on to get their jobs done, or maybe products they think are especially cool or noteworthy.
Jon Toigo on Adaptive Parallel I/O Technology
In January, DataCore Software provided proof that the central marketing rationale for transitioning shared storage (SAN, NAS and so on) to direct-attached/software-defined kits was inherently bogus. DataCore Adaptive Parallel I/O technology was put to the test on multiple occasions in 2016 by the Storage Performance Council, always with the same result: parallelization of raw I/O significantly improved the performance of VMs and databases without changing storage topology or storage interconnects. This flew in the face of much of the woo around converged and hyper-converged storage, whose pitchmen attributed slow VM performance to storage I/O latency -- especially in shared platforms connected to servers via Fibre Channel links.
While it is true that I like DataCore simply for being an upstart that proved all of the big players in the storage and the virtualization industries to be wrong about slow VM performance being the fault of storage I/O latency, the company has done something even more important. Its work has opened the door to a broader consideration of what functionality should be included in a properly defined SDS stack.
In DataCore's view, SDS should be more than an instantiation on a server of a stack of software services that used to be hosted on an array controller. The SDS stack should also include the virtualization of all storage infrastructure, so that capacity can be allocated independently of hypervisor silos to any workload in the form of logical volumes. And, of course, any decent stack should include RAW I/O acceleration at the north end of the storage I/O bus to support system-wide performance.
DataCore hasn't engendered a lot of love, however, from the storage or hypervisor vendor communities with its demonstration of 5 million IOPS from a commodity Intel server using SAS/SATA and non-NVMe flash devices, all connected via a Fibre Channel link. But it is well ahead of anyone in this space. IBM may have the capabilities in its Spectrum portfolio to catch up, but the company would first need to get a number of product managers of different component technologies to work and play well together.
Dan Kusnetzky on SANsymphony and Hyper-converged Virtual SAN
Why I love it: DataCore is a company I've tracked for a very long time. The company's products enhance storage optimization and efficiency, and make the most flexible use of today's hyper-converged systems.
The technology supports physical storage, virtual storage or cloud storage in whatever combination fits the customer's business requirements. The technology supports workloads running directly on physical systems, in VMs or in containers.
The company's Parallel I/O technology, by breaking down OS-based storage silos, makes it possible for customers to get higher levels of performance from a server than many would believe possible (just look at the benchmark data if you don't believe me). This, by the way, also means that smaller, less-costly server configurations can support large workloads.
What would make it even better: I can't think of anything.
Next best product in this category: VMware vSAN

Monday 12 December 2016

DataCore Parallel Processing - Applications & I/O on Steroids

Video 360 Overview by Enterprise Strategy Group and ESG Labs
Mark Peters, Senior Market Research Analyst, and Brian Garrett, VP of ESG Labs, discuss DataCore's parallel technologies and market shifts, including the trends toward software-defined storage, hyper-converged and cloud. They describe parallel I/O and its impact and value, based on a leap in performance that goes beyond technologies like all-flash arrays, and its fit for data analytics, databases and more…
Must-see six-minute video below:

Thursday 8 December 2016

DataCore Software Chairman Wins Innovator of the Year at Best in Biz Awards 2016

DataCore Software has been named a gold winner in the Best in Biz Awards Innovator of the Year category, honoring the achievements of DataCore Chairman and Co-Founder, Ziya Aral.
The Best in Biz Awards is the only independent business awards program judged by members of the press and industry analysts. The sixth annual program in North America garnered more than 600 entries, from public and private companies of all sizes and from a variety of industries and geographic regions in the U.S. and Canada.
Much of DataCore’s technological success can be attributed to its Chairman and Co-Founder, Ziya Aral. Responsible for the direction of DataCore’s technologies, advancements and products, Aral is truly a pioneer in the storage industry.
While Aral has long been considered a “guru” in the space – widely published in the field of computer science – there’s no doubt that this past year was among his most innovative. His fundamental role in creating DataCore’s new Parallel I/O technology, the heart of the company’s software products including SANsymphony and DataCore Hyper-converged Virtual SAN, is one of his greatest achievements to date. This says a lot for someone who designed the first high-availability UNIX-based intelligent storage controller and whose development team invented disk-based data-sharing technology.
The problem facing the IT industry today is that in order to keep up with the rate of data acquisition and the unpredictable demands of enterprise workloads, business applications, data analytics, the Internet of Things and highly virtualized environments, systems require ultra-fast I/O response times. However, the required I/O performance has failed to materialize, in large part because software development hasn’t exploited the symmetrical multiprocessing characteristics of today’s cost-effective multicore systems. This is where Ziya Aral comes in…
In the early days of parallel computing, Aral was vice president of engineering and CTO of Encore Computer Corporation, one of the pioneers of parallel computing development. Then, as co-founder of DataCore Software, he helped create the storage virtualization movement and what is now widely known as software-defined storage technology. As Aral and his team of technologists at DataCore Software set out to tackle the I/O bottleneck issue, this unique combination of expertise enabled DataCore to develop the technology required to leverage the power of multicore processors for I/O-intensive applications like no other company can.
Parallel I/O executes independent ‘no stall’ I/O streams simultaneously across multiple CPU cores, dramatically reducing the time it takes to process I/O and enabling a single server to do the work of many.
The result? Parallel I/O not only revolutionizes storage performance, it also takes parallel processing out of the realm of the specialist and makes it practical and affordable for the masses. Unleashing multicore processors from the shackles of being I/O bound opens up endless new possibilities for cognitive computing, AI, machine learning, IoT and data analytics. The technology is proven at customer sites and has shattered the world record for I/O performance and response times using industry-audited and peer-reviewed benchmarks.
“If companies are going to stand out from the crowd and remain competitive in future years, innovation is key. The market is tough and there is no guarantee that today’s dominant players will remain so — unless time and effort are concentrated on research and development,” said Charlie Osborne, ZDNet, one of Best in Biz Awards’ judges this year. “This year’s entries in Best in Biz Awards highlighted not only innovative business practices but the emergence of next-generation technologies which will keep companies current and relevant.”
For a full list of gold, silver and bronze winners in Best in Biz Awards 2016, visit: http://www.bestinbizawards.com/2016-winners.

DataCore’s SANsymphony-V Software Receives Editor’s Choice ‘SVC 2016 Industry Award’ for its Outstanding Contribution to Technology





READING, UK: DataCore have announced that their tenth-generation SANsymphony-V platform has received the coveted Storage, Virtualisation and Cloud (SVC) 2016 Industry Award, chosen by an editorial judging panel and presented at a glittering London ceremony last Thursday evening.

“SANsymphony-V firmly deserves this important industry award on two counts. Firstly, on the maturity and longevity of the platform – DataCore were the first to market software-defined storage back in the early 2000s, bringing software-powered storage to thousands of customers. And secondly, on the immense impact that Parallel I/O processing is having today within data centres, handling compute with unprecedented ease and on a scale never witnessed before,” notes Brett Denly, Regional Director, DataCore Software UK.

Brett collected the Award alongside DataCore’s Neil Crispin and Pierre Aguerreberry.

With unparalleled numbers of vendors to select from within the Storage, Cloud and Virtualisation space, the eminent editorial judging panel of the Digitalisation World stable of titles contemplated long and hard before bestowing the 2016 SVC Industry Award on DataCore Software, noting:

“DataCore Software is a leader in software-defined storage backed by 10,000 customer sites around the world, so they must be doing something right!” said Jason Holloway, Director of IT Publishing at Angel Business Communications, organiser of the SVC Awards.

The SVC Industry Awards continue to set a benchmark for outstanding performance, recognising the contribution of individuals, projects, organisations and technologies that have excelled in the use, development and deployment of IT.


Image: DataCore’s Brett Denly (Regional Director), Neil Crispin (Account Director), Pierre Aguerreberry (Director, Enterprise Sales) collect the Award from SVC organiser & Director of IT Publishing, Jason Holloway.

Tuesday 15 November 2016

The magic of DataCore Parallel I/O Technology


DataCore Parallel I/O technology seems like a kind of magic, and too good to be true… but you only need to try it once to understand that it is real and has the potential to save you loads of money!
Benchmarks vs. the real world
Frankly, I was skeptical at first and I totally underestimated this technology. The benchmark posted a while ago was incredibly good (too good to be true?!). And even though the result wasn’t false, sometimes you can work around limits of the benchmarking suite and build specific, unrealistic configurations that produce numbers that look very good but are hard to reproduce in real-world scenarios.

When DataCore briefed me, they convinced me not with sterile benchmarks, but with real workload testing! In fact, I was particularly impressed by a set of amazing demos I had the chance to watch, where a Windows database server equipped with Parallel I/O technology was able to process data dozens of times faster than the same server without DataCore’s software… and the same happened with a cloud VM instance (which is theoretically the same, since this is a software technology, but matters more than you might think… especially if you look at how much money you could save by adopting it).
Yes, dozens of times faster!
I know it seems ridiculous, but it isn’t. DataCore Parallel Server is a very simple piece of software that changes the way I/O operations are performed. It takes advantage of the large number of CPU cores and the RAM available on a server and organizes all the I/Os in a parallel fashion instead of a serial one, achieving microsecond-level latency and, consequently, a very large number of IOPS.


This kind of performance allows you to build smaller clusters or get results much faster with the same number of nodes… and without changing the software stack or adding expensive in-memory options to your DB. It is ideal for Big Data analytics use cases, but there are also other scenarios where this technology can be of great benefit!
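To make the serial-versus-parallel idea concrete, here is a rough sketch of my own (a hypothetical illustration, not DataCore's code, and far simpler than a real storage stack): the same set of independent I/O requests is handled first one at a time on a single thread, then fanned out across a pool of workers sized to the machine's core count.

```python
# Rough illustration of serial vs. parallel handling of independent I/O requests.
# Hypothetical sketch of the general idea only; not DataCore's implementation.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 64 * 1024              # 64 KB per request
REQUESTS = 256                 # number of independent I/O requests
WORKERS = os.cpu_count() or 4  # one worker per available core

def io_request(path):
    """One self-contained write-then-read 'I/O stream'."""
    data = os.urandom(BLOCK)
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # push the write down to the device
    with open(path, "rb") as f:
        f.read()

with tempfile.TemporaryDirectory() as workdir:
    paths = [os.path.join(workdir, f"req_{i}.bin") for i in range(REQUESTS)]

    start = time.perf_counter()
    for p in paths:            # serial: each request waits for the previous one
        io_request(p)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(io_request, paths))   # parallel: requests overlap
    parallel = time.perf_counter() - start

    print(f"serial:   {serial:.3f} s")
    print(f"parallel: {parallel:.3f} s ({serial / parallel:.1f}x faster)")
```

On most multicore machines the parallel pass finishes several times faster simply because requests no longer queue up behind one another; that, in very crude form, is the effect Parallel Server exploits much deeper in the I/O stack.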
Just software
I don’t want to downplay DataCore’s work by saying “just software”, quite the contrary! The fact that we are talking about a relatively simple piece of software makes it applicable not only to your physical server but also to a VM or, better still, a cloud VM.
If you look at cloud VM prices, you’ll realise that it is much better to run a job on a small set of large-CPU, large-memory VMs than on a large number of SSD-backed VMs, for example… and this simply means that you can spend less to do more, faster. And, again, when it comes to Big Data analytics this is a great result, isn’t it?
Closing the circle
DataCore is one of those companies that has been successful and profitable for years. Last year, with the introduction of Parallel I/O, they demonstrated that they are still able to innovate and bring value to their customers. Now, thanks to an evolution of Parallel I/O, they are entering a totally new market, with a solution that can easily enable end users to save loads of money and get faster results. It’s not magic of course, just a much better way to use the resources available in modern servers.

Parallel Server is perfect for Big Data analytics and makes this kind of performance available to a larger audience, and I’m sure we will see other interesting use cases for this solution over time…


DataCore Hyperconverged Virtual SAN Speeds Up 9-1-1 Dispatch Response

Critical Microsoft SQL Server-based Application Runs 20X Faster

"Response times are faster. The 200 millisecond latency has gone away now with DataCore running," stated ESCO IT Manager Corey Nelson. "In fact, we are down to under five milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond."

DataCore Software announced that Emergency Communications of Southern Oregon (ECSO) has significantly increased performance and reduced storage-related downtime with the DataCore Hyper-converged Virtual SAN.

Located in Medford, Oregon, ECSO is a combined emergency dispatch facility and Public Safety Answering Point (PSAP) for the 9-1-1 lines in Jackson County, Oregon. ECSO wanted to replace its existing storage solution because its dispatch application, based on Microsoft SQL Server, was experiencing latencies of 200 milliseconds at multiple times throughout the day - impacting how fast fire and police could respond to an emergency. In addition to improving response time, ECSO wanted a new solution that could meet other key requirements, including higher availability, remote replication, and an overall more robust storage infrastructure.

After considering various hyper-converged solutions, ECSO IT Manager Corey Nelson decided that the DataCore Hyper-converged Virtual SAN was the only one that could meet all of his technology and business objectives. DataCore Hyper-converged Virtual SAN enables users to put the internal storage capacity of their servers to work as a shared resource while also serving as an integrated storage architecture. Now ECSO runs DataCore Hyper-converged Virtual SAN on a single tier of infrastructure, combining storage and compute on the same clustered servers.

Performance Surges with DataCore
Prior to DataCore, performance -- specifically, latency -- was a problem at ECSO: the organization's previous disk array took 200 milliseconds on average to respond. DataCore has solved the performance issues and fixed the real-time replication issues that ECSO was previously encountering, because its Hyper-converged Virtual SAN speeds up response and throughput with its innovative Parallel I/O technology, in combination with high-speed caching that keeps data close to the applications.
ECSO's critical 9-1-1 dispatch application must interact nearly instantly with the SQL Server-based database. Therefore, during the evaluation and testing period, response time was a vital criterion. To test this, Nelson ran a SQL Server benchmark against his existing environment as well as the DataCore solution. The benchmark used a variety of block sizes as well as a mix of random/sequential and read/write operations to measure performance. The results were definitive -- the DataCore Hyper-converged Virtual SAN solution was 20X faster than the existing environment.
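The case study doesn't publish Nelson's actual benchmark script, so the following is only a hypothetical sketch of that style of test (file name and parameters invented for illustration): it varies the block size, mixes random and sequential access, blends reads and writes, and records per-request latency.

```python
# Hypothetical mixed-profile storage micro-benchmark, for illustration only;
# not the SQL Server benchmark ECSO actually ran.
import os
import random
import statistics
import time

PATH = "testfile.bin"                  # hypothetical target file
FILE_SIZE = 64 * 1024 * 1024           # 64 MB working set
BLOCK_SIZES = [4096, 8192, 65536]      # a variety of block sizes
READ_RATIO = 0.7                       # 70% reads / 30% writes
OPS = 2000

# Pre-create the working set in 1 MB chunks.
with open(PATH, "wb") as f:
    for _ in range(FILE_SIZE // (1024 * 1024)):
        f.write(os.urandom(1024 * 1024))

latencies_ms = []
with open(PATH, "r+b") as f:
    for _ in range(OPS):
        bs = random.choice(BLOCK_SIZES)
        if random.random() < 0.5:                      # sequential access
            offset = f.tell()
            if offset + bs > FILE_SIZE:
                offset = 0
        else:                                          # random access
            offset = random.randrange(0, FILE_SIZE - bs, bs)
        start = time.perf_counter()
        f.seek(offset)
        if random.random() < READ_RATIO:
            f.read(bs)                                 # read
        else:
            f.write(os.urandom(bs))                    # write
            f.flush()
            os.fsync(f.fileno())
        latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"avg latency: {statistics.mean(latencies_ms):.3f} ms")
print(f"p99 latency: {statistics.quantiles(latencies_ms, n=100)[98]:.3f} ms")
os.remove(PATH)
```

Running an identical profile against the old array and the DataCore configuration is what makes a "20X faster" result directly comparable.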

"Response times are faster. The 200 millisecond latency has gone away now with DataCore running," stated Nelson. "In fact, we are down to under five milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond."

Unsurpassed Management, Performance and Efficiency
Before DataCore, storage-related tasks were labor intensive at ECSO. Nelson was accessing and reviewing documentation continuously to ensure that any essential step concerning storage administration was not overlooked. He knew that if he purchased a traditional storage SAN, it would be yet another point to manage.
"I wanted as few ‘panes of glass' to manage as possible," noted Nelson. "Adding yet another storage management solution to manage would just add unnecessary complexity."
The DataCore hyper-converged solution was exactly what Nelson was looking for. DataCore has streamlined the storage management process by automating it and enabling IT to gain visibility into the overall health and behavior of the storage infrastructure from a central console.

"DataCore has radically improved the efficiency, performance and availability of our storage infrastructure," he said. "I was in the process of purchasing new hosts, and DataCore Hyper-converged Virtual SAN fit perfectly into the budget and plan. This is a very unique product that can be tested in anyone's environment without purchasing additional hardware." 

To see the full case study on Emergency Communications of Southern Oregon, click here.


Thursday 3 November 2016

Performance, Availability and Agility for SQL Server

Introduction - Part 1

IT organizations must maintain service level agreements (SLAs) by meeting the demands of applications that run on SQL Server. To meet these requirements they must deliver superior performance and continuous uptime for each SQL Server instance. Furthermore, applications dependent on SQL Server, such as agile development, CRM, BI, or IoT, are increasingly dynamic and require faster adaptability to performance and high-availability challenges than device-level provisioning, analytics and management can provide.
In this blog, which is the first of a three-part series, we will discuss the challenges IT organizations face with SQL Server and a solution that helps them overcome these challenges.
Challenges
All these concerns can be traced to a common root cause: the storage infrastructure. Did you know that 62% of DBAs experience latency of more than 10 milliseconds when writing to disks1? Not only does this slowdown impact the user experience, it also has DBAs spending hours tuning the database. That is the impact of storage on SQL Server performance; so what about its impact on availability? Well, according to surveys, 50% of organizations don’t have an adequate business continuity plan because of expensive storage solutions2. When it comes to agility, DBAs have agility at the SQL Server level, but IT administrators don’t have the same agility on the storage side – especially when they have to depend on heterogeneous disk arrays. Surveys show that a majority of enterprises have 2 or more types of storage and 73% have more than 4 types3.
A common IT trend to solve the performance issue is to adopt flash storage4. However, moving the entire database to flash storage significantly increases cost. To save on cost, DBAs end up with the burden of having to pick and choose the instances that require high performance. The other option to overcome the performance issue is to tune the database and change the queries. This requires significant database expertise, demands time, and means changes to the production database. Most organizations either don’t have dedicated database performance tuning experts, don’t have the luxury of time, or are sensitive to making changes to the production database. This common dilemma makes tuning the database a very far-fetched approach.
For higher uptime, DBAs utilize Failover Cluster Instances (built on what was formerly Microsoft Cluster Service) for server availability, but clustering alone cannot overcome storage-related downtime. One option is to upgrade to SQL Server Enterprise, but it puts a heavy cost burden on the organization (Figure 1). This leaves them with the option of either not upgrading to SQL Server Enterprise or upgrading only a few SQL Server instances. The other option is to use storage-based or third-party mirroring, but neither solution guarantees a Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of zero.
Figure 1
Solution
DataCore’s advanced software-defined storage solution addresses both the latency and uptime challenges of SQL Server environments. It is easy to use, delivers high performance and offers continuous storage availability. DataCore™ Parallel I/O and high-speed ‘in-memory’ caching technologies increase productivity by dramatically reducing SQL Server query times.

Next blog
In the next blog, we will touch more on the performance aspect of DataCore.

Thursday 29 September 2016

Parallel Application Meets Parallel Storage

A shift in the computer industry has occurred. It wasn't a shift that happened yesterday -- the year was 2005, and Moore's Law took a deviation from the path it had been traveling for over 35 years. Up until this point, improved processor performance was mainly due to frequency scaling, but when core speeds reached ~3.8GHz, the situation quickly became cost prohibitive due to the physics involved in pushing beyond this barrier (factors such as core current, voltage, heat dissipation, structural integrity of the transistors, and so on). Thus, processor manufacturers (and Moore's Law) were forced to take a different path. This was the dawning of the massive symmetrical multiprocessing era (or what we refer to today as 'multicore').
The shift to symmetrical multiprocessing (SMP) architectures required a specialized skill set in parallel programming to fully realize the performance increase across the numerous processor resources. It was no longer enough to rely on frequency scaling for better application response times and throughput. More than a decade later, a severe gap persists in our ability to harness the power of multicore, mainly due to either a lack of understanding of parallel programming or the inherent difficulty of porting a well-established application framework to a parallel programming construct.
Perhaps virtualization is also responsible for some of the gap, since the entire concept of virtualization (specifically compute virtualization) is to create many independent virtual machines, each of which can run the same application simultaneously and independently. Within this framework, the demand for parallelism at the application level may have diminished, since the parallelism is handled by the abstraction layer and scheduler within the compute hypervisor (and is no longer as necessary for the application developer -- I'm just speculating here). So, while databases and hypervisors are largely rooted in parallelism, there is one massive area that still suffers from a lack of parallelism: storage.
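A standard way to quantify why that last serial piece matters (my addition, not part of the original post) is Amdahl's law: if a fraction s of a workload remains serialized, for example behind a serial storage path, then the speedup available from N cores is bounded no matter how large N grows:

\[ \text{Speedup}(N) \;=\; \frac{1}{\,s + \frac{1-s}{N}\,} \;\le\; \frac{1}{s} \]

With even 10 percent of an I/O-heavy workload stuck on that serial path (s = 0.1), the overall speedup can never exceed 10x, regardless of how many cores or virtual machines are thrown at it.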

THE PARALLEL STORAGE REVOLUTION BEGINS

In 1998, DataCore Software began work on a framework specifically intended for driving storage I/O. This framework would become known as a storage hypervisor. At the time, the best multiprocessor systems that were commercially available were multi-socket single-core systems (2 or 4 sockets per server). From 1998 to 2005, DataCore perfected the method of harnessing the full potential of common x86 SMP architectures with the sole purpose of driving high-performance storage I/O. For the first time, the storage industry had a portable software-based storage controller technology that was not coupled to a proprietary hardware frame.
In 2005, when multicore processors arrived in the x86 market, an intersection formed between multicore processing and increasingly parallel applications such as VMware's hypervisor and parallel database engines such as Microsoft SQL Server and Oracle. Enterprise applications slowly became more and more parallel, while surprisingly, the storage subsystems that supported these applications remained serial.
The serial nature of storage subsystems did not go unnoticed, at least by storage manufacturers. It was well understood that at the current rate of increase in processor density coupled with wider adoption of virtualization technologies (which drove much higher I/O demand density per system), a change was needed at the storage layer to keep up with increased workloads.
In order to overcome the serial limitation in storage I/O processing, the industry had to make a decision to go parallel. At the time, the path of least resistance was to simply make disks faster, or taken from another perspective, make solid state disks, which by 2005 had been around in some form for over 30 years, more affordable and with higher densities.
As it turns out, the path of least resistance was chosen, either because alternative methods of storage I/O parallelization were unrealized or perhaps because there was an unwillingness in the storage industry to completely recode its already highly complex storage subsystem programming. The chosen technique, referred to as [hardware] device parallelization, is now used by every major storage vendor in the industry. The only problem is that it doesn't address the fundamental problem of storage performance, which is latency.
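It helps to make the throughput/latency distinction explicit with a standard relationship (not from the original post). By Little's law, sustained IOPS equals the number of I/Os kept in flight divided by the average latency of each one:

\[ \text{IOPS} \;=\; \frac{\text{outstanding I/Os}}{\text{average latency per I/O}} \]

Device parallelization raises IOPS by letting a system keep more I/Os outstanding across more (or faster) devices, but each individual request still pays the latency of the serialized software path above those devices; only shortening or parallelizing that path brings the latency itself down.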
Chris Mellor from The Register wrote recently in an article, "The entire recent investment in developing all-flash arrays could have been avoided simply by parallelizing server IO and populating the servers with SSDs."

For the full story, read the complete blog post by our Director, Systems Engineering and Solution Architecture, Jeff Slapp.

Friday 15 July 2016

Gartner Research Report on Software-defined Storage: The CxO View


Download this complimentary Gartner report Software-defined Storage: The CxO View – featuring the Top Five Use Cases and Benefits of Software-defined Storage – and learn how it can help grow your business while reducing TCO.              
Agile, cost-effective data infrastructure for today’s business climate
Welcome Fellow CxO,                                                                                                            
Today’s business climate carries a great deal of uncertainty for companies of all sizes and industries. To seize new business models and opportunities, systems must be flexible and easily adjusted in order to respond to growth spurts, seasonality and peak periods. Likewise, agility helps us mitigate risk. With the sluggish economies across the world, there is a need to be prepared to react quickly to changing fortunes. From cutting back when needed to growing rapidly when opportunities present themselves, companies are less focused on long-term planning in favor of quick decisions and meeting quarterly expectations.

Technology is changing business dynamics as well.  Social, mobile and cloud are impacting companies’ operations, meaning they need to be able to meet changing demand 24x7.  This has put a premium on companies’ ability to react quickly while being able to absorb and analyze all the data they are gathering.
In survey after survey, CxOs highlight the following challenges when it comes to IT:
+ Dealing with the rapid growth of data
+ High cost of storing this data
+ Delivering high-performance applications
+ Meeting Business Continuity / Disaster Recovery requirements

When looking at IT infrastructure, it’s pretty clear that compute and networking have taken the lead in meeting these demanding requirements.  But, storage is a laggard.

Enter Software-defined Storage (SDS).  Aside from being the latest buzzword, what is SDS and will it help companies like yours succeed?

Put simply, SDS delivers agility, faster time to respond to change and more purchasing power control in terms of cost decisions.  Gartner defines SDS as “storage software on industry-standard server hardware [to] significantly lower opex of storage upgrades and maintenance costs… Eliminates need for high-priced proprietary storage hardware”. 

Our own research, based on real-world feedback from thousands of customers, shows a growing interest in SDS. By separating the storage software from storage hardware, SDS is able to:
+ Pool data stores, allowing all storage assets and their existing and new capacity investments to work together, and enabling different storage devices from different vendors to be managed in common
+ Provide a comprehensive set of data services across different platforms and hardware
+ Separate advances in software from advances in hardware
+ Automate and simplify management of all storage


The benefits to your company are potentially enormous.  In a recent survey of over 2000 DataCore customers that have deployed SDS, key findings include:
79% improved performance by 3X or more  
83% reduced storage-related downtime by 50% or more 
81% reduced storage-related spending by 25% or more   
100% saw a positive ROI in the first year

It’s these kinds of results, and the advances in performance and efficiency due to DataCore’s revolutionary Parallel I/O technology within our SDS solution, that have led to over 30,000 customer deployments globally and 96% of CxOs surveyed stating they recommend DataCore SDS.

Sincerely,
George Teixeira,

President and CEO, Co-founder

Tuesday 5 July 2016

DataCore takes storage performance crown

http://www.itwire.com/business-it-news/storage/73497-datacore-takes-spc-1-crown.html
Storage vendor DataCore has established a record for the SPC-1 benchmark, blowing the doors off the previous top performers despite its use of commodity hardware.
A pair of Lenovo X3650 M5 servers running the DataCore Parallel Server software has achieved 5,120,098.98 SPC-1 IOPS.
The previous top performers on this benchmark were the Huawei OceanStor 18800 V3 (3,010,007.37 SPC-1 IOPS) and the Hitachi VSP G1000 (2,004,941.89 SPC-1 IOPS).
While those two systems cost in excess of US$2 million, the DataCore-based system cost just over US$506,000.
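Working the price-performance out from those figures (treating the "in excess of US$2 million" statements as a lower bound):

\[ \frac{\$506{,}000}{5{,}120{,}098.98\ \text{IOPS}} \approx \$0.10/\text{IOPS} \qquad \frac{\$2{,}000{,}000}{3{,}010{,}007.37\ \text{IOPS}} \approx \$0.66/\text{IOPS} \qquad \frac{\$2{,}000{,}000}{2{,}004{,}941.89\ \text{IOPS}} \approx \$1.00/\text{IOPS} \]

So the DataCore configuration comes in at roughly a sixth to a tenth of the cost per SPC-1 IOPS of the previous leaders, and the gap is wider still to the extent their actual prices exceed US$2 million.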
...No other vendor in the top 10 comes close to matching DataCore's average response time under full load. The three DataCore systems in the top 10 managed 0.28, 0.10 and 0.22ms respectively. The only others with sub-millisecond response were the Huawei (0.92ms) and Hitachi (0.96ms) systems mentioned above.
DataCore's high performance comes from taking full advantage of the parallelism available in modern multi-core CPUs, explained vice president of APAC sales Jamie Humphrey.
"We're redefining not only how storage works, but the economies inside the data centre," he toldiTWire.
What other vendors deliver in 48 or 72U of rack space, a DataCore-based system can provide in 14U, he said.
DataCore's approach makes high performance storage available to midmarket organisations as well as large enterprises, ANZ regional sales director Marco Marinelli told iTWire. Furthermore, the company offers a "highly mature product" currently on version 10.
Not every customer needs 5.1 million IOPS, but most would like the reduced latency that comes from being able to fully utilise Fibre Channel's performance. Humphrey gave the example of a mid-sized organisation that just wants faster database access. With conventional systems it would need to over-engineer the storage to get the required response time, but DataCore provides "a very adaptive architecture" that can accommodate various workloads.
Customers need the flexibility to buy what they need, not what they're told they can buy, he said.
And where implementing software-defined storage is usually seen as a "rip and replace" project, that's not the case with DataCore, which can be used to augment an existing environment, bringing together various point solutions in a way their vendors cannot manage.
DataCore has hardware alliances with server, networking and storage vendors, said Humphrey, and publishes reference architectures for assembling the various products.

Thursday 30 June 2016

Performance debate - 'Sour grapes' DataCore chairman fires back

Fresh from the DataCoreLabs blog:
 
Based on the questions raised in recent press articles, it seems some have missed a major aspect that contributed to DataCore's world-record storage performance. Contrary to what some may think, it wasn’t just the cache in memory that made the biggest difference in the result. The principal innovation that provided the differentiation is DataCore’s new parallel I/O architecture. I think our Chairman and Technologist, Ziya Aral, says it well in this excerpt from the recent article in The Register, written by Chris Mellor: The SPC-1 benchmark is cobblers, thunders Oracle veep

The press release that sparked the debate is located here: 
DataCore Parallel Server Rockets Past All Competitors, Setting the New World Record for Storage Performance

Measured Results are Faster than the Previous Top Two Leaders Combined, yet Cost Only a Fraction of Their Price in Head-to-head Comparisons Validated by the Storage Performance Council; See Chart Below:
[Chart: top three SPC-1 results]

Comments from the original article:

The DataCore SPC-1-topping benchmark has attracted attention, with some saying that it is artificial (read cache-centric) and unrealistic as the benchmark is not applicable to today's workloads.

Oracle SVP Chuck Hollis told The Register: "The way [DataCore] can get such amazing IOPS on a SPC-1 is that they're using an enormous amount of server cache."
...In his view: "The trick is to size the capacity of the benchmark so everything fits in memory. The SPC-1 rules allow this, as long as the data is recoverable after a power outage. Unfortunately, the SPC-1 hasn't been updated in a long, long time. So, all congrats to DataCore (or whoever) who is able to figure out how to fit an appropriately sized SPC-1 workload into cache."

But, in his opinion, "we're not really talking about a storage benchmark any more, we're really talking about a memory benchmark. Whether that is relevant or not I'll leave to others to debate."

DataCore's response ... Sour grapes
Ziya Aral, DataCore's chairman, has a different view, which we present at length, as we reckon it is important to understand his, as well as DataCore's, point of view.
"Mr. Hollis' comments are odd coming from a company which has spent so much effort on in-memory databases. Unfortunately, they fall into the category of 'sour grapes'."
“The SPC-1 does not specify the size of the database which may be run and this makes the discussion around 'enormous cache', etc. moot,” continued Aral. “The benchmark has always been able to fit inside the cache of the storage server at any given point, simply by making the database small enough. Several all-cache systems have been benchmarked over the years, going back over a decade and reaching almost to the present day.”

"Conversely, 'large caches' have been an attribute of most recent SPC-1 submissions. I think Huawei used 4TB of DRAM cache and Hitachi used 2TB. TB caches have become typical as DRAM densities have evolved. In some cases, this has been supplemented by 'fast flash', also serving in a caching role."

Aral continued:
In none of the examples above were vendors able to produce results similar to DataCore's, either in absolute or relative terms. If Mr. Hollis were right, it should be possible for any number of vendors to duplicate DataCore's results. More, it should not have waited for DataCore to implement such an obvious strategy given the competitive significance of SPC-1. We welcome such an attempt by other vendors.

“So too with 'tuning tricks,'” he went on. “One advantage of the SPC-1 is that it has been run so long by so many vendors and with so much intensity that very few such "tricks" remain undiscovered. There is no secret to DataCore's results and no reason to try guess how they came about. DRAM is very important but it is not the magnitude of the memory array so much as the bandwidth to it."

Symmetric multi-processing
Aral also says SMP is a crucial aspect of DataCore's technology concerning memory array bandwidth, explaining this at length:

As multi-core CPUs have evolved through several iterations, their architecture has been simplified to yield a NUMA per socket, a private DRAM array per NUMA and inter-NUMA links fast enough to approach uniform access shared memory for many applications. At the same time, bandwidth to the DRAMs has grown dramatically, from the current four channels to DRAM, to six in the next iteration.

The above has made Symmetrical Multi-Processing, or SMP, practical again. SMP was always the most general and, in most ways, the most efficient of the various parallel processing techniques to be employed. It was ultimately defeated nearly 20 years ago by the application of Moore's Law – it became impossible to iterate SMP generations as quickly as uniprocessors were advancing.

DataCore is the first recent practitioner of the Science/Art to put SMP to work... in our case with Parallel I/O. In DataCore's world record SPC-1 run, we use two small systems but no less than 72 cores organized as 144 usable logical CPUs. The DRAM serves as a large speed matching buffer and shared memory pool, most important because it brings a large number of those CPUs to ground. The numbers are impressive but I assure Mr. Hollis that there is a long way to go.

DataCore likes SPC-1. It generates a reasonable workload and simulates a virtual machine environment so common today. But Mr. Hollis would be mistaken in believing that the DataCore approach is confined to this segment. The next big focus of our work will be on analytics, which is properly on the other end of this workload spectrum. We expect to yield a similar result in an entirely dissimilar environment.
The irony in Mr. Hollis' comments is that Oracle was an early pioneer and practitioner of SMP programming and made important contributions in that area.

...
DRAM usage
DataCore's Eric Wendel, Director for Technical Ecosystem Development, added this fascinating fact: "We actually only used 1.25TB (per server node) for the DRAM (2.5TB total for both nodes) to get 5.1 million IOPS, while Huawei used 4.0TB [in total] to get 3 million IOPS."

Although 1.536TB of memory was fitted to each server, only 1.25TB was actually configured for DataCore's Parallel Server (see the full disclosure report), which means DataCore used 2.5TB of DRAM in total for 5 million IOPS compared to Huawei's 4TB for 3 million IOPS...
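Taking those figures at face value, the DRAM efficiency gap is a simple back-of-envelope calculation:

\[ \frac{5{,}120{,}099\ \text{IOPS}}{2.5\ \text{TB}} \approx 2.0\ \text{million IOPS per TB of DRAM} \qquad \text{vs.} \qquad \frac{3{,}010{,}007\ \text{IOPS}}{4\ \text{TB}} \approx 0.75\ \text{million IOPS per TB} \]

That is roughly 2.7 times the IOPS per terabyte of DRAM, which is the point Wendel is making.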

Monday 27 June 2016

DataCore takes World Record for Performance and the SPC-1 crown

Storage vendor DataCore has established a record for the SPC-1 benchmark, blowing the doors off the previous top performers despite its use of commodity hardware.
A pair of Lenovo X3650 M5 servers running the DataCore Parallel Server software has achieved 5,120,098.98 SPC-1 IOPS.
The previous top performers on this benchmark were the Huawei OceanStor 18800 V3 (3,010,007.37 SPC-1 IOPS) and the Hitachi VSP G1000 (2,004,941.89 SPC-1 IOPS).
While those two systems cost in excess of US$2 million, the DataCore-based system cost just over US$506,000.
Two other DataCore systems are in the SPC-1 top 10: a single node configuration of DataCore Parallel Server (1,150,090 SPC-1 IOPS for US$137,000) and the DataCore SANsymphony HA-FC (1,201,961 SPC-1 IOPS for US$115,000).
No other vendor in the top 10 comes close to matching DataCore's average response time under full load. The three DataCore systems managed 0.28, 0.10 and 0.22ms respectively. The only others with sub-millisecond response were the Huawei (0.92ms) and Hitachi (0.96ms) systems mentioned above.
DataCore's high performance comes from taking full advantage of the parallelism available in modern multi-core CPUs, explained vice president of APAC sales Jamie Humphrey.
"We're redefining not only how storage works, but the economies inside the data centre," he toldiTWire.
What other vendors deliver in 48 or 72U of rack space, a DataCore-based system can provide in 14U, he said.
DataCore's approach makes high performance storage available to midmarket organisations as well as large enterprises, ANZ regional sales director Marco Marinelli told iTWire. Furthermore, the company offers a "highly mature product" currently on version 10.
Not every customer needs 5.1 million IOPS, but most would like the reduced latency that comes from being able to fully utilise Fibre Channel's performance. Humphrey gave the example of a mid-sized organisation that just wants faster database access. With conventional systems it would need to over-engineer the storage to get the required response time, but DataCore provides "a very adaptive architecture" that can accommodate various workloads.
Customers need the flexibility to buy what they need, not what they're told they can buy, he said.
And where implementing software-defined storage is usually seen as a "rip and replace" project, that's not the case with DataCore, which can be used to augment an existing environment, bringing together various point solutions in a way their vendors cannot manage.
DataCore has hardware alliances with server, networking and storage vendors, said Humphrey, and publishes reference architectures for assembling the various products.