Monday, 23 December 2013

DataCore deployed at over 10,000 customer sites and selected as Software-Defined Storage Vendor in Advantage Phase of Gartner Group “IT Market Clock”

DataCore Surpasses 10,000 Customer Sites Globally as Companies Embrace Software-Defined Storage

Customers Realize Software, Not Hardware, Key to Increasing Performance, Reducing Cost and Simplifying Management
DataCore has experienced significant customer adoption of its ninth-generation SANsymphony-V platform in 2013. As the company surpassed 10,000 customer sites globally, new trends have materialized around the need for businesses to rethink their storage infrastructures with software architecture becoming the real blueprint for the next wave of data centers.
“The remarkable increase in infrastructure-wide deployments that DataCore experienced throughout 2013 reflects an irreversible market shift from tactical, device-centric acquisitions to strategic software-defined storage decisions. Its significance is clear when even EMC concedes the rapid commoditization of hardware is underway. Their ViPR announcement acknowledges the ‘sea change’ in customer attitudes and the fact that the traditional storage model is broken,” said George Teixeira, president and CEO at DataCore. “We are clearly in the age of software defined data centers, where virtualization, automation and across-the-board efficiencies must be driven through software. Businesses can no longer afford yearly ‘rip-and-replace’ cycles, and require a cost-effective approach to managing storage growth that allows them to innovate while getting the most out of existing investments.”
In addition to the mass customer adoption, DataCore’s software was recently selected as a software-defined storage vendor in Gartner’s “IT Market Clock for Storage, 2013,” published September 6, 2013. The report, by analysts Valdis Filks, Dave Russell, Arun Chandrasekaran et al., identifies software-defined storage vendors in the Advantage Phase and recognizes two main benefits of software-defined storage:
“First, in the storage domain, the notion of optimizing, perhaps often lowering, storage expenses via the broad deployment of commodity components under the direction of robust, policy-managed software has great potential value. Second, in the data center as a whole, enabling multitenant data and workload mobility among servers, data centers and cloud providers without disrupting application and data services would be transformational.”
Three major themes in 2013 shaped the software-defined storage market and defined the use cases of DataCore’s new customers:
Adoption and Appropriate Use of Flash Storage in the Data Center
As more companies rely on flash to achieve greater performance, a unique challenge arises in redesigning storage architectures. While the rule of thumb is that only five percent of workloads require top-tier performance, flash vendors are doing their best to convince customers to go all-flash despite the low ROI. Instead, businesses have turned to auto-tiering software to ensure applications share flash and spinning disk based on the need to optimize both performance and investment. Going beyond other implementations, DataCore has redefined automation and mobility of data storage with a new policy-managed paradigm that makes auto-tiering a true enterprise-wide capability, working across multiple vendor offerings and the many levels and varied mix of flash devices and spinning disks. One such customer is a multinational provider of managed infrastructure services focusing on cloud computing and storage, colocation, connectivity and business continuity for enterprise organizations.
“Flash gives us the greatest levels of performance for our mission-critical applications,” said Jeffrey Slapp, the company’s CTO. “While integral, flash is only a small piece of our storage architecture. To help ensure our applications use the right type of storage for peak performance, we use DataCore's SANsymphony-V platform. The software's intelligence makes sure the more demanding applications use flash and less demanding applications use hard disk. We’ve been able to reduce operational expenses by 35 percent because of the software’s intelligence capabilities, which lets us tackle other key business initiatives with time we never had previously.”
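At its core, auto-tiering of the kind described above is a policy loop that watches per-volume I/O activity and places hot data on faster media. The following minimal sketch is illustrative only: the tier names, thresholds and `Volume` structure are assumptions for demonstration, not DataCore's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical tiers, fastest first; real products track far more attributes.
TIERS = ["flash", "fast_hdd", "capacity_hdd"]

@dataclass
class Volume:
    name: str
    iops: float          # recent I/O activity, e.g. a moving average
    tier: str = "capacity_hdd"

def retier(volumes, hot_threshold=1000.0, warm_threshold=100.0):
    """Assign each volume to a tier based on its observed I/O temperature."""
    for v in volumes:
        if v.iops >= hot_threshold:
            v.tier = "flash"          # demanding workloads land on flash
        elif v.iops >= warm_threshold:
            v.tier = "fast_hdd"       # moderate workloads on fast spinning disk
        else:
            v.tier = "capacity_hdd"   # cold data on cheap, high-capacity disk
    return volumes

vols = [Volume("erp_db", iops=5000), Volume("mail", iops=300), Volume("archive", iops=5)]
retier(vols)
print([(v.name, v.tier) for v in vols])
# → [('erp_db', 'flash'), ('mail', 'fast_hdd'), ('archive', 'capacity_hdd')]
```

In practice such a loop runs continuously and migrates data in sub-volume blocks, but the policy idea is the same: demand determines placement, not which vendor's box the disk lives in.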
Virtualizing Storage while Accelerating Performance for Tier-One Applications
Demanding business applications like databases, ERP and mail systems create bottlenecks in any storage architecture due to their rapid activity and intensive I/O and transactional requirements. To offset this, many companies buy high-end storage systems while leaving terabytes of storage unused. Now, though, businesses are able to combine all of their available storage and virtualize it, independent of vendor – creating a single storage pool. Beyond virtualization and pooling, DataCore customers report faster application response times and significant performance increases – accelerating I/O speeds up to five times.
Pee Dee Electric Cooperative is a non-profit, electric cooperative located in Darlington, South Carolina that supplies electricity and other services to more than 30,000 consumers.
“Tier-one applications demand high performance and in the past this translated directly into expensive and overprovisioned storage,” said Robbie Howle, IT Manager at Pee Dee Electric Cooperative. “To help allocate the necessary storage that meets the performance demands of the application, without buying new storage, we leverage the SANsymphony-V platform as it accelerates and virtualizes all of the available storage within the organization. In fact, DataCore’s software-based approach to storage virtualization has drastically reduced costs by enabling us to virtualize storage devices we already had – eliminating the need to pay upwards of $500,000 for a traditional, hardware-based SAN. Moreover, the benefits of the DataCore virtualized storage infrastructure continue to manifest themselves perpetually and we have been able to get twice as much storage, better performance and achieve high availability in going with DataCore.”
Software Management of Incompatible Storage Devices and Models
Many data centers feature a wide variety of storage arrays, devices and product models from a number of different vendors – including EMC, NetApp, IBM, Dell and HP – none of which are directly compatible. Interestingly, DataCore customers report that the issue of incompatibility generally surfaces more when dealing with different hardware models from the same vendor than between different vendors, and thus have turned to management tools that treat all hardware the same.
Maimonides Medical Center, based in Brooklyn, N.Y., is the third-largest independent teaching hospital in the U.S. The hospital has more than 800 physicians relying on its information systems to care for patients around-the-clock.
“Over the past 12 years, our data center has featured eight different storage arrays and various other storage devices from three different vendors,” said Gabriel Sandu, chief technology officer at Maimonides Medical Center. “By using DataCore’s SANsymphony software and currently with its latest iteration of SANsymphony-V R9, we have been able to seamlessly go from one storage array to the next with no downtime to our users. We are able to manage our SAN infrastructure without having to worry or be committed to any particular storage vendor. DataCore’s technology has also allowed us to use midrange storage arrays to get great performance – thereby not needing to go with the more expensive enterprise-class arrays from our preferred manufacturers. DataCore’s thin provisioning has also allowed us to save on storage costs as it allows us to be very efficient with our storage allocation and makes sure no storage goes unused.”

QLogic certifies adapters for software-defined storage; announces Fibre Channel Adapters and FabricCache are DataCore Ready

QLogic FlexSuite Gen 5 Fibre Channel and FabricCache Adapters certified DataCore Ready

The emerging software-defined storage space hit a new milestone when QLogic announced support for DataCore’s SANsymphony-V virtualization offering.

QLogic® FlexSuite™ 2600 Series 16Gb Gen 5 Fibre Channel adapters and FabricCache™ 10000 Series server-based caching adapters are now certified as DataCore Ready, providing full interoperability with SANsymphony-V storage virtualisation solutions from DataCore Software.

DataCore SANsymphony-V is a comprehensive software-defined storage platform that solves many of the difficult storage-related challenges raised by server and desktop virtualisation in data centres and cloud environments. The software significantly improves application performance and response times, enhances data availability and protection to provide superior business continuity and maximises the utilisation of existing storage investments. QLogic FabricCache adapters and FlexSuite Gen 5 Fibre Channel adapters, combined with SANsymphony-V, allow data centres to maximise their network infrastructure for a competitive advantage.

“QLogic channel partners and end-users can now confidently deploy award-winning QLogic adapters with SANsymphony-V to optimise network performance and make the most of their IT investments,” said Joe Kimpler, director of technical alliances, QLogic Corp. “Customers can choose the best QLogic solution—FabricCache adapters for high-performance clustered caching or FlexSuite Gen 5 adapters for ultra-high performance—to best handle their data requirements.”

“DataCore has a long history of collaborating with QLogic to help solve the storage management challenges of our mutual customers,” said Carlos M. Carreras, vice president of alliances and business development at DataCore Software. “QLogic high-performance Gen 5 Fibre Channel adapters and innovative, server-based caching adapters combine with SANsymphony-V to cost-effectively deliver uninterrupted data access, improve application performance and extend the life of storage investments, while providing organisations with greater peace of mind.”

Wednesday, 11 December 2013

DataCore Software Defined Storage and Fusion-io Reduce Costs and Accelerate ERP, SQL, Exchange, SharePoint Applications

BUHLMANN GRUPPE, a leader in steel piping and fittings headquartered in Bremen, Germany, has implemented a storage management and virtual SAN infrastructure based on DataCore’s SANsymphony-V software. SANsymphony-V manages and optimizes the use of both the conventional spinning disks (i.e. “SAS” drives) and the newly integrated flash memory-based Fusion-io ioDrives through DataCore’s automatic tiering and caching technology. With the new DataCore solution in place, the physical servers, the VMware virtual servers and, most importantly, the critical applications needed to run the business – including Navision ERP software, Microsoft Exchange, SQL and SharePoint – are now failsafe and run faster.

DataCore and Fusion-io = Turbo Acceleration for Tier 1 applications
After successful testing, the migration of the physical and virtual servers onto the DataCore-powered SAN was carried out. A number of physical servers, the Microsoft SQL and Exchange systems and other file servers now access the high-performance DataCore storage environment. In addition, DataCore now manages, protects and boosts the performance of the storage serving 70 virtual machines under VMware vSphere that host business-critical applications – including the ERP system from Navision, Easy Archive, Easy xBase, Microsoft SharePoint and BA software.

"The response times of our mission-critical Tier 1 applications have improved significantly; performance has been doubled by the use of DataCore and Fusion-io," says Mr. Niebur. "The hardware vendor independence provides storage purchasing flexibility. Other benefits include the higher utilization of disk space, the performance of flash based hardware, as well as faster response times to meet business needs that we experience today – and in the future – combined they save us time and money. Even with these new purchases involved, we have realized saving of 50 percent in costs – compared to a traditional SAN solution."


Wednesday, 4 December 2013

A Defining Moment for the Software-Defined Data Center

George Teixeira
For some time, enterprise IT heads heard the phrase “get virtualized or get left behind”, and after kicking the tires, the benefits couldn’t be denied and the rush was on. Now there’s a push to create software-defined data centers. However, there is some trepidation about whether these ground-breaking, more flexible environments can adequately handle the performance and availability requirements of business-critical applications, especially when it comes to the storage part of the equation.

While decision-makers had good reason for concern, they now have an even better reason to celebrate, as new storage virtualization platforms have proven able to overcome these I/O obstacles.

Just as server hypervisors provided a virtual operating platform, a parallel approach to storage is quickly transforming the economics of virtualization for organizations of all sizes by offering the speed, scalability and continuous availability needed for realizing the full benefits of software-defined data centers. Particular additional benefits being widely reported include:
  • Elimination of storage-related I/O bottlenecks in virtualized data centers
  • Harnessing flash storage resources effectively for even greater application performance
  • Ensuring fast and always available applications without a major storage investment
Performance slowdowns caused by I/O bottlenecks and downtime attributed to storage-related outages are two of the foremost reasons why enterprises have held back from virtualizing their tier-1 applications, like SQL Server, Oracle, SAP and Exchange. This fact comes across clearly in the recent Third Annual State of Virtualization Survey conducted by my company.

In the survey, 42 percent of respondents cited performance degradation or an inability to meet performance expectations as an obstacle preventing them from virtualizing more of their workloads. Yet effective storage virtualization platforms are now overcoming these issues by using device-independent adaptive caching and performance-boosting techniques to absorb wildly variable workloads, enabling applications to run faster when virtualized.
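The adaptive caching mentioned here amounts to keeping recently hot blocks in server RAM so that repetitive, bursty virtualized I/O is absorbed before it ever reaches a disk spindle. A toy sketch, assuming a simple LRU eviction policy (commercial platforms use far more elaborate heuristics, and the class and block layout below are purely illustrative):

```python
from collections import OrderedDict

class LRUReadCache:
    """Tiny LRU block cache: hot blocks are served from RAM, cold ones from 'disk'."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store       # dict: block id -> data (stands in for disk)
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark as most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow path: fetch from disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

disk = {i: f"block-{i}" for i in range(100)}
cache = LRUReadCache(capacity=8, backing_store=disk)
for _ in range(3):
    for i in range(4):        # a small, repetitive working set, typical of VM boot storms
        cache.read(i)
print(cache.hits, cache.misses)  # → 8 4  (after the first pass, the hot set lives in RAM)
```

The point the article is making is that this absorption happens in software, independent of the device underneath, so the same mechanism accelerates whatever mix of arrays the data center already owns.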

To further increase tier-1 application responsiveness, companies often spend excessively on flash memory-based solid-state disks (SSDs). The survey also reveals that 44 percent of respondents found disproportionate storage-related costs an obstacle to virtualization. Again, effective storage virtualization platforms now provide a solution with features such as auto-tiering, which optimizes the use of these premium-priced resources alongside more modestly priced, higher-capacity disk drives.

Such an intelligent software platform constantly monitors I/O behavior and can intelligently auto-select between server memory caches, flash storage and traditional disk resources in real-time to ensure the most suitable class or tier of storage device is assigned to each workload based on priorities and urgency. As a result, a software defined data center can now deliver unmatched tier-1 application performance with optimum cost efficiency and maximum return on existing storage investments.

Once I/O intensive tier-1 applications are virtualized, the storage virtualization platform ensures high availability. It eliminates single points of failure and disruption through application-transparent physical separation, stretched across rooms or off-site with full auto-recovery capabilities for the highest levels of business continuity. The right platform can effectively virtualize whatever storage is on a user’s floor, whether direct-attached or SAN-connected, to achieve a robust and responsive shared storage environment necessary to support highly dynamic virtual IT environments.

Yes, the storage virtualization platform represents a defining moment for the software-defined data center. The performance, speed and high availability needed for mission-critical databases and applications in a virtualized environment have been realized. Barriers have been removed, and there’s a clear and supported path to greater cost efficiency.

Still, selecting the right platform is critical to a data center. Technology that is full-featured and has been proven “in the field” is essential. Also, it’s important to go with an independent, pure software virtualization solution in order to avoid hardware lock-in, and to take advantage of the future storage developments that undoubtedly will come.

Monday, 25 November 2013

Software Defined Storage a fancy way to say virtualization, says DataCore Software chairman

Executive Editor Ellen O'Brien of SearchStorage interviews DataCore Software Corp. chairman and founder Ziya Aral on software-defined storage (SDS), how customers get into trouble, and why distinguishing between SDS and virtualization isn't so easy.
I want to get started by talking about software-defined storage. Everyone seems to have a play here. How do you define it?
Ziya Aral: Each time this subject comes up, there is a slight redefinition of the term. But software-defined storage is a continuation of what used to be called storage virtualization. Storage virtualization [by] itself parallels server virtualization. A virtual machine is one of these concepts that basically argues for creating a software emulation layer. It's an approach to breaking hardware away from software. Now, there are a million good reasons to do that. Just thinking about the question for five minutes gets you there. Hardware and software live different lives with different expertise.
Hardware is redefined every 18 months. Software sometimes has a half-life of 10, 15, 20 years. It makes absolute sense to architect software and hardware differently and to have a software layer that is defined in a perfect world, in a perfect universe. [In that layer], storage is perfect. There's an absolutely infinite amount of space. It has all of the positive characteristics that you want for it. It has high availability, data has integrity, [and] it can move around at will. And then there's hardware. Hardware has the real limitations that physics impose. And you would like those two broken apart from each other.
Can you explain to us how software-defined storage is different from standard virtualization?
Aral: No. I can't.
Do you believe it's one and the same?
Aral: I think that it is essentially one and the same. Now, there are always detail differences, but it's like asking the question, 'How is the cloud different from storage service providers [SSPs] of 10 years ago?' Now, conceptually many of the cloud vendors have gone for direct repetition of what the SSP guys used to say. The technology has changed; commercially it's more practical. There are elements to the story that reflect the current thinking on many things. But in essence, the idea is the same. Software-defined storage is similar.
We're still doing emulation. And frankly, emulation is a bad word in our industry because emulation equals 'slow.' It's getting one thing to do what another thing is supposed to be doing. The other thing is several times more complicated. But frankly, emulation is the beginning of storage virtualization, of server virtualization. When the power of the underlying technology, the hardware technology, reaches a certain critical mass, then the software is able to really begin to abstract in a fundamental way from those hardware limitations. And that's the real basis of software-defined storage.
How do you stand out in such a competitive market right now? What's your pitch to customers?
Aral: Typically, their problem is they've got a bunch of controllers from this vendor, a bunch of controllers from that vendor, and they don't work together. So, when these fail, those don't take over. These aren't fast enough. So, we find ourselves selling after the fact in most cases.
Now, that's not always true. DataCore Software's getting to be a reasonably sized company. Today, in many cases, we sell at the architectural level. [However], the bulk of our business still continues to be making up for the damage done previously. In those cases, we're covering disaster recovery where none exists. For example, a customer must have immediate availability of data at all times. OK, terrific, that's great. It's a requirement of many businesses. It is absolutely universal.
They bought a box. The box has two power supplies, two power cords, two sets of logic and one piece of software driving all of it. The box fails. All right, let's think about this. [They say,] 'We should have another box. Better still, another box hooked up to another power grid somewhere. This would be great. It'd be terrific 30 miles across town.' [The vendor says,] 'We don't do that.' [They say,] 'You don't do that? OK. How do I do that then?' Answer: DataCore.
[They might say,] 'But, but, but EMC does that. They do that with two boxes of their own. But we have an old box and the new EMC box. How do I do that?' [We say,] 'Well, you don't.'
That's what we do most of the time. It is a key piece of storage architecture, but eventually software-defined storage eliminates storage as such; it reduces storage devices to peripherals.
Now, EMC builds and sells more of those devices than anyone else around. Why would they talk about software-defined storage? In fact, why would any of the larger hardware vendors talk about it? Well, they're not stupid people. They're smart people, and they understand the science. They talk about it in anticipation of an evolving architecture that leads to many opportunities.
But, I am sad to say, 15 years into it, we're still the radical. This is crazy. This is so obviously true. Yet, we're still the radical.
Where do you see it 15 years from now?
Aral: Storage operating systems will, I believe, take the place of large, high-end storage controllers today. That is, you will build your application infrastructure around them. They will be one logically combined, logically integrated entity. But the entity itself will be a ball that moves. You put it on that platform. You put it over here. You put it in Phoenix. You move it to Tokyo. You outsource by taking that whole ball and putting it out on the cloud. You pull it back in because those damn cloud providers, they're charging me too much and they don't give me the response I want.
This whole thing will be essentially nothing other than the reflection of the business process or the application architecture guy's vision of how he wants this thing to work. It will be a figment of the mind. The fact that it lives on computers -- and where those computers live -- will be an afterthought. It'll be an afterthought for a different guy, like the networking guy that you have that's basically responsible for the wires.
Will that significantly change the role of the storage manager, storage admin?
Aral: Forgive me. Those are our customers, and I love those people. But their role is changing already. That was the most rapidly growing section of the IT industry for a while. It sure isn't today. Those guys are drowning. They're drowning. And they're going to up their game and lower their tolerance of this and that box.
We have a T-shirt at DataCore. It's an internal-use T-shirt. It says 'Hardware Sucks.' This is a new introduction of an old T-shirt. What it means is not that hardware sucks. Hardware is terrific. It does all this stuff. But when you're chained to it, man, it hurts. It hurts. You've got to free yourself of that.

Tuesday, 19 November 2013

DataCore Chairman Ziya Aral on Software Defined Storage and Storage Virtualization

Ziya Aral

View the 5 minute Video

Somehow, it's become a question everyone is looking to answer -- and answer in a way that separates them from all the other people trying to succeed in the same space: What is software-defined storage? Depending on whom you ask, software-defined storage (SDS) is either a meaningless buzz phrase, a new spin on virtualization, or a truly revolutionary way of thinking about the connection between -- and necessary separation of -- hardware and software. In this video, DataCore Software Chairman Ziya Aral offers his take on SDS, and his perspective on how the technology has evolved.

"Each time this subject comes up, there is a slight redefinition of the term, but software-defined storage is a continuation of what used to be called storage virtualization,"

Aral said. It's an approach, simply put, "to breaking hardware away from software." This year, major storage players such as EMC made a play in the SDS market. EMC disclosed its plans for ViPR in May and began shipping it in August. The vendor acknowledged that defining the term was not simple, and executives agreed the term was overused.

Many industry analysts agree with Aral's point that software-defined storage is already living up to the hype, largely because storage and server systems are looking more and more alike. For companies like DataCore, the SDS craze is an opportunity to point out that major vendors are trying to fix a "broken storage model," and that, in DataCore's view, the company's SANsymphony-V storage virtualization software, with no hardware or application-programming-interface restrictions, set the standard for SDS years ago.

Monday, 18 November 2013

DataCore adds scale to SANsymphony-V storage virtualization software


DataCore Software Corp. upgraded its SANsymphony-V storage virtualization software, doubling its scalability and adding self-healing and high-availability features to the platform, which the vendor bills as a more mature and complete alternative to EMC Corp.'s ViPR.

SANsymphony-V brings storage features such as caching, thin provisioning, auto tiering, replication, mirroring and snapshots across different types of storage hardware. It treats storage across hardware platforms as one pool. DataCore began calling its software a storage hypervisor and then software-defined storage long before EMC introduced its ViPR platform earlier this year.

The biggest change with SANsymphony-V 9.0.4 is it scales to 16 nodes instead of the previous eight-node limit. That could bring DataCore into more enterprise implementations.

"Historically, the knock on DataCore has been that it only scales to a point, so this is a big deal," said Jim Bagley, senior analyst at Austin, Texas-based SSG-NOW.

The self-healing capability uses synchronous mirroring of data between nodes to detect nodes that have failed or that have been taken out of service. SANsymphony-V then migrates data on those nodes to other hardware without disrupting applications. Augie Gonzalez, director of product marketing at DataCore, said this feature was added mostly for solid-state drives "that have a higher propensity to flake out on you" than RAID-protected hard disk drives.
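In outline, self-healing of this kind is simple bookkeeping: every virtual disk keeps synchronous copies on more than one node, and when a node drops out, the copies it held are re-created on surviving hardware. A hypothetical sketch (the node and disk names, and the `heal` function itself, are illustrative assumptions, not DataCore's actual mechanism):

```python
def heal(replicas, healthy_nodes, copies=2):
    """replicas maps each virtual disk to the set of nodes holding a synchronous copy.
    Drop copies on failed nodes, then re-mirror onto healthy nodes until each
    disk has `copies` replicas again (or we run out of healthy nodes)."""
    for disk, nodes in replicas.items():
        nodes &= healthy_nodes                 # discard copies stranded on failed nodes
        for candidate in sorted(healthy_nodes):
            if len(nodes) >= copies:
                break
            nodes.add(candidate)               # re-mirror onto a surviving node
        replicas[disk] = nodes
    return replicas

replicas = {"vd1": {"node-a", "node-b"}, "vd2": {"node-b", "node-c"}}
healthy = {"node-a", "node-c", "node-d"}       # node-b has failed or been retired
heal(replicas, healthy)
print({d: sorted(n) for d, n in replicas.items()})
# → {'vd1': ['node-a', 'node-c'], 'vd2': ['node-a', 'node-c']}
```

A real product does this while continuing to serve I/O from the surviving copy, which is what makes the repair invisible to applications.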

The new version can also move virtual disks non-disruptively between storage pools, allowing virtual volumes to be relocated from a test/development pool to a production pool without disrupting access to applications. SANsymphony-V's remote replication will now automatically resynchronize the data on two sites if they fall out of sync, and its audit logs record a time stamp for each administrative action, which can help troubleshoot problems on the SAN.

According to Gonzalez, what has changed the most is awareness of SANsymphony-V. He said DataCore's message was often lost in the wilderness until EMC decided to push software-defined storage and the idea of storage management that works across varied hardware platforms.

"ViPR has put a whole different perspective on what we do and the kinds of people interested in what we do," Gonzalez said. "Our value proposition from the beginning was to provide a uniform platform to control storage devices. That was a hard value to push when storage hardware guys were saying, 'Do this as a hardware function; the function exists just for this device and each device brings its own piece of intelligence.'"

Gonzalez said DataCore has seen a spike in customer interest since EMC began talking about ViPR in May. He said ViPR represents a flip-flop for EMC, which used to position storage as hardware-centric and siloed. "Now they say, 'Why not just plug in this uniform software and plug in the right hardware for the job?'" he said.

DataCore has been saying that for years and has been selling to EMC customers, as well as customers of other large storage vendors. Gonzalez said SANsymphony-V is used mostly by companies that are heavily virtualized and run more than one type of storage hardware.

With ViPR on the market, EMC becomes a competitor to DataCore while also helping to highlight SANsymphony-V's value.

"All the buzz about the advantages of software-defined storage and the goodness that it brings helps DataCore," SSG-NOW's Bagley said. "The primary difference is EMC is all about pushing its own hardware, and DataCore is device-agnostic."

Sunday, 17 November 2013

An Australian municipality continues to grow and save money with DataCore Storage Virtualization Software

"Virtualizing servers was something we knew we had to pursue. And the concepts we were learning made us think that virtualization on the storage side was something we should also implement to avoid having a single point-of-failure in terms of our newly acquired SAN."
- Kevin Chan, IT Infrastructure Manager, Kingston City Council
"The real beauty of DataCore is that it fulfills the full range of storage requirements - such as management, high-availability and disaster recovery - with hardware independent software that runs on any standard Intel or AMD based system."

- Anand Karan, Managing Director, Lincom Solutions
To learn more, please read: The Full Case Study on Kingston City Council

DataCore Partner Lincom Designed and Implemented the Kingston City Council Solution:

Lincom Solutions holds DataCore implementation/engineering certifications and has worked with DataCore for nearly seven years, designing and implementing DataCore solutions for organisations ranging from not-for-profits to global airlines.
One example of such an implementation is Kingston City Council.

Tuesday, 29 October 2013

DataCore chairman Ziya Aral on Software-defined storage and SSD performance

DataCore Software Chairman Ziya Aral, a founder of the company specializing in storage virtualization software, defines software-defined storage in its simplest terms: as a way of breaking the hardware layer away from the software. Aral claims his company has been doing this for years, long before software-defined storage became such a widely used phrase.
In this interview with SearchSolidStateStorage, Aral describes why this approach is a good fit for solid-state drives (SSDs) and how it can help SSD performance.
You have called SSDs a revolutionary, yet complicated, technology. Why is it so important, and why is it so complicated?
Ziya Aral: SSD is important because it is fast. For most of our time in this industry, storage has meant disks. Disks are mechanical and, as a result, they're slow. There's an entire complicated fabric for making them fast -- caching and all kinds of lore that we apply to the devices. All of a sudden, here come SSDs, and they're transparent. A customer can simply stick one in a box -- and all of a sudden, it's faster. You boot Windows. You boot Linux. It happens in seconds. Whoa! This is great. Fast storage is very important.
Storage is the black sheep of the computer family. It's always the one lagging behind, because it's fundamentally mechanical. It's the one that slows down applications. Once upon a time, a third of all applications were storage-bound. Now it's probably 95%. Everything else got faster. We didn't.
The result is that any innovation that speeds up mass storage is wonderful, and SSDs speed up mass storage in a practical, commercially viable way. That's the good news. The bad news is they don't work like disk drives. They are complicated.
So if you want to read from an SSD, that's wonderful, but if you try to write to an SSD, it's not as fast. Worse, SSDs don't like being written to. I'm sorry to all my friends in the industry, but they don't. You know it. I know it. They burn out, and they burn out pretty quickly by disk standards.
So now, here's the complexity. SSDs aren't perfect for everything. They're still too expensive to carry the bulk of your storage. You've still got to think about what you want to do with them in a larger architecture.
Now there are people in the industry, we try to partner with those people, people like Fusion-io, who I think are brilliant. They focus SSDs in a specific application, databases in this case, and they moderate SSDs with two technologies that actually make SSDs perform. One is software. They run caching software in front of their SSD. Second is DRAM. DRAM has a great advantage in that it doesn't care how many times you write to it...
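Aral's point about DRAM in front of flash can be pictured with a toy write-back cache: repeated rewrites are absorbed in DRAM, which doesn't mind being rewritten, and reach the SSD only once at flush time. This sketch is purely illustrative; the class and its behavior are invented here, not Fusion-io's or DataCore's actual design.

```python
class WriteBackCache:
    """Toy DRAM write-back cache in front of an SSD (illustrative only).

    Repeated writes to the same block are absorbed in DRAM and hit the
    SSD once at flush time -- the wear-reducing effect Aral credits to
    putting caching software and DRAM in front of flash.
    """

    def __init__(self):
        self.dram = {}        # cache contents (DRAM doesn't wear out)
        self.ssd = {}         # the flash device behind the cache
        self.ssd_writes = 0   # how many writes actually reach the SSD

    def write(self, block, data):
        self.dram[block] = data   # rewrite in DRAM as often as you like

    def flush(self):
        # One physical SSD write per dirty block, however many rewrites
        # the block absorbed in DRAM.
        for block, data in self.dram.items():
            self.ssd[block] = data
            self.ssd_writes += 1
        self.dram.clear()

cache = WriteBackCache()
for i in range(1000):
    cache.write(42, f"version {i}".encode())  # 1000 logical writes
cache.flush()
assert cache.ssd_writes == 1  # only one physical write hit the flash
```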
...And how big of an impact will software-defined storage have on the SSD market and the performance of solid-state drives?
Aral: Huge. It's huge because it's hard enough to make disk drives work [laughs] and keep the application going, and the whole server virtualization market seems to have gotten stuck on tier-one applications and on virtual desktops.
So along comes this great storage technology. It supplements the conventional disk drive industry in a profound way. Integrated with DRAM, it seems to be able to work, but all of it defeats the schemes that we have been working on for years and years and years.
For example, to get the proper advantage of SSDs, you want the SSD to be local to the application. Most SSDs work that way. Now people are trying to build hybrid arrays and arrays of SSDs. That really sort of loses the advantage.
There's a physics to going over a wire … a lot of the advantage of the SSD disappears in that process. So now we are talking about moving a lot of the software infrastructure into the server and into the network. Well, why [move it] into the network? Because you don't want SSDs written to.
You want the longest delays possible, but long write delays mean that the data has to be in at least two locations, which means that the data has to go over a wire. Two ends of the cycle -- direct reads and asynchronous writes -- are happening at two different locations, and the software is infinitely portable. The software that glues those two phases together has to sit on both sides of the wire.
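The pattern Aral sketches here, direct local reads plus writes queued asynchronously toward a second location, can be pictured in a few lines of illustrative Python. The names and structure are invented for this sketch, not DataCore's implementation:

```python
from collections import deque

class MirroredVolume:
    """Toy model of local reads plus asynchronous remote writes.

    Illustrative only: reads never pay the wire penalty, while writes
    are acknowledged locally and drained to the second location later.
    """

    def __init__(self):
        self.local = {}         # fast local copy (DRAM/SSD in practice)
        self.remote = {}        # second location, reached over the wire
        self.pending = deque()  # writes waiting to cross the wire

    def write(self, block, data):
        # Acknowledge immediately from the local side...
        self.local[block] = data
        # ...and queue the remote copy instead of waiting on the wire.
        self.pending.append((block, data))

    def read(self, block):
        # Direct read: served locally, no wire involved.
        return self.local[block]

    def flush(self):
        # Background task: drain queued writes to the second location.
        while self.pending:
            block, data = self.pending.popleft()
            self.remote[block] = data

vol = MirroredVolume()
vol.write(0, b"hello")
assert vol.read(0) == b"hello"    # readable before any flush
vol.flush()
assert vol.remote[0] == b"hello"  # both locations now hold the data
```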

The Register on DataCore: "Whoo-whoo. What's that sound? It's the DataCore Express passing '10k' Customer Sites"

That's 10,000 customer sites using DataCore Storage Virtualization Software...

Storage-area-network (SAN) software supplier DataCore has, we're told, notched up 10,000 customer sites. DataCore is based in Florida and ships SANsymphony and allied SAN software products that turn standalone and clustered x86 servers and their storage into SANs.

The latest, ninth, version of SANsymphony has increased the cluster node count from 8 to 16, effectively doubling the SAN size. CEO George Teixeira said the plan is to increase it stepwise to 128. Version nine also has a next-generation replication service with faster initialisation of its engine, supports Microsoft's ODX and VAAI, and provides logging and configuration alerts.

Teixeira says DataCore has been about software-defined storage for a long, long time. EMC's ViPR is no real advance on that front, merely being an API translator, and looking like the latest version of Invista to him. In effect, he asserts, EMC is saying to its customers that they should keep on buying separate storage devices and now there's an API translator sitting on top of them.

He decries the notion that array hardware suppliers such as NetApp can do software-defined storage: "Are you kidding me? SANsymphony is portable and general."

Teixeira told us: "We're seeing a lot more sales and interest in DataCore products after EMC's ViPR launch and we're now in more than 10,000 customer sites." That's after fifteen years of being in business.

Wednesday, 16 October 2013

VMworld Europe 2013: DataCore Previews Latest SANsymphony-V Software-Defined Storage Platform Release Optimized for VMware® Infrastructures and Larger Scale Virtualization Deployments

At VMworld Europe, DataCore's long-term customer SES Engineering, the world's largest satellite company, presents the strategic advantages of the DataCore SANsymphony-V software-defined storage platform
VMworld Sponsor DataCore showcases thousands of real-world customer successes, updates storage and VDI solutions and previews latest enhancements at VMworld Europe 2013, Stand S112

DataCore Software, the leader in software-defined storage architecture and premier provider of storage virtualization software, today provided a sneak preview of its latest enhancements to the SANsymphony™-V storage virtualization software at VMworld 2013 (Oct. 15-17, Fira Barcelona, Stand S112). The company is also featuring its recently released DataCore VDS – Virtual Desktop Server, an affordable, high-speed VDI software solution. Visit Stand S112 to learn more about the new SANsymphony-V release, available on November 4, 2013. It will provide much greater scalability, faster next-generation replication services for disaster recovery, automatic self-healing storage capabilities, expanded platform integration with VMware VAAI space reclamation services and an updated vCenter™ plug-in. In addition, the new release supports infrastructure-wide performance tiering and acceleration services that extend across scale-out grids of flash, SSDs and spinning disks from any vendor to accelerate Tier-1 applications and virtualization workloads.

DataCore and VMware Customers Speak Out on Software-defined Storage in a Virtual World
DataCore customer SES Engineering, the world-leading satellite operator and satellite communications company, will be available at the DataCore stand and will be featured in the Solutions Exchange Theatre on Oct. 15th at 4:40 p.m. SES Engineering will share its experience with DataCore storage virtualization software and explain why it sees a software-defined architecture as the only logical way to gain the flexibility and adaptability needed to address its growth and constantly changing needs.

VMware Users “Stop Fighting Your Storage Hardware”
DataCore Software recently launched a European-wide education and information campaign to spread the word to VMware users on the benefits of storage virtualization and software-defined storage architectures. Information resources including informative webcasts, links to video series on storage virtualization, white papers, “How To” guides and regional case studies are now available on DataCore’s “Stop Fighting Your Storage Hardware” site. The site showcases how real-world customers have achieved a software-defined data center that can maximize business productivity and capitalize on their current hardware investments.

Thousands of data centers around the globe deploy DataCore to cost-effectively meet the high-performance and high-availability storage requirements of their physical and virtual servers. "By making use of the DataCore solution within the virtual infrastructure created by VMware vSphere and VDI, we can not only ensure that we meet these corporate requirements, but also guarantee optimal cost-efficiency as a result of the hardware independence of the solution. This affects both the direct investment and the indirect and long-term cost of refreshes, expansions and added hardware acquisitions. We have thus created the technical basis for our external IT services, and within this framework we are creating the most flexible and varied range of cloud services possible," states Dr. Karl Manfredi, CEO at Brennercom, a leading ICT company.

DataCore SANsymphony-V is Optimized for VMware Virtual Infrastructure Deployments
DataCore SANsymphony-V delivers a simple and scalable high-availability solution to meet vSphere™ shared storage requirements. The hardware-agnostic storage virtualization software abstracts and pools internal and external disks along with flash/SSDs to yield lightning-fast response, non-stop access and optimal use of capacity. Notable features include enhanced support for fast in-memory optimizations using DRAM caching, auto-tiering across flash technologies, metro-wide failsafe synchronous mirroring, higher-speed replication, and new services for disaster recovery and remote-site recovery. An updated plug-in for VMware vCenter allows users to non-disruptively provision, share, clone, replicate and expand virtual disks among physical servers and virtual machines. SANsymphony-V is the ideal software-defined storage platform to meet the demanding availability and performance needs of VMware virtual infrastructures.

New SANsymphony-V Features Preview
The SANsymphony-V release available on November 4, 2013 will include: 
  • Twice the scaling with support for up to 16 federated nodes
    – Scales performance and HA; federated nodes can span Metro-wide distances
  • Next generation remote replication services
    – Speeds up performance for disaster recovery
  • New self-healing storage services
    – Reduces business downtime from hardware failures
  • New failsafe non-disruptive data mobility services between storage pools
    – Increases overall resiliency
  • New Hypervisor integration services
    – Offloads hosts to accelerate performance and data mobility operations (VAAI, ODX)
  • New space reclamation services
    – Enables IT organizations to reclaim unused capacity they already own
  • New change-control management and audit trail logs
    – Helps prevent avoidable errors and simplifies troubleshooting 
“The growing momentum for software-defined storage has encouraged VMware customers to consider storage virtualization software solutions that address performance and availability concerns. But while hypervisor-based and device-dependent storage virtualization solve some problems, agnostic, software-only storage virtualization like DataCore SANsymphony-V has been proven to deliver the highest value by working across any physical or virtual server and any collection of spinning disks and flash storage devices. We are pleased to showcase our customer-proven value proposition and our SANsymphony-V software-defined storage platform at VMworld Europe 2013,” says Christian Hagen, vice president EMEA at DataCore Software.

DataCore VDS 2.0 – The VDI Solution
Visit the DataCore stand (S112) to find out more about the recently announced DataCore VDS 2.0 software.

"DataCore VDS overcomes the VDI adoption issues and addresses the major market need for affordable desktop virtualization solutions in a climate where smaller budgets and the European financial crisis are impacting all IT decisions. DataCore VDS 2.0 removes many of the most painful implementation obstacles while significantly reducing the cost per virtual desktop instance," states Mr. Hagen.

Special Offer to VMware vExperts, VCPs and VCIs
DataCore Software is offering a free one-year, Not-for-resale (NFR) license to VMware vExperts, VMware Certified Professionals and VMware Certified Instructors. The NFR licenses can be used in non-production environments, for evaluations, demonstrations and training purposes. VMware vExperts, VCPs and VCIs can access and download the software from the DataCore website at

Tuesday, 8 October 2013

Software-defined storage and choices; Automated tiering is the hottest storage technology in 2013 – TheInfoPro

It was interesting to see the recent results from TheInfoPro, a service of 451 Research LLC. Obviously, it was great to read that "software-defined storage [is] transforming provisioning and capacity choices"; this is why I wrote 36 Reasons To Adopt DataCore Storage Virtualization and Software-defined Storage Architectures. However, what struck me the most was how automated tiering has become the hot technology of 2013.

I believe this is also tied to the growing use of flash/SSD technologies. Many vendors offer what I consider a simple two-level version of auto-tiering that works between flash and disk drives, but it is almost always restricted to storage they control in their own proprietary arrays.

At DataCore we see auto-tiering as a key architectural element of software-defined storage, so it is not limited to running within a single box of disks or on a vendor-specific storage array system. Unlike the current offerings out there (e.g. HP 3PAR, Dell Compellent), DataCore takes auto-tiering to the infrastructure-wide level: our software spans up to 15 tiers of storage, from flash to spinning disk drives to cloud storage, and works across a wide mix of otherwise incompatible platforms, basically any model or vendor of storage device.

Yes, we can auto-tier, but we do it across all the storage assets. For example, it can run over a Fusion-io card on the application server as well as over HP 3PAR, Dell Compellent, SATA capacity disk systems and even cloud storage. Thousands of customers are using it, so it is proven technology.
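To make the idea concrete, here is a toy sketch of tier promotion and demotion driven by access frequency. The tier names, thresholds and one-tier-at-a-time policy are invented for illustration; they are not DataCore's actual heuristics.

```python
# Toy auto-tiering: promote hot blocks toward fast tiers, demote cold ones.
# Tier list and access-count thresholds are invented for illustration.
TIERS = ["flash", "15k-disk", "sata", "cloud"]  # fastest -> cheapest

def retier(blocks, hot_threshold=100, cold_threshold=10):
    """blocks: {block_id: {"tier": str, "accesses": int}} -> new placement."""
    placement = {}
    for block_id, info in blocks.items():
        idx = TIERS.index(info["tier"])
        if info["accesses"] >= hot_threshold and idx > 0:
            idx -= 1          # hot: promote one tier up
        elif info["accesses"] < cold_threshold and idx < len(TIERS) - 1:
            idx += 1          # cold: demote one tier down
        placement[block_id] = TIERS[idx]  # warm blocks stay put
    return placement

blocks = {
    "a": {"tier": "sata", "accesses": 500},    # hot -> promoted
    "b": {"tier": "flash", "accesses": 2},     # cold -> demoted
    "c": {"tier": "15k-disk", "accesses": 50}, # warm -> unchanged
}
print(retier(blocks))  # {'a': '15k-disk', 'b': '15k-disk', 'c': '15k-disk'}
```

In an infrastructure-wide system the tier list would span devices from many vendors rather than shelves inside one array, which is the distinction the post is drawing.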

Check out the DataCore Auto-tiering page for more information.

Anyway, here is a quote and a summary of the latest report that got me motivated to do this blog post:

"There are two major forces working on storage today - solid-state transforming storage architectures in datacenters, and software-defined storage transforming provisioning and capacity choices," said Marco Coulter, VP, TheInfoPro. "As enterprises move from solution designers to service brokers, the conversations with business partners are evolving from bits and bytes to services and APIs."

TheInfoPro released their latest Storage Study, revealing that enterprise storage capacity is more than doubling every two years, exceeding the rate of Moore's Law.

Consequently, automated tiering is the hottest storage technology in 2013, as it helps keep budgets under control by enabling the use of lower-cost capacity. As enterprises struggle to define an external cloud strategy, the on-premises cloud model is gaining favor.

Conducted during the first half of 2013, the study identifies the storage priorities of leading organizations to provide business intelligence about technological roadmaps, budgets and vendor performance. This semiannual study is based on live interviews with IT professionals and primary decision-makers at large and midsize enterprises in North America and Europe.

Highlights from the Storage Study include:
  • Enterprise storage capacity has more than doubled in the past two years.
  • Solid-state or flash is mainly a hybrid array choice for enterprises, with 37% in use, compared with only 6% for all-flash arrays.
  • Decoupling of the storage hardware from the software controller presents a real market opportunity for software vendors looking to capitalize on enterprise interest in software defined storage. 31% of respondents viewed the coupling of storage controller hardware and software as very important or extremely important.
  • Architecting for performance is often reactive, as 48% of large and midsize enterprises have no specific IO/s targets for applications.
  • Internal cloud storage is the second most likely technology to be added to 2013 storage budgets as enterprises remain cautious about external cloud storage, which they accept as useful for email but not for general capacity. The increased demand for internal cloud storage solutions helps storage vendors as they seek to compete with Amazon S3.
  • Object storage (at the heart of many service-provider cloud storage offerings) faces an education barrier in enterprises, since most still see it as a compliance solution.
  • Enterprises are staying with FC for their core storage networks, with FCoE seen as an 'edge' solution and IB as a niche for select HPC deployments (12% in use).

Thursday, 19 September 2013

The quest for software-defined storage has branched into three distinct approaches to storage virtualization...

From Enterprise Storage Forum -The Virtual San Buying Guide:

“The quest for software-defined storage has branched into three distinct approaches to storage virtualization: hypervisor-based, device-dependent, and agnostic, software-only,” states Steve Houck, COO, DataCore Software. “vSAN is an example of hypervisor-based, EMC ViPR and IBM SVC are device-dependent, and DataCore SANsymphony-V is an example of software-only storage virtualization.”
DataCore has been in the virtual storage space for a long time. Its virtual SAN platform is called SANsymphony-V, which is hardware-agnostic storage virtualization software that abstracts and pools internal and external disks along with flash/SSDs. The company said thousands of data centers around the globe deploy DataCore.

Features include in-memory DRAM caching, auto-tiering, synchronous mirroring, asynchronous replication, thin-provisioning, snapshots, and CDP. Pricing for an HA (high-availability) cluster of 2 SANsymphony-V nodes is under $10,000, including 24x7 annual support.

“The SANsymphony-V platform is designed for mission-critical, latency-sensitive applications in companies large and small, where downtime is not tolerated,” said Steve Houck, COO, DataCore Software.

He believes that VMware vSAN takes advantage of inexpensive internal disk drives, which drives more customers, especially SAN-averse ones, to virtualize their apps. However, he added that it requires the use of more expensive flash memory and a minimum of three hosts, and doesn't work with physical machines that may also be in the mix, or with other hypervisors such as Microsoft Hyper-V.

SANsymphony-V, said Houck, is transparent to applications, so no code changes are necessary. It installs on a virtual machine in two or more nodes to create a tiered storage pool. The software can also run on dedicated x86 machines to offload all storage-intensive tasks from the application hosts.

Wednesday, 18 September 2013

Software Update Notice: VMware vSphere Plug-in v1.3 for SANsymphony-V released

The following software update is now generally available for download from the DataCore Software Customer Support website:

Software: VMware vSphere Plug-in
Version: 1.3 (Replaces version 1.2)
Enhancements: Several important bug fixes and support for Internet Explorer 10.0

To download the updated plug-in from the DataCore Software Customer Support website:
Go to Software Downloads and select the following options from the pull down menus when prompted:
Select a Product Line to View Related Software Downloads: SANsymphony-V
Select a Version: 9.0
Select Software: Plug-in: VMware vSphere Plug-in for SANsymphony-V

Please download the Release Notes for VMware vSphere 1.3 Plug-in from the same page for more details on the Fixes and Enhancements included in this update.

You may also download the updated plug-in from
Select SOFTWARE > Closer Technical Look > DOWNLOADS & TEST DRIVE
Scroll down to VMware vSphere Management Plug-in for SANsymphony-V

The direct link to the download request form is:

Tuesday, 17 September 2013

36 Reasons To Adopt DataCore Storage Virtualization and Software-defined Storage Architectures

Learn the Reasons Why Thousands of Customers Have Chosen DataCore Software

“When asked for 10 reasons, I just couldn’t stop…”
–George Teixeira, CEO & President, DataCore Software

1. No Hardware Vendor Lock-In Enables Greater Buying Power
With DataCore, you gain the freedom to buy from any hardware vendor, and aren’t forced to pick and stick with one. This is a major advantage in negotiating your best deal and “right sizing” your purchase to buy what you need instead of what just one vendor has to offer. It’s easy to add, replace or migrate across storage platforms with a minimum of IT pain and best of all you can avoid disrupting users and business applications. Purchasing flexibility is a major advantage of a software-defined infrastructure.

2. Future-Proof Your Storage with Software-defined Flexibility; Be Ready for Whatever is Next
A flexible DataCore software-defined storage infrastructure also lets you bring in new storage and new storage technologies non-disruptively, so you are ready for whatever comes next. Whether it is incorporating new flash storage or cloud storage into your infrastructure or gaining the competitive advantage that the next new storage innovation brings, your infrastructure is ready to bring it in and put it to work. Hardware vendors, on the other hand, have no interest in letting you bring in new technology to work with the storage you have already purchased from them. Their reason for being is to sell you, year after year, on ripping and replacing your last purchase from them with the newer model.

3. Get the Most from What You Already Own Before Buying More
Virtualization = Highly Efficient Utilization. Flexible and smart, DataCore virtualization software lets you optimize the utilization of your storage resources, extending the time to hardware refresh. This is another huge advantage over a hardware-defined infrastructure that forces you to over provision, oversize and buy more hardware to meet unknown demands before you have fully utilized what you already have. When you do buy new storage hardware, the DataCore advantage is that there is no waste—you can fully utilize what you buy so you buy only what you need.

Read more: Reasons and Benefits of Storage Virtualization and Software-defined Storage Architectures.

Monday, 2 September 2013

A Virtual Discovery Delivers Ongoing Savings: Australian municipality grows and saves with DataCore Storage Virtualization Software

"The real beauty of DataCore is that it fulfills the full range of storage requirements - such as management, high-availability and disaster recovery - with hardware independent software that runs on any standard Intel or AMD based system."
- Anand Karan, Managing Director, Lincom Solutions

To learn more, please read the full case study on Kingston City Council:

DataCore Partner Lincom Designed and Implemented the Kingston City Council Solution:

Lincom Solutions holds DataCore implementation and engineering certifications and has worked with DataCore for nearly seven years, designing and implementing DataCore solutions for organisations ranging from not-for-profits to global airlines.
One example of such an implementation is Kingston City Council.
For more information on Lincom, please see:

Friday, 30 August 2013

Vancouver School Board and Tiering; DataCore Automatically Triages Storage For Performance

The Vancouver School Board (VSB) in Canada has been using storage virtualization for over 10 years.

...As the start of a new school season looms, the Vancouver School Board (VSB) is looking to automated storage tiering technology to help its systems cope with spikes in workload and storage demand.

Serving a staff of 8,000 and students numbering more than 50,000, the VSB network infrastructure is subjected to storage and workload demands that can rapidly peak at any given time during the school season. The IT department supports the email accounts of employees, human resources applications, financial systems, student records, the board’s Web site, as well as applications needed for various subjects and school projects.

“Getting optimum use of our server network is critical in providing optimum system performance and avoiding system slowdowns or outages,” said Peter Powell, infrastructure manager at VSB.

Rapidly moving data workload from one server that may be approaching full capacity to an unused server in order to prevent an overload is one of the crucial tasks of the IT team, he said.

“The biggest single request we get is IOs for storage, getting data on and off disks,” said Powell. “We normally need one person to spend the whole day monitoring server activity and making sure workload is assigned to the right server. That’s one person taken away from other duties that the team has to do.”

Recently, however, the VSB IT team has been testing the new auto tiering feature that was introduced by DataCore Software in its SANsymphony-V storage virtualization software.

The VSB has been working with DataCore for more than 12 years on other projects. The board based its storage strategy on SANsymphony-V back in 2002, when the IT department found it was getting harder to keep up with the storage demands of each application. SANsymphony-V's virtualization technology allowed the IT team to move workload from servers that were running out of disk space to servers that had unused capacity.

“In recent test runs we found that the system could be configured to monitor set thresholds and automatically identify and move workloads when prescribed limits are reached,” Powell said. “There’s no longer any need for a person to monitor the system all the time.”
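The monitor-and-migrate loop Powell describes can be pictured roughly like this. The threshold, server names and the unit of "workload" are hypothetical illustrations, not VSB's actual configuration:

```python
def rebalance(servers, threshold=0.8):
    """Move workload units off servers whose utilization crosses a limit.

    servers: {name: {"capacity": int, "used": int}}. A purely
    illustrative sketch of threshold-triggered automatic migration.
    """
    moves = []
    for name, s in servers.items():
        while s["used"] / s["capacity"] > threshold:
            # Pick the least-utilized server as the migration target.
            target = min(servers,
                         key=lambda n: servers[n]["used"] / servers[n]["capacity"])
            if target == name:
                break  # nowhere better to move the workload
            servers[name]["used"] -= 1
            servers[target]["used"] += 1
            moves.append((name, target))
    return moves

servers = {"s1": {"capacity": 10, "used": 9},   # over the 80% limit
           "s2": {"capacity": 10, "used": 2}}   # plenty of headroom
moves = rebalance(servers)
assert servers["s1"]["used"] / servers["s1"]["capacity"] <= 0.8
```

The point of the article is that this loop runs continuously in software instead of occupying a staff member for the whole day.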

Automatic Triage for Storage: Optimize Performance and Lower Costs
George Teixeira, CEO of DataCore, separately stated that the auto-tiering feature acts “like doing triage for storage.”

“SANsymphony-V can help organizations optimize their cost of storage for performance trade-offs,” he said. “Rather than purchase more disks to keep up with performance demand, auto-tiering helps IT identify available performance and instantly moves the workload there. DataCore prioritizes the use of your storage to maximize performance within your available levels of storage."

In terms of capacity utilization and cost, DataCore's ability to auto-provision and pool capacity from different vendors is key. He said some surveys indicate that most firms typically use only 30 per cent of their existing storage, but DataCore can help businesses use as much as 90 per cent of it.

Post based on the recent article in ITworld: Vancouver school board eyes storage auto-tiering

Wednesday, 28 August 2013

Financial services provider EOS benefits from a storage virtualization environment

EOS Group turned to Fujitsu and DataCore to modernize their storage infrastructure.
Also please see: the Fujitsu case study on EOS Group, posted in German.

The global EOS Group, one of Europe's leading providers of financial and authorization services for the banking and insurance sectors, employs about 5,000 people and has experienced rapid growth over the last few years. To meet increasing business demands, the IT team at its Hamburg headquarters faced a number of challenges: processing and verifying sensitive customer data, further automating the acquisition of new clients, tracking and maintaining accounting information, alerting clients to important follow-ups and reminders, processing payables and collecting receivables, and buying and selling portfolio items. All of these services are top-priority jobs for the headquarters IT team.

The Challenge:
To get control of its ever-growing amounts of data, EOS decided it needed a flexible, extensible and easy-to-manage solution to improve its storage and storage area network (SAN). It also needed very high availability, an aspect that would play a critical role in the decision. Naturally, EOS was also looking to save money on operational costs, with a simultaneous reduction in energy and maintenance costs.

They needed a massive expansion of their data centers to keep up with growth, and they knew they had to modernize their systems to achieve greater productivity. They decided a completely new solution was necessary.

The Solution:
EOS opted for a storage virtualization software solution from DataCore to virtualize, manage and work across its Fujitsu ETERNUS DX80 storage systems and PRIMERGY servers within the SAN.

The result: The EOS Group benefited from increased performance, the highest levels of data protection and reduced maintenance costs.


Eight Fujitsu ETERNUS DX80 storage systems, each with 36 terabytes of net disk space, and four Fujitsu PRIMERGY RX600 servers allowed the EOS Group to scale well beyond its original 30 terabytes of capacity. After the expansion, EOS now manages 288 terabytes of net capacity under DataCore storage virtualization control, and further data growth is no longer a problem since the architecture scales easily. Thanks to the DataCore virtualization solution, all storage can now be managed in a common way and scaled independently of the underlying physical hardware.

What else did EOS want? Greater business continuity, resiliency against major failures and disaster recovery, of course. Fujitsu and DataCore therefore combined to achieve an even higher level of protection and business continuity. The system was set up to mirror data to a remote site across town, kept fully in sync with the main site. The setup takes advantage of DataCore's ability to stretch mirrors at high speed: the software automatically updates and mirrors the data to create identical virtual disks that reside on different drives and can be arbitrarily far apart. With DataCore, EOS has achieved its goal of greater business continuity and disaster recovery.
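Conceptually, synchronous mirroring means a write is considered complete only once both sites hold the data, so the remote copy is always identical and ready for failover. Here is a toy sketch of that behavior, with invented names and none of the real link or failure handling:

```python
class SyncMirror:
    """Toy synchronous mirror: a write succeeds only when BOTH sites commit.

    Illustrates the behavior described in the EOS setup; site names and
    structure are invented, and real products handle link failures,
    ordering and resynchronization that this sketch omits.
    """

    def __init__(self):
        self.primary = {}    # main site (Hamburg, in the EOS example)
        self.secondary = {}  # remote site across town

    def write(self, block, data):
        # Commit to both sites before acknowledging the write.
        self.primary[block] = data
        self.secondary[block] = data  # in reality: over a stretched link
        return self.primary[block] == self.secondary[block]

    def failover_read(self, block):
        # If the primary site is lost, the secondary already holds
        # an identical copy of every acknowledged write.
        return self.secondary[block]

m = SyncMirror()
assert m.write(7, b"ledger") is True
assert m.failover_read(7) == b"ledger"  # identical copy at the remote site
```

This is what distinguishes synchronous mirroring from the asynchronous replication used for longer distances: the acknowledgement waits for the remote commit, trading some write latency for a zero-data-loss copy.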