Thursday, 25 April 2013

Are you a DCIE? Please join the DataCore Certified Technical Professionals Group on LinkedIn

As a DataCore Certified Implementation Engineer (DCIE) you may join the DataCore LinkedIn group to network and exchange best practices with other DCIEs.

Also, as a DCIE you are eligible for additional membership benefits, including:

· Use of the DCIE logo on your business cards, letterhead, resumes and social media profiles

· Opportunities to network, share and exchange best practices with other DCIE members

· The ability to promote yourself as a DataCore storage virtualization professional

· Exclusive invitations to DCIE round tables and technical events

Please connect with other DCIE professionals by joining: DataCore Certified Technical Professionals Group

Wednesday, 24 April 2013

How a virtualized storage infrastructure improves performance

Is using flash a good way to improve performance in a virtual environment? Can a virtualized storage infrastructure be an option?

By Jon William Toigo. Read the full article at: http://searchvirtualstorage.techtarget.com/answer/How-a-virtualized-storage-infrastructure-improves-performance

Flash does some wonderful things. I like flash as cache in the hybrid role, personally. You can expedite a certain number of disks by taking files -- or data sets -- that are being accessed a lot on the disk drive and writing them to a flash device temporarily, serving those requests out of the flash layer. That's a perfectly acceptable use of flash, and I've seen it used very wisely…. I'm not down on flash completely, but I think there's a lot of oversell right now around the idea of tier-zero arrays and the idea of having all-flash arrays.

For the money, I could probably do things a lot faster by virtualizing my storage. And that sounds weird; I'm saying don't virtualize or be cautious about how you virtualize your servers, but I'm not saying be cautious about how you virtualize your storage. Storage is a lot easier to virtualize than the workloads that run on servers…

Now, if I can go above all that, go above the layers of value-added software and just go across the storage hardware itself, everybody is selling a box with Seagate hard drives. There's no difference between brand X and brand Y at the hardware level. So we can virtualize that. We can surface those value-added functions that maybe you want to preserve at that virtualization layer and basically spread that goodness to all the storage rigs, not only the ones that have the name X, Y or Z on the side of the box. That drops your storage costs considerably.

The best implementation of a virtualized storage infrastructure I've seen is at DataCore Software in Fort Lauderdale. I use their DataCore SANsymphony R9 product on about 4 petabytes of storage that I have in my lab. So basically, what we do is we virtualize the storage, which is nothing but aggregating it all under the control of the software controller. I have a dual redundant server that runs this controller software so there's failover in case one of the server heads dies, and I carve virtual volumes out of all the massive amount of disk I've got and read and write to them through a layer of memory cache.

And then, instead of using flash, I use DRAM, which is much more resilient than flash and doesn't lose half of its performance when you write to it. The first time you're writing to your flash card, you're going to get full speed out of it. The second time, you have to erase the cells that have been written before you write to them again. So, you have a decrease of 50% in the performance of a flash card.

There have been some other kinds of technologies introduced to try and spoof that a little bit, but the bottom line is that's how flash works. So flash is a bit of an oversell…
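To put rough numbers on that erase-before-write penalty, here is a back-of-envelope sketch in Python. It is purely illustrative: the throughput figures are hypothetical placeholders, and only the roughly 50% steady-state penalty comes from the description above.

```python
# Toy model of the erase-before-write penalty described above.
# The MB/s figures are hypothetical placeholders, not benchmarks;
# only the ~50% steady-state penalty comes from the discussion.

FRESH_FLASH_MBPS = 500.0   # first-pass writes to never-used cells
ERASE_PENALTY = 0.5        # cells must be erased before rewriting
DRAM_MBPS = 5000.0         # a DRAM write cache absorbs writes at memory speed

def steady_state_flash_mbps(fresh_mbps: float, penalty: float) -> float:
    """After every cell has been written once, each new write first
    pays the erase cost, cutting effective throughput roughly in half."""
    return fresh_mbps * penalty

if __name__ == "__main__":
    aged = steady_state_flash_mbps(FRESH_FLASH_MBPS, ERASE_PENALTY)
    print(f"fresh flash writes:        {FRESH_FLASH_MBPS:7.0f} MB/s")
    print(f"steady-state flash writes: {aged:7.0f} MB/s")
    print(f"DRAM cache writes:         {DRAM_MBPS:7.0f} MB/s "
          f"({DRAM_MBPS / aged:.0f}x aged flash)")
```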

So do the math. Go into it with your eyes wide open and tell the vendor you want a test first. The other nice thing about a virtual storage infrastructure is that when you want to move workloads around using vMotion or something, you can move the data that goes with it around too -- that virtual volume can move with the workload, so it's going to save you a lot of money in terms of your basic storage spend, and a lot of money in terms of how many times you need to replicate the same data to take advantage of all those cool things they talked about in the VMware brochure.

Saturday, 20 April 2013

Software-defined Storage Makes Economic Sense and Frees You From Hardware-defined Lock-in!


-George Teixeira, CEO & President, DataCore Software

What does “software-defined” really mean?
Download: "Software-defined Storage" Paper

Beware of Storage Hardware Vendors’ Claims That They Are “Software-defined”
It has become “it’s about the software, dummy” obvious. But watch what the sales pitches claim. You’ll see storage hardware heavyweights leap onto the “we are really software” bandwagon, claiming that they are “software-defined storage” and hoping to slow the wheels of progress through their marketing talk. But it’s the same old song they sing every year. They want you to ignore the realities driving today’s data centers and diverse IT infrastructures – they want you to not change your past buying practices – they want you to buy more hardware! They want you to forget that the term “software-defined” is being applied selectively to what runs only on their storage hardware platforms, and that when you buy their feature set it will not work across other vendors’ components and storage systems. Beware: their clever sales pitches may sound like “software-defined,” but the end objective is clear: “buy more hardware.”

Software is the basis for flexibility, and smart storage virtualization and management software can improve the utilization of storage resources so that you optimize and right-size to meet your needs. Hardware-defined is by definition rigid and inflexible; it therefore leads to purchasing more than you need, since you don’t want to underestimate your requirements. Software also allows the latest innovations, like flash-memory SSDs, to be easily incorporated into your infrastructure without having to “rip and replace” your existing storage investments.

In other words, hardware-defined is the mantra for storage hardware vendors who want you to “buy more hardware” and repeat the same process every year versus getting the most value from your investments and “future-proofing” your infrastructure. Software-defined means optimize what you already have, whereas “Hardware-defined = Over Provisioning and Oversizing.”

Software Is What Endures Beyond Hardware Devices that “Come and Go”
Think about it. Why would you want to lock yourself into this year’s hardware solution or have to buy a specific device just to get a software feature you need? This is old thinking; before virtualization, this was how the server industry worked. The hardware decision drove the architecture. Today, with software-defined computing exemplified by VMware or Hyper-V, you think about how to deploy virtual machines rather than whether they are running on a Dell, HP, Intel or IBM system. Storage is going through the same transformation, and it will be smart software that makes the difference in a “software-defined” world.

So What Do Users Want From “Software-defined Storage,” and Can You Really Expect It to Come From a Storage Hardware Vendor?
The move from hardware-defined to a software-defined virtualization-based model supporting mission-critical business applications is inevitable and has already redefined the foundation of architectures at the computing, networking and storage levels from being “static” to “dynamic.” Software defines the basis for managing diversity, agility, user interactions and for building a long-term virtual infrastructure that adapts to the constantly changing components that “come and go” over time.

Ask yourself: is it really in the best interest of the traditional storage hardware vendors to go “software-defined” and give up their platform lock-ins?

Remember One Thing: Hardware-defined = Over Provisioning and Oversizing
Fulfilling application needs and providing a better user experience are the ultimate drivers for next-generation storage and software-defined storage infrastructures. Users want flexibility, greater automation, better response times and “always on” continuous availability. Therefore, IT shops are clamoring to move all their applications onto agile virtualization platforms for better economics and greater productivity. The business-critical Tier 1 applications (ERP, databases, mail systems, SharePoint, OLTP, etc.) have proven to be the most challenging, and storage has been the major roadblock to virtualizing these demanding applications. Moving storage-intensive workloads onto virtual machines (VMs) can greatly impact performance and availability, and as the workloads grow, these impacts increase, as do costs and complexity.

The result is that storage hardware vendors have to over-provision, over-size for performance and build in extra levels of redundancy within each unique platform to ensure users can meet their performance and business continuity needs.

The costs needed to accomplish the above negate the bulk of the benefits. In addition, hardware solutions are sized for a moment in time rather than providing long-term flexibility. Enterprises and IT departments are therefore looking for a smarter and more cost-effective approach, realizing that the traditional “throw more hardware at the problem” solution is no longer feasible.

Tier 1 Apps Are Going Virtual; Performance and Availability Are Mission Critical
To address these storage impacts, users need the flexibility to incorporate whatever storage they need to do the job at the right price, whether it is available today or comes along in the future. For example, to help with the performance impacts, such as those encountered in virtualizing Tier 1 applications, users will want to incorporate and share SSD, flash-based technologies. Flash helps here for a simple reason: electronic memory technologies are much faster than mechanical disk drives. Flash has been around for years, but only recently has it come down far enough in price to allow broader adoption.

Diversity and Investment Protection; One Size Solutions Do Not Fit All
But flash storage is better suited to read-intensive applications than to write-heavy, transaction-based traffic, and it is still significantly more expensive than a spinning disk. It also wears out: taxing applications that prompt many writes can shorten the lifespan of this still-costly technology. So it makes sense to have other storage choices alongside flash, keeping flash reserved for where it is needed most and using the alternatives for their most efficient use cases, then optimizing the performance and cost trade-offs by placing and moving data on the most cost-effective tier that can deliver acceptable performance (a simple sketch of this idea follows below). Users will need solutions to share and tier their diverse storage arsenal – and manage it together as one – and that requires smart and adaptable software.
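As a rough illustration of that tiering idea, here is a minimal sketch of heat-based placement. It is a hypothetical toy, not DataCore's actual auto-tiering algorithm; the tier names, capacities and heat weighting are all invented.

```python
# Hypothetical sketch of heat-based auto-tiering: the hottest blocks are
# promoted to the fastest tier with free space. Not DataCore's algorithm;
# tier names, capacities and the heat weighting are invented for illustration.

from dataclasses import dataclass

TIERS = ["flash", "sas", "sata"]                    # fastest first
TIER_CAPACITY = {"flash": 2, "sas": 4, "sata": 8}   # blocks per tier

@dataclass
class Block:
    block_id: int
    reads: int = 0
    writes: int = 0
    tier: str = "sata"

    @property
    def heat(self) -> float:
        # Weight reads above writes: flash favors read-heavy traffic,
        # and write-heavy traffic shortens its lifespan (see above).
        return self.reads + 0.25 * self.writes

def rebalance(blocks: list[Block]) -> None:
    """Place the hottest blocks on the fastest tier that still has room."""
    free = dict(TIER_CAPACITY)
    for block in sorted(blocks, key=lambda b: b.heat, reverse=True):
        for tier in TIERS:
            if free[tier] > 0:
                free[tier] -= 1
                block.tier = tier
                break

if __name__ == "__main__":
    blocks = [Block(i, reads=i * 10, writes=i) for i in range(6)]
    rebalance(blocks)
    for b in blocks:
        print(f"block {b.block_id}: heat={b.heat:5.1f} -> {b.tier}")
```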

And what about existing storage hardware investments? Does it make sense to throw them away and replace them with this year’s new models when smart software can extend their useful life? Why “rip and replace” each year? Instead, these existing storage investments and the newest flash devices, disk drives and storage models can easily be made to work together in harmony within a software-defined storage world.

Better Economics and Flexibility Make the Move to “Software-defined Storage” Inevitable
Going forward, users will have to embrace “software-defined storage” as an essential element to their software-defined data centers. Virtual storage infrastructures make sense as the foundation for scalable, elastic and efficient cloud computing. As users have to deal with the new dynamics and faster pace of today’s business, they can no longer be trapped within yesterday’s more rigid and hard-wired architecture models.

“Software-defined” Architecture and Not the Hardware is What Matters
Clearly, the success of software-defined computing solutions from VMware and Microsoft Hyper-V has proven the compelling value proposition that server virtualization delivers. Likewise, the storage hypervisor and the use of virtualization at the storage level are the key to unlocking the hardware chains that have made storage an anchor to next-generation data centers.

“Software-defined Storage” Creates the Need for a Storage Hypervisor
We need the same thinking that revolutionized servers to impact storage. We need smart software that can be used enterprise-wide to be the driving force for change; in effect, we need a storage hypervisor whose main role is to virtualize storage resources and to achieve the same benefits – agility, efficiency and flexibility – that server hypervisor technology brought to processors and memory.

Virtualization has transformed computing, and therefore the key applications we depend on to run our businesses need to go virtual as well. Enterprise and cloud storage are still living in a world dominated by physical, hardware-defined thinking. It is time to think of storage in a “software-defined” world. That is, storage system features need to be available enterprise-wide, not embedded in a particular proprietary hardware device.

Summary
Be cautious of vendors who deliver hardware but talk “software-defined.”

Monday, 15 April 2013

Thoughts on Private Clouds and Storage Virtualization by Roni Putra, CTO DataCore Software

By: Manish Sharma, General Manager & Vice President for APAC

DataCore has nearly 10,000 customer deployments spanning geographies and verticals, with a wide cross-section of customers ranging from the largest government and enterprise organizations to medium-sized companies. I was therefore very eager to learn more about our technology and use cases and to meet one of the company’s founders, CTO Roni Putra, who was in Singapore earlier this year. At the time, I had only a couple of months in my new role and wanted to learn as much as possible from his short visit. Below are a few of the key questions that I put to Roni:

Manish (MS): Roni, what is the main value that DataCore provides to our customers?

Roni Putra (RP): We designed DataCore SANsymphony-V to provide powerful storage features that manage, accelerate and auto-tier the full range and diversity of storage devices, from Flash/SSD to SAS/SATA drives to enterprise storage arrays and cloud storage. The compelling value is that we are ‘pure’ software – customers deploy DataCore once, and they can then continue to use their existing infrastructure or refresh their hardware and storage systems without worrying about compatibility in the future. We ‘future-proof’ our customers’ investments. Our customers can enjoy the benefits of auto-tiering, centralized management, faster performance through caching, asynchronous replication and more, using their existing storage hardware investments and infrastructure.

MS: Does the new storage virtualization layer add new complexity while delivering these benefits?

RP: We have always designed our software with the singular goal that the resulting architecture reduces the burden on storage administrators through greater automation and is greatly simplified after deploying DataCore. Our software automates many tasks and provides powerful wizards and tools, like ‘heat maps’ that show bottlenecks in the infrastructure. Within a data center, we free up the management resources required to administer multiple storage systems, and wherever possible we leverage existing systems management tools. To that end, we have introduced plug-ins for VMware vCenter as part of our broader integration with vSphere, along with integration with Microsoft Hyper-V and other systems management tools. We also have a DataCore Ready Program to allow an even greater ecosystem of partners to provide proven solutions to DataCore customers.

MS: How does DataCore fit with the software-defined data center vision?

RP: We are software. We are therefore complementary to the software-centric data center. Our technology can be deployed in a myriad of ways, providing storage services to meet the needs of applications within the data center: performance, availability, replication, snapshots, CDP, auto-provisioning, etc. Our software can be deployed as traditional SAN storage, within the application hosts alongside the base operating system, or as a virtual storage appliance. The flexibility of the solution allows for deploying a hierarchical storage services architecture, enabling the appropriate storage services at each level: caching for performance closest to the application, replication at the SAN storage endpoint. We can run DataCore on a VM or on a server.
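Roni’s hierarchical storage services point can be pictured as services that wrap one another, with caching nearest the application and replication at the storage endpoint. The sketch below is purely conceptual; every class and method name is invented for illustration and does not reflect SANsymphony-V’s internals.

```python
# Conceptual sketch of stacked storage services: a cache sits closest to
# the application; replication is applied at the SAN storage endpoint.
# All names are invented for illustration, not SANsymphony-V internals.

class Volume:
    """Plain backing store at the SAN endpoint."""
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

class ReplicatedVolume(Volume):
    """Replication service at the storage endpoint (synchronous here
    for simplicity; in practice it could be asynchronous)."""
    def __init__(self, replica: Volume):
        super().__init__()
        self._replica = replica
    def write(self, key, value):
        super().write(key, value)
        self._replica.write(key, value)

class CachedVolume:
    """Read/write cache placed closest to the application host."""
    def __init__(self, backend: Volume):
        self._backend = backend
        self._cache = {}
    def read(self, key):
        if key not in self._cache:                  # cache miss
            self._cache[key] = self._backend.read(key)
        return self._cache[key]
    def write(self, key, value):
        self._cache[key] = value
        self._backend.write(key, value)             # write-through

# Compose the hierarchy: application -> cache -> replicated SAN volume.
volume = CachedVolume(ReplicatedVolume(replica=Volume()))
volume.write("lun0/block42", b"payload")
print(volume.read("lun0/block42"))
```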

MS: Roni, we designed DataCore to run with and on Microsoft Windows. What were the considerations when we first designed DataCore?

RP: Let me start by saying that today we run on top of Microsoft Windows Server, and we use Microsoft as a compatibility layer to the outside world. When we run as a storage server, however, we optimize and serve storage resources to all the name-brand operating systems, including Linux, UNIX, Microsoft, NetWare, VMware and Apple systems – in effect, anything that recognizes a SCSI LUN. Windows is usually the first platform to support innovations and technological advancements in hardware and peripherals, which assures that our software can leverage those same advancements alongside Windows. Our background in systems architecture and operating systems (UNIX, real-time and massively parallel architectures) gave the team ample experience to choose a platform that could evolve to meet diverse needs as well as balance the difficult compatibility and performance demands of storage. Knowing the pedigree of the Windows kernel architecture team was also key.

MS: In addition to delivering performance to business-critical applications, what are the other sweet spots where our customers get the biggest value?

RP: Beyond application performance, customers are deploying virtual desktops, using DataCore caching along with solid-state drives to serve virtual desktops from servers at response times that make VDI viable. Continuous availability is another key benefit that DataCore delivers to our customers.

We bring data centers the storage agility they need to achieve the vision and benefits they expect from a dynamic, service-oriented private cloud.

RP: Manish, I have a question for you – when will I get some lunch? You promised me some good eats.

MS: Right away, Roni, right away – let’s go and eat! :-)

Friday, 12 April 2013

Storage and Virtualization for everyone: Why is storage so important to any infrastructure, and why should it be highly available and virtualized?


Check out the recent post A primer on availability in virtualized storage environments, written by Richard Jenkins of DataCore Software.
Why is storage so important to any infrastructure, and why should it be highly available and virtualized?
...
For businesses looking toward the era of "software-defined storage," it means that the solution to the most common problems faced by anyone administering storage (performance, flexibility, availability, manageability, scalability) is already here – you just have to look.

Read the full post: http://storageoz.blogspot.com/2013/03/a-primer-on-availability-in-virtualized.html

The evolution of virtual storage: A movement...


Richard Jenkins of DataCore Software recently explored IT evolution in relation to virtual storage environments, noting that virtual server provisioning has emerged as a cost-effective way to create highly available systems. It has also enabled companies to make more strategic hardware choices by raising utilization rates. Rather than invest in a completely new server, data center operators can now deploy a new VM to handle more workloads, and tasks are more easily shifted from one VM to another to maximize reliability. Although storage has always been an essential component of these systems, Jenkins pointed out that it took longer for companies to begin exploring virtualization in this area.

Wednesday, 10 April 2013

Real World Storage Virtualization Use Cases: Companies Realize Faster Performance and Five-Nines Reliability for their Tier-1 Business Critical Applications and Databases

DataCore SANsymphony-V Storage Hypervisor Supercharges I/O Intensive Virtualized Tier-1 Applications; SQL, Exchange and SharePoint Lead the Way 

DataCore Software is used by thousands of companies and organizations around the globe. In this post we showcase three companies that are realizing the benefits of the software-defined data center while simultaneously improving the performance of their Tier-1, mission-critical business applications. DataCore's SANsymphony-V storage hypervisor boosts the speed, throughput and availability of their virtualized, I/O-intensive Tier-1 applications like Microsoft SQL Server, SharePoint and Exchange, as well as SAP, Oracle and others. These and many more customers report two to five times faster application performance and better than 99.999 percent uptime after virtualizing their existing storage with SANsymphony-V.

Three Real World Use Cases:

San Gorgonio Memorial Hospital: 500 Percent Improvement in Microsoft SQL Database Performance; Better Response Times Enable More Precise Treatment and Care

Download full customer presentation: 
http://www.datacore.com/San-Gorgonio-Memorial-Hospital

San Gorgonio Memorial Hospital in Banning, California has achieved a completely software-centric data center and is reaping the benefits of having virtualized its critical healthcare and database applications. Richard Trower, senior programmer and database administrator, is able to deliver non-stop business operations serving more than 500 physicians, nurses and healthcare professionals who rely upon the system’s availability to fulfill San Gorgonio’s mission to “provide safe, high quality, personalized healthcare services.”

When San Gorgonio migrated its data center to a new facility, Trower took the opportunity to virtualize his infrastructure. He took all the physical servers offline, then virtualized and restored everything on virtual machines, with all of the hospital’s primary servers now running on VMware and SANsymphony-V. The storage hypervisor-powered SANs support the mission-critical databases for the hospital: McKesson Paragon, McKesson HPF and all of the supporting virtual servers. The McKesson Paragon database runs on a high-performance RAID-10 set, while the others are on RAID-5. According to Trower, with his DataCore™-powered RAID-10, tasks like defragmentation of indexes, which previously took him an hour and a half, can be completed in about three minutes.

“I cannot stress how absolutely paramount uptime and speed are in healthcare,” stated Trower. “When systems go down, our ability to deliver services to our patients is affected, which is unacceptable. By virtualizing San Gorgonio’s infrastructure with VMware and DataCore, we can now assure non-stop performance and availability to the healthcare professionals who rely on IT to provide proper care and treatment for our patients.”

Trower further explains that in moving to a software-defined data center environment, he was not only able to reduce the infrastructure’s physical footprint by half, but also gained greater freedom of choice and flexibility in future hardware investments. “SANsymphony-V is the backbone of our storage infrastructure,” continues Trower. “Virtualization has allowed us to increase efficiencies, performance and availability, which not only makes our faculty happy, but also our business office. This freedom of choice allows us to make wiser purchasing decisions and gain truly maximum ROI from our hardware investments.”

Great Plains Communications Powers Its Applications with DataCore Storage Virtualization and Fusion-io to Achieve Unmatched Performance and No Downtime

Telecommunications and Internet service provider Great Plains Communications is enhancing the performance and resiliency of virtualized corporate applications using VMware and DataCore’s SANsymphony-V. The company’s software-defined data center is split between two facilities located 10 miles apart. Its multi-tiered storage infrastructure incorporates cutting-edge, high-performance flash drives from Fusion-io along with a variety of other SAS and SATA drive devices. Corporate applications such as Oracle, Microsoft Exchange, Lync and SQL Server are powered by VMware vSphere, and user mailboxes are hosted by LinuxMagic MagicMail.

Chris Jones, information technology manager, shares that Great Plains Communications’ infrastructure is 90 to 95 percent virtualized. “Our databases, applications, web servers and email servers are a critical part of our business, and we rely upon them to provide services to our tens of thousands of customers,” explains Jones. “Our ability to deliver on-demand Internet, email and unified communications to not only our employees but, more importantly, our customers is positively critical. In this industry there is plenty of competition, and service downtime and outages are the fast track to lost customers.”

“In today’s competitive service provider market,” continues Jones, “response times and always-on availability are king. Having a storage environment powered by the SANsymphony-V Storage Hypervisor and Fusion-io has had an enormous impact on performance, and our business-critical applications respond at simply blinding speed. I feel we have the fastest I/O performance available on the market, and running various reports has gone from taking several minutes to mere seconds.”

Thorntons Accelerates Microsoft SQL, Exchange; Reduces 10-Hour Backup Times to 3.5 Hours; Continuous Availability Keeps Business Moving and Drives Sales

Download the full case study: 
http://www.datacore.com/Testimonials/Thorntons.aspx

DataCore customer Thorntons operates 167 gasoline and convenience retail stores, car washes and travel plazas in Kentucky, Illinois, Ohio, Tennessee and Florida. Thorntons dove into server virtualization in 2007 and has now virtualized more than 90 percent of the company’s systems. Senior network engineer James Haverstock reports that Thorntons has virtualized all of its production SQL servers and data: six production SQL VMs maintain around 150 SQL databases consuming several terabytes (TB) of storage. The company’s Exchange server also runs on virtual machines, all powered by the SANsymphony-V virtualized storage platform and hypervisor.

“We have many Tier-1 applications virtualized through VMware and DataCore,” said Haverstock. “Of particular importance to us were the uptime and performance of Exchange and SQL. Once we made the move to a SANsymphony-V powered storage environment, our data and system response improved dramatically. Needless to say, our user community, which depends upon the availability of these applications, was very pleased.”

Bottom line: the deployment of SANsymphony-V significantly boosted overall system performance. SANsymphony’s RAM cache vastly decreased latency for all of Thorntons’ storage, resulting not only in faster application performance but also much faster backups. A monthly profit/loss report backup previously required about 10 hours to run. “We’ve got it down to about three and a half now,” explained Haverstock. “Overall backup times, whether for VMs, SQL databases or full systems, were cut nearly in half.”

Learn More:
For more on virtualizing and running I/O intensive Tier-1 applications, go to: http://www.datacore.com/Solutions/Applications.aspx

Thursday, 4 April 2013

Packed Room Attends Software-defined Storage Panel at SNW Spring 2013 to Learn About Proven Storage Hypervisor Use Cases

Yesterday at SNW Spring 2013 in Orlando, there was a packed house attending the Storage Hypervisor panel session moderated by Mark Peters, senior analyst, Enterprise Strategy Group.

The event was lively and many attendee questions were answered in this educational and informative session.

The SNW attendees were pleased to learn how far the technology has come and how it has moved into the mainstream, proven by the tens of thousands of customers who have already deployed storage hypervisors to achieve dramatic economic and productivity benefits.

While the discussion got spirited at times, it was clear that all the panelists were passionate that the time for 'Software-defined Storage' over 'Hardware-defined' has arrived.

Check out the post: "Beware Storage Hardware Vendors’ Claims for Virtualization and Software-Defined Storage; Hardware-defined = Over Provisioning and Oversizing"

The handpicked panel of experts and industry pioneers included executives from VMware, Hitachi Data Systems, IBM and, of course, DataCore Software.

The SNW session - “Analyst Perspective: The Storage Hypervisor: Myth or Reality?” - took place on Tuesday, April 2 at 5:00 pm ET at Rosen Shingle Creek in Orlando, Florida, as part of SNW’s Virtualization Track.

DataCore Software’s President and CEO, George Teixeira, was invited to participate in a panel discussion at SNW Spring 2013 on the storage hypervisor. With trends such as software-defined storage and managing flash-based technologies and tiers of storage for business-critical application performance being the subject of industry discussion, the panel provided an overview of the role of the storage hypervisor: its benefits, different approaches, current realities and its future, including its ability to transform the economics of storage.


DataCore Software is the storage hypervisor leader and premier provider of storage virtualization software. A storage industry veteran, Teixeira co-founded the company and has served as president and CEO since 1998.

Introduced in June 2012, DataCore’s SANsymphony™-V 9.0 – “the Storage Hypervisor for the cloud” – transformed the economics of virtualization for organizations of all sizes worldwide by delivering flexibility, performance, value and scale, regardless of the storage hardware used. Recently, in February 2013, DataCore unveiled its latest version of SANsymphony-V, providing expanding enterprise IT environments, virtualized applications and hybrid clouds with extended scalability, added performance optimization and greater cost savings courtesy of robust new configurability choices.

 
SNW is the world’s largest independently produced conference series focused on the evolution of architecture for a new world of Big Data and business agility. Produced by Computerworld - and co-owned by Computerworld and the Storage Networking Industry Association - SNW remains unbiased and vendor agnostic. SNW provides a forum of open thought leadership and practical education that defines the spectrum of storage, data and infrastructure solutions available to a highly qualified audience of enterprise technology decision-makers.

Tuesday, 2 April 2013

DataCore CEO to Speak at SNW Spring 2013 Virtualization Panel on the Role of the Storage Hypervisor and How it Transforms Storage Economics

George Teixeira on Panel to Highlight Performance, Cost Savings and Flexibility of Hardware-independent Storage Hypervisor Software – Detailing Latest Innovations, Benefits and Customer Use Cases

DataCore Software’s President and CEO, George Teixeira, has been invited to participate in a panel discussion at SNW Spring 2013 on the storage hypervisor. With trends such as software-defined storage and managing flash-based technologies and tiers of storage for business-critical application performance being the subject of industry discussion, the panel will provide an overview of the role of the storage hypervisor: its benefits, different approaches, current realities and its future, including its ability to transform the economics of storage.
 
For a summary of the main points, please read the recent blog post by Mr. Teixeira: Storage Hypervisors and Software-defined Storage.
 
Mark Peters, senior analyst at Enterprise Strategy Group, will moderate a handpicked panel of experts, industry pioneers and executives from VMware, Hitachi Data Systems, IBM and DataCore. The SNW session – “Analyst Perspective: The Storage Hypervisor: Myth or Reality?” – will take place on Tuesday, April 2 at 5:00 pm ET at Rosen Shingle Creek in Orlando, Florida, as part of SNW’s Virtualization Track.

DataCore Software is the storage hypervisor leader and premier provider of storage virtualization software. A storage industry veteran, Teixeira co-founded the company and has served as president and CEO since 1998.

Introduced in June 2012, DataCore’s SANsymphony™-V 9.0 – “the Storage Hypervisor for the cloud” – transformed the economics of virtualization for organizations of all sizes worldwide by delivering flexibility, performance, value and scale, regardless of the storage hardware used. Recently, in February 2013, DataCore unveiled its latest version of SANsymphony-V, providing expanding enterprise IT environments, virtualized applications and hybrid clouds with extended scalability, added performance optimization and greater cost savings courtesy of robust new configurability choices.

SNW is the world’s largest independently produced conference series focused on the evolution of architecture for a new world of Big Data and business agility. Produced by Computerworld – and co-owned by Computerworld and the Storage Networking Industry Association – SNW remains unbiased and vendor agnostic. SNW provides a forum of open thought leadership and practical education that defines the spectrum of storage, data and infrastructure solutions available to a highly qualified audience of enterprise technology decision-makers.
 

Monday, 1 April 2013

IDC Analyst and Program Director Carla Arend Discusses Storage Priorities, Cloud and Value of DataCore Storage Virtualization and Tiered Storage for Investment Protection

Carla Arend, Program Director for Information Management and Storage Software at IDC, discusses IDC’s latest findings on storage priorities, cloud and virtualization, along with the role of storage virtualization in investment protection and how it can avoid the classic ‘rip and replace’ approach to storage investments by integrating existing storage and getting more utilization from it.

View Video: https://www.youtube.com/watch?v=5jkdwGoeTWk

Park Resorts Upgrades Data Center and Speeds up Application Performance Using DataCore Storage Virtualization Software and Dot Hill Solution

Park Resorts (www.park-resorts.com), the UK’s leading provider of family holidays with over 39 coastal parks, is using DataCore’s SANsymphony-V storage hypervisor alongside Dot Hill’s Assured SAN arrays to lead the great escape from slow I/O speeds and unpredictable application response times.

Re-engineering the Park Resorts Storage Infrastructure for Performance, Agility and Scalability
Having successfully virtualized its server room with VMware some years prior, Park Resorts remains fully aware of the compelling benefits of virtualization for achieving higher service levels through reduced server overhead.

“Our key requirement was to ensure that we put in place a foundation that would not only deliver on our immediate requirements for increased performance and capacity, but would allow us to scale as the business applications, user demands and data sets all continue to grow in the future,” states Ajay Patil, the infrastructure manager responsible for the storage and virtualisation infrastructure.

NCE is the trusted advisor for Park Resorts, and it recommended installing the latest version of DataCore’s storage hypervisor with small-form-factor but high-performance Dot Hill arrays. As Ajay notes, affordability remained a critical acceptance criterion. “We had to maintain a balance between the features that we needed and the features that, although interesting and attractive, would simply never be used. Reassuringly, the approved solution retained a realistic TCO price point.”

Consolidation and reduction of resources: Even though Park Resorts had used virtualization for many years, the knock-on effect of consolidating footprint and resources, and the accompanying carbon efficiencies, still resonates with the small form factor offered in the solution. Ajay comments, “Onsite, it’s about maximizing every foot of floor space, and with the scalable 2U 24-bay rack and the transparent software hypervisor layer reducing the need for additional storage hardware, we once again see that our footprint has been dramatically reduced. That’s a great feeling, and a significant reduction in resources and ongoing management.”

And looking further afield, Ajay reflects: “With the new infrastructure, we have successfully taken Park Resorts into the next generation of virtualisation efficiency, overcoming the user issues of slower performance of our Microsoft SQL database and eradicating I/O bottlenecks in the process. We know that we have a scalable solution that meets all of our objectives both now, and in the future.”

For more information, please read the recent announcement.