2013 is the year of the phrase “software-defined” infrastructure. Virtualization has taught us that the efficiency and economy of complex, heterogeneous IT infrastructures come from enterprise software that takes separate infrastructure components and turns them into a coherent, manageable whole, allowing the many to work as ‘one.’
Infrastructures are complex and diverse, and as such, no single device defines them. That is why phrases like “we’re an IBM shop” or “we’re an EMC shop,” once common, are heard less often today. Instead, the infrastructure is defined where the many pieces come together, at the software virtualization layer, which gives us flexibility, power and control over all this diversity and complexity.
Beware of “Old Acquaintance” Hardware Vendors’ Claims that They are “Software-Defined”
It’s become “it’s about the software, dummy” obvious. But watch: in 2013, you’ll see storage hardware heavyweights leap for that bandwagon, claiming that they are “software-defined storage” and hoping to slow the wheels of progress under their heft. But, like “Auld Lang Syne,” it’s the same old song they sing every year: ignore the realities driving today’s diverse infrastructures and buy more hardware. Forget that the term ‘software-defined’ is being applied exclusively to what runs on their storage hardware platforms and not to all the other components and players. The song may sound like ‘software-defined,’ but the end objective is clear: ‘buy more hardware.’
Software is What Endures Beyond Hardware Devices that ‘Come and Go’
Think about it. Why would you want to lock yourself into this year’s hardware solution, or have to buy a specific device just to get a software feature you need? This is old thinking; before virtualization, this was how the server industry worked, and the hardware decision drove the architecture. Today, with software-defined computing exemplified by VMware or Hyper-V, you think about how to deploy virtual machines, not about whether they are running on a Dell, HP, Intel or IBM system. Storage is going through the same transformation, and it will be smart software that makes the difference in a ‘software-defined’ world.
So What Do Users Want from “Software-Defined Storage,” and Can You Really Expect It to Come from a Storage Hardware Vendor?
The move from a hardware-defined to a software-defined, virtualization-based model supporting mission-critical business applications is inevitable, and it has already redefined the foundation of architectures at the computing, networking and storage levels from ‘static’ to ‘dynamic.’ Software defines the basis for managing diversity, agility and user interactions, and for building a long-term virtual infrastructure that adapts to the constantly changing components that ‘come and go’ over time.
Ask yourself: is it really in the best interest of the traditional storage hardware vendors to go ‘software-defined’ and give up their platform lock-in?
Hardware-Defined = Overprovisioning and Oversizing
Fulfilling application needs and providing a better user experience are the ultimate drivers for next-generation storage and software-defined storage infrastructures. Users want flexibility, greater automation, better response times and ‘always on’ availability. Therefore, IT shops are clamoring to move all their applications onto agile virtualization platforms for better economics and greater productivity. The business-critical Tier 1 applications (ERP, databases, mail systems, OLTP, etc.) have proven to be the most challenging. Storage has been the major roadblock to virtualizing these demanding Tier 1 applications. Moving storage-intensive workloads onto virtual machines (VMs) can greatly impact performance and availability, and as the workloads grow, these impacts increase, as do cost and complexity.
The result is that storage hardware vendors have to overprovision, oversize for performance and build in extra levels of redundancy within each unique platform to ensure users can meet their performance and business continuity needs.
The costs needed to accomplish all this negate the bulk of the benefits. In addition, hardware solutions are sized for a moment in time rather than for long-term flexibility. Enterprises and IT departments are therefore looking for a smarter, more cost-effective approach, and are realizing that the traditional ‘throw more hardware at the problem’ solutions are no longer feasible.
Tier 1 Apps are Going Virtual; Performance and Availability are Mission-Critical
To address these storage impacts, users need the flexibility to incorporate whatever storage does the job at the right price, whether it is available today or comes along in the future. For example, to counter the performance impacts encountered in virtualizing Tier 1 applications, users will want to incorporate and share SSD and flash-based technologies. Flash helps here for a simple reason: electronic memory technologies are much faster than mechanical disk drives. Flash has been around for years, but only recently has it come down far enough in price to allow for broader adoption.
Diversity and Investment Protection; One-Size Solutions Do Not Fit All
But flash storage is better suited to read-intensive applications than to write-heavy, transaction-based traffic, and it is still significantly more expensive than spinning disk. It also wears out: taxing applications that generate many writes can shorten the lifespan of this still-costly solution. So it makes sense to have other storage choices alongside flash, keeping flash reserved for where it is needed most and using the alternatives for their most efficient use cases. The performance and cost trade-offs are then optimized by placing and moving data to the most cost-effective tier that can still deliver acceptable performance. Users will need solutions to share and tier their diverse storage arsenal and manage it together as one, and that requires smart and adaptable software.
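The tiering logic described above can be pictured as a simple placement policy: hot, read-heavy data earns a spot on flash, while colder or write-heavy data stays on cheaper disk tiers. The sketch below is a hypothetical illustration only; the tier names, thresholds and per-block statistics are assumptions, and real auto-tiering software tracks access patterns continuously and migrates data in the background.

```python
# Hypothetical sketch of an auto-tiering policy: place each block of data on
# the cheapest tier that still meets its performance needs. Tier names and
# the hot_threshold value are illustrative assumptions, not a product API.
from dataclasses import dataclass

@dataclass
class BlockStats:
    reads_per_hour: int
    writes_per_hour: int

def choose_tier(stats: BlockStats, hot_threshold: int = 1000) -> str:
    """Return the tier a block of data should live on."""
    total = stats.reads_per_hour + stats.writes_per_hour
    if total < hot_threshold:
        return "sata_disk"   # cold data: cheapest capacity tier
    # Hot data: prefer flash for read-heavy traffic, but keep very
    # write-heavy blocks on fast disk to limit flash wear.
    if stats.writes_per_hour > stats.reads_per_hour:
        return "sas_disk"
    return "flash"

# A read-heavy hot block lands on flash; a write-heavy hot block is
# kept off flash to reduce wear; a cold block stays on cheap disk.
print(choose_tier(BlockStats(reads_per_hour=5000, writes_per_hour=200)))  # flash
print(choose_tier(BlockStats(reads_per_hour=300, writes_per_hour=4000)))  # sas_disk
print(choose_tier(BlockStats(reads_per_hour=10, writes_per_hour=5)))      # sata_disk
```

The point of the sketch is the separation of concerns: the policy is pure software, so it applies uniformly across any vendor’s devices that back the tiers.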
And what about existing storage hardware investments? Does it make sense to throw them away and replace them with this year’s new models when smart software can extend their useful life? Why ‘rip and replace’ each year? Instead, these existing storage investments and the newest flash devices, disk drives and storage models can easily be made to work together in harmony within a software-defined storage world.
Better Economics and Flexibility Make the Move to ‘Software-Defined Storage’ Inevitable
Going forward, users will have to embrace ‘software-defined storage’ as an essential element of their software-defined data centers. Virtual storage infrastructures make sense as the foundation for scalable, elastic and efficient cloud computing. As users deal with the new dynamics and faster pace of today’s business, they can no longer be trapped within yesterday’s more rigid, hard-wired architecture models.
The ‘software-defined’ architecture, not the hardware, is what matters.
Clearly, the success of software-defined computing solutions from VMware and Microsoft Hyper-V has proven the compelling value proposition that server virtualization delivers. Likewise, the storage hypervisor and the use of virtualization at the storage level are the key to unlocking the hardware chains that have made storage an anchor on next-generation data centers.
‘Software-Defined Storage’ Creates the Need for a Storage Hypervisor
We need the same thinking that revolutionized servers to impact storage. We need smart software that can be used enterprise-wide to be the driving force for change. In effect, we need a storage hypervisor whose main role is to virtualize storage resources and to achieve the same benefits that server hypervisor technology brought to processors and memory: agility, efficiency and flexibility.
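One way to picture what a storage hypervisor does is as a pooling layer: heterogeneous devices are aggregated into a single logical pool, and virtual volumes are carved from that pool without the consumer ever choosing which vendor’s hardware backs them. The following is a minimal, hypothetical sketch; the class, device names and the naive first-fit allocator are illustrative assumptions, not any real product’s design.

```python
# Minimal sketch of storage pooling: many physical devices, one logical pool.
# The first-fit allocator and device names are illustrative assumptions.
class StoragePool:
    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB
        self.volumes = {}   # volume name -> (backing device, size in GB)

    def add_device(self, name: str, capacity_gb: int) -> None:
        """Register any vendor's device as raw capacity in the pool."""
        self.devices[name] = capacity_gb

    def free_capacity(self) -> int:
        """Capacity is reported for the pool as a whole, not per device."""
        return sum(self.devices.values())

    def create_volume(self, name: str, size_gb: int) -> None:
        """Carve a virtual volume; the caller never picks the hardware."""
        for dev, free in self.devices.items():   # naive first-fit placement
            if free >= size_gb:
                self.devices[dev] -= size_gb
                self.volumes[name] = (dev, size_gb)
                return
        raise RuntimeError("pool exhausted")

pool = StoragePool()
pool.add_device("legacy_array", 500)     # existing investment
pool.add_device("new_flash_shelf", 200)  # this year's hardware
pool.create_volume("erp_data", 400)      # consumer sees only the pool
print(pool.free_capacity())              # 300
```

Because applications address only the pool, devices can ‘come and go’ underneath it, which is exactly the investment-protection argument made above.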
Virtualization has transformed computing, and therefore the key applications we depend on to run our businesses need to go virtual as well. Enterprise and cloud storage are still living in a world dominated by physical, hardware-defined thinking. It is time to think of storage in a ‘software-defined’ world; that is, storage system features need to be available enterprise-wide, not just embedded in a particular proprietary hardware device.
For 2013, be cautious and beware of “old acquaintance” hardware vendors’ claims that they are “software-defined.”