Thursday, 3 November 2016

Performance, Availability and Agility for SQL Server

Introduction - Part 1

IT organizations must maintain service level agreements (SLAs) by meeting the demands of the applications that run on SQL Server. To meet these requirements, they must deliver superior performance and continuous uptime for each SQL Server instance. Furthermore, applications that depend on SQL Server, such as those for agile development, CRM, BI, or IoT, are increasingly dynamic and require faster adaptation to performance and high-availability challenges than device-level provisioning, analytics, and management can provide.
In this blog, the first of a three-part series, we will discuss the challenges IT organizations face with SQL Server and a solution that helps them overcome these challenges.
Challenges
All of these concerns can be traced to a common root cause: the storage infrastructure. Did you know that 62% of DBAs experience latency of more than 10 milliseconds when writing to disk [1]? Not only does this slowdown impact the user experience, it also has DBAs spending hours tuning the database. That is the impact of storage on SQL Server performance; so what about its impact on availability? According to surveys, 50% of organizations don't have an adequate business continuity plan because storage solutions are too expensive [2]. When it comes to agility, DBAs have agility at the SQL Server level, but IT administrators don't have the same agility on the storage side, especially when they have to depend on heterogeneous disk arrays. Surveys show that a majority of enterprises have two or more types of storage, and 73% have more than four types [3].
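If you want to see where your own environment stands relative to that 10-millisecond mark, a quick check is to look at the per-file I/O statistics SQL Server already tracks. The query below is only a minimal sketch using the standard sys.dm_io_virtual_file_stats DMV; the column aliases are our own, and the counters are cumulative since the last restart, so treat the numbers as a rough indicator rather than a precise benchmark:

-- Average read/write latency per database file, derived from cumulative
-- I/O stall counters. NULLIF avoids division by zero for idle files.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name AS file_path,
    vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;

Files that consistently show average write latency above roughly 10 ms are the ones most likely to be dragging down the user experience described above.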
A common IT trend to solve the performance issue is to adopt flash storage [4]. However, moving the entire database to flash storage significantly increases cost. To save on cost, DBAs end up with the burden of having to pick and choose the instances that require high performance. The other option to overcome the performance issue is to tune the database and change the queries. This requires significant database expertise, takes time, and involves changes to the production database. Most organizations don't have dedicated database performance tuning experts, don't have the luxury of time, or are wary of making changes to the production database. This common dilemma makes tuning the database a far-fetched approach, as the sketch below illustrates.
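To give a sense of what that tuning effort involves, one common starting point is to ask SQL Server which indexes it thinks are missing. This is only a sketch of a single step; the DMVs used (sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups, and sys.dm_db_missing_index_group_stats) are standard, but the suggestions they produce still require expert review and testing before anything is changed in production:

-- Index suggestions SQL Server has recorded since the last restart,
-- ranked by a rough estimate of how much query cost they could save.
SELECT TOP (10)
    DB_NAME(mid.database_id) AS database_name,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS estimated_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY estimated_benefit DESC;

Deciding which of these suggestions to implement, and verifying that a new index does not slow down writes elsewhere, is exactly the kind of specialized, time-consuming work most organizations cannot spare.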
For higher uptime, DBAs use Failover Cluster Instances (built on Windows Server Failover Clustering, formerly Microsoft Cluster Service) for server availability, but clustering alone cannot overcome storage-related downtime. One option is to upgrade to SQL Server Enterprise, but that puts a heavy cost burden on the organization (Figure 1). This leaves them with the choice of either not upgrading to SQL Server Enterprise or upgrading only a few SQL Server instances. The other option is to use storage-based or third-party mirroring, but neither solution guarantees a Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of zero.
[Image: SQL Server challenges, part 1]
Figure 1
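As a quick sanity check on the clustering side, you can ask the instance itself whether it is running as a Failover Cluster Instance and which node currently owns it. The snippet below is a minimal sketch using SERVERPROPERTY and the standard sys.dm_os_cluster_nodes DMV (which simply returns no rows on a standalone instance); note that it tells you nothing about the shared storage underneath, which is exactly the gap discussed above:

-- Returns 1 if this instance is a Failover Cluster Instance, 0 otherwise.
SELECT SERVERPROPERTY('IsClustered') AS is_clustered;

-- Lists the Windows failover cluster nodes and flags the current owner.
SELECT NodeName, status_description, is_current_owner
FROM sys.dm_os_cluster_nodes;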
Solution
DataCore’s advanced software-defined storage solution addresses both the latency and uptime challenges of SQL Server environments. It is easy to use, delivers high performance, and offers continuous storage availability. DataCore™ Parallel I/O and high-speed ‘in-memory’ caching technologies increase productivity by dramatically reducing SQL Server query times.

Next blog
In the next blog, we will take a closer look at the performance aspect of DataCore.
