Getting Started with SQL Server 2014 Administration

Author: Gethyn Ellis
Publisher: Packt Publishing
Pages: 114
ISBN: 978-1782172416
Audience: DBAs, Architects
Rating: 4.2
Reviewer: Ian Stirk

Another SQL Server 2014 book quickly follows the product’s release; how does it fare?

This is a small book of fewer than 100 pages spread over six chapters. The introduction says “This book is for people who want to learn the new features of SQL Server 2014…”; this is at odds with the title, which doesn’t mention that it covers only new features.

I wonder how many people will be caught out by this misleading title?

People who want to read only about new features may ignore it, and people who want a general administration book may buy it! “Getting Started with SQL Server 2014’s New Administration Features” would have been a more honest title.

Below is a chapter-by-chapter exploration of the topics covered.

 


 

Chapter 1 SQL Server 2014 and Cloud

The chapter opens with a few definitions, Windows Azure being Microsoft’s public cloud offering, and Microsoft Azure SQL Database its cloud-based database service. It notes that using the cloud reduces concerns about hardware resources, and provides improved scalability.

The database provides Software as a Service (SaaS) functionality. A step-by-step walkthrough of how to create a Windows Azure SQL database is given, together with its various settings. Connecting to the newly created database via SQL Server Management Studio (SSMS) is shown next. Finally, the process of migrating an existing on-premises database to Microsoft Azure is illustrated.

The next section discusses storing your SQL Server database files in Microsoft Azure, combining on-premises and cloud storage solutions. The files are stored as blobs in Microsoft Azure. Various benefits include low cost and elastic storage, together with improved High Availability (HA) and Disaster Recovery (DR). There’s a step-by-step walkthrough of creating a database that uses cloud storage (including creating the relevant storage account and storage container).
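To give a flavour of what the walkthrough builds up to, the T-SQL involved is roughly as follows. This is my own minimal sketch rather than the book’s code; the storage account, container and Shared Access Signature are placeholders.

  -- Credential named after the blob container URL, secured with a Shared Access Signature (placeholder values)
  CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/datafiles]
  WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
       SECRET   = '<shared-access-signature-token>';
  GO

  -- Database whose data and log files live as blobs in that container
  CREATE DATABASE CloudFilesDB
  ON (NAME = CloudFilesDB_data,
      FILENAME = 'https://mystorageaccount.blob.core.windows.net/datafiles/CloudFilesDB_data.mdf')
  LOG ON (NAME = CloudFilesDB_log,
      FILENAME = 'https://mystorageaccount.blob.core.windows.net/datafiles/CloudFilesDB_log.ldf');
  GO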

The chapter ends with a walkthrough of creating a Microsoft Azure Virtual Machine (VM), instead of building the VM on-premises.

All the sections provide walkthroughs with good descriptions and screenshots. All the sub-chapter headings would have been easier to understand if they specified their function, e.g. “Integrating Microsoft Azure SQL Database” would have been better as “Storing your SQL Server database files in the cloud”.

Chapter 2 Backup and Restore Improvements

The chapter highlights that perhaps nowhere more than with backup usage has the division between on-premises and cloud services become so blurred. Storing backups in Microsoft’s cloud means reduced resource investment, the ability to back up to a URL, and encryption benefits.

A step-by-step walkthrough of backing up to a URL and Microsoft Azure storage is provided. SSMS has been updated to provide a URL as a database backup option. Some limitations exist, e.g. the maximum backup size is 1TB, and a helpful Microsoft link is provided for further information. This is followed, as expected, by a step-by-step walkthrough of restoring a backup that has been stored in the cloud. In both cases (backup/restore), the associated SQL code is also given.
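For readers who haven’t seen the feature, the backup and restore statements take roughly the following form. This is a sketch using my own placeholder names (storage account, container, credential), not the book’s code; in SQL Server 2014 the credential holds the storage account name and access key.

  -- Credential holding the storage account name and access key (placeholder values)
  CREATE CREDENTIAL AzureBackupCredential
  WITH IDENTITY = 'mystorageaccount',
       SECRET   = '<storage-account-access-key>';
  GO

  -- Back up a database to a URL in Microsoft Azure blob storage
  BACKUP DATABASE AdventureWorks2014
  TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2014.bak'
  WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
  GO

  -- Restore the same backup from the URL
  RESTORE DATABASE AdventureWorks2014
  FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2014.bak'
  WITH CREDENTIAL = 'AzureBackupCredential', REPLACE;
  GO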

Managed backups allow you to automate your database backups to the cloud; this involves using the new smart_admin.sp_set_db_backup routine. Sample code is provided to perform this.
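The call takes roughly the shape below. This is a sketch from memory rather than the book’s code; the database, credential and retention values are placeholders, and the exact parameter list is worth checking against Microsoft’s documentation.

  -- Enable managed backup to Microsoft Azure for a single database (placeholder values)
  EXEC msdb.smart_admin.sp_set_db_backup
       @database_name        = 'AdventureWorks2014',
       @enable_backup        = 1,
       @retention_days       = 30,
       @credential_name      = 'AzureBackupCredential',
       @encryption_algorithm = 'NO_ENCRYPTION';
  GO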

Security is often cited as a major reason for not using cloud solutions. For the first time, it is possible to encrypt backups using Microsoft’s native backup tools. Various encryption algorithms are provided (e.g. Triple DES, and AES 256). Example SQL code is provided to create a database master key, certificate, and asymmetric key, which will enable you to create an encrypted database backup.
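Taking the certificate route, the steps amount to something like the following sketch; the certificate name, password and file path are my own placeholders, not taken from the book.

  -- Create a master key and certificate in master to protect the backup (placeholder names/passwords)
  USE master;
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong-password-here>';
  CREATE CERTIFICATE BackupEncryptCert WITH SUBJECT = 'Backup encryption certificate';
  GO

  -- Take an encrypted backup using AES 256 and the certificate above
  BACKUP DATABASE AdventureWorks2014
  TO DISK = 'C:\Backups\AdventureWorks2014_encrypted.bak'
  WITH COMPRESSION,
       ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
  GO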

All the sections provide walkthroughs with good descriptions and screenshots. There are some useful references back to the primary cloud chapter (chapter 1). There’s a useful list of cloud backup limitations, with nothing too onerous (except maybe the 1TB size limit?!). I suspect Microsoft will use backing up to the cloud as a way for clients to sample the delights of the cloud (e.g. no DR resources, offsite) in the hope of making further inroads with their other cloud offerings…

 

 

Chapter 3 In-Memory Optimized Tables

Recent SQL Server versions have tended to concentrate on new Business Intelligence (BI) functionality; however, SQL Server 2014 is largely defined by its performance improvements. The working name of this functionality during the betas was Hekaton, reflecting the hope for a 100x increase in performance.

To use In-Memory Optimized tables, the following requirements need to be fulfilled:

 

  • Use the Enterprise edition of SQL Server 2014

  • Have enough memory to hold the In-Memory Optimized tables

  • The CPUs need to support the cmpxchg16b instruction

  • Additional considerations exist for VMs; a Microsoft link expands on this

 

In-Memory Optimized tables are stored in memory to give improved performance. They are compared and contrasted with traditional disk-based tables, e.g. both support ACID (Atomic, Consistent, Isolated, Durable) transactions. In-Memory Optimized tables can exist solely in memory, or additionally be persisted to disk (the default); the former option provides an additional performance benefit.

Example SQL code is provided to create a memory-optimized filegroup needed for the in-memory tables; additionally, the ‘create database’ wizard now also includes this functionality. Next, sample code is provided to create in-memory tables. A very useful list of supported datatypes is given (e.g. varchar(MAX) and XML are not supported), together with a list of table limitations (e.g. no identity columns, no foreign keys, etc.).
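To give a flavour of the filegroup step, a minimal sketch (using my own placeholder database and path names, not the book’s code) looks like this:

  -- Add a memory-optimized filegroup and a container to an existing database (placeholder names/paths)
  ALTER DATABASE InMemoryDemo
  ADD FILEGROUP InMemoryDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
  GO

  ALTER DATABASE InMemoryDemo
  ADD FILE (NAME = 'InMemoryDemo_mod_container',
            FILENAME = 'C:\Data\InMemoryDemo_mod_container')
  TO FILEGROUP InMemoryDemo_mod;
  GO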

The creation of in-memory tables using non-clustered hash indexes and non-clustered indexes is discussed. Hash indexes are not the traditional b-trees (balanced trees); instead they use a hash function with linked lists. In-memory tables must specify an index; it is not possible to have a heap. A brief outline of the structure of in-memory tables (header and payload) is given.
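Such a table might be declared as follows. This is my own illustrative sketch using a hypothetical dbo.SessionState table; the BUCKET_COUNT would normally be sized to the expected number of distinct key values.

  -- Memory-optimized table with a non-clustered hash index on its primary key (hypothetical table)
  CREATE TABLE dbo.SessionState
  (
      SessionId   INT           NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      UserName    NVARCHAR(100) NOT NULL,
      LastUpdated DATETIME2     NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
  GO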

The chapter then progresses into another area of performance improvement: natively compiled stored procedures. These stored procedures can only access in-memory tables, and they are compiled to native code to make them faster. Example SQL code is provided to create a natively compiled stored procedure.
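Such procedures follow a fixed pattern, roughly as in the sketch below (again my own illustration, reusing the hypothetical dbo.SessionState table from the previous sketch); note the mandatory NATIVE_COMPILATION, SCHEMABINDING and EXECUTE AS options and the ATOMIC block.

  -- Natively compiled stored procedure updating the hypothetical in-memory table
  CREATE PROCEDURE dbo.usp_UpdateSession
      @SessionId INT,
      @UserName  NVARCHAR(100),
      @UpdatedAt DATETIME2
  WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
  AS
  BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
      UPDATE dbo.SessionState
      SET UserName = @UserName, LastUpdated = @UpdatedAt
      WHERE SessionId = @SessionId;
  END;
  GO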

The chapter ends with a brief discussion of concurrency, defining ‘dirty read’, ‘non-repeatable read’, and ‘phantom read’, together with the various isolation levels. The point of this seems to be to say that in-memory tables support snapshot, repeatable read, and serializable isolation levels.

Overall this was a useful and interesting chapter. There’s a good overview of how to create the in-memory filegroup, table and natively compiled stored procedure. There are some useful links for further information. It would have been useful to have a link to the various in-memory table limitations. When describing ACID, the author used ‘independent’ for the ‘I’ attribute, whereas ‘isolated’ is the more traditional term. The text mentions creating a table called dbo.shoppingcart, but this does not exist elsewhere in the text or code.

Perhaps the most obvious omission was that of the Analyse, Migrate, and Report (AMR) tool, which is built into SSMS. This allows you to identify which tables are suitable for moving into memory, and the top candidate stored procedures to migrate, based on your system’s processing. At the very least, this very useful tool should have been mentioned, especially since it is likely to be the first point of contact with SQL Server 2014’s premier piece of new functionality – in-memory tables.

 

 

 


