Pro SQL Server Administration

Author: Peter Carter
Publisher: Apress
Pages: 1004
ISBN: 978-1484207116
Print: 1484207114
Kindle: B0157W4JNE
Audience: DBAs and developers
Rating: 5
Reviewer: Ian Stirk

This big book aims to explain SQL Server 2014’s administration capabilities. How does it fare? 

Each release of SQL Server provides ever greater functionality. This wide-ranging book aims to explain both the old and new features.

The book is aimed at existing DBAs, and perhaps curious developers, who want to know the capabilities of SQL Server 2014.

Since this is a large book (1004 pages, spread over 27 chapters, in five parts), I will discuss the content of each part, highlighting the main content, together with any salient observations. 

Part I: Installing and Configuring SQL Server

This part opens with a chapter that looks at planning your SQL Server deployment. Factors discussed include SQL Server editions and licensing models, hardware (e.g. local, SAN), and operating system settings (e.g. power plan, Instant File Initialization). The default Windows power plan is inappropriate for SQL Server, and this is discussed. Azure options are briefly outlined (i.e. a restricted Azure SQL database or SQL Server on a virtual machine [VM]). The chapter ends with a very brief outline of the many components that can be installed.

The next chapter outlines the various installation options. Each of the Installation Center's seven tabs is outlined (i.e. Planning, Installation, Maintenance, Tools, Resources, Advanced, and Options); this is followed by the step-by-step installation of a stand-alone database engine instance.

The next chapter is concerned with installing SQL Server on Windows Server Core, which has no GUI, so installation is performed via the command line. Various considerations are discussed (e.g. the .NET version required), before installation and testing steps are provided. The process can be automated with a configuration file, and this option is briefly discussed.

The last chapter in this part is concerned with configuring the instance. The chapter opens with a review of the sp_configure utility and how it's used to change configuration options. Next, processor and memory settings are examined, and guidelines for the memory required by the OS are given. MAXDOP and parallelism are examined; the author correctly states that ideally you should test various MAXDOP values for your own workloads to determine the optimal setting. The various system databases (i.e. Resource, msdb, master, model, and tempdb) are briefly described.
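Most of the instance settings discussed here are changed via sp_configure. As a minimal sketch (the MAXDOP value of 4 is purely illustrative; as the author advises, test various values against your own workloads):

```sql
-- MAXDOP is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set the maximum degree of parallelism (4 is illustrative only).
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```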

This part provides a wide-ranging and detailed look at installing and configuring SQL Server. The section is generally easy to read, with good explanations and discussions, step-by-step walkthroughs, and plenty of relevant screenshots. The author provides lots of real-world experience, with sensible and practical advice. Useful SQL and PowerShell scripts are provided throughout. Each chapter ends with a useful summary. These traits apply to the whole of the book.

The author suggests the "Cost Threshold for Parallelism" setting as an alternative to lowering MAXDOP; I would argue the setting is an adjunct rather than an alternative to MAXDOP. Sometimes, features are introduced without sufficient explanation (e.g. DMVs, page types), but I guess this depends on the book's intended audience, which appears to be experienced DBAs.

Part II: Database Administration

This part opens with a chapter that looks at a core aspect of administration, the database files and filegroups. Filegroups allow functionality to be applied to a group of files (e.g. partial restores). The various types of filegroup are discussed, namely: standard, FILESTREAM, and Memory-Optimized. There's an interesting discussion on strategies for filegroups, in relation to performance, backup/restore, and storage tiers. The chapter continues with a look at maintenance (adding, expanding, and shrinking files); there's a good point that autogrowth should be a fail-safe feature only. The chapter ends with a look at log maintenance, in relation to the recovery model, Virtual Log Files (VLFs), and shrinking the log.
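Adding a filegroup and a file to it is a one-statement affair each. A minimal sketch, with hypothetical database, filegroup, and path names:

```sql
-- Create a new filegroup on a hypothetical database.
ALTER DATABASE SalesDB ADD FILEGROUP Archive;

-- Add a data file to that filegroup; sizes and the F:\ path are illustrative.
ALTER DATABASE SalesDB
ADD FILE
(
    NAME = 'Archive1',
    FILENAME = 'F:\Data\Archive1.ndf',
    SIZE = 512MB,
    FILEGROWTH = 128MB   -- a fail-safe, not the primary growth mechanism
)
TO FILEGROUP Archive;
```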

The next chapter in the part looks at features that improve the performance and maintainability of tables. Partitioning allows a horizontal slice of a table to be processed, which is very useful for table/index maintenance, and typically improves performance too. Template SQL code is provided to illustrate partitioning. The use of sliding windows to create, merge and delete partitions is discussed and code provided. Table compression is discussed next, and the structures described for both row and page compression. The chapter then looks at memory-optimized tables, which can offer significant performance benefits (up to 30 times faster) owing to their in-memory nature and their lock- and latch-free design. The Table Memory Optimizer Advisor, a utility that performs the migration of a disk-based table to in-memory, is discussed. The chapter ends with a look at natively compiled objects; these work against in-memory tables and use fewer CPU instructions, resulting in faster code. This was a very useful chapter, but somehow I felt the memory-optimized tables didn't sit well with partitioning and compression; maybe they need a distinct chapter?
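The partitioning template the book provides follows the standard function/scheme/table pattern. A minimal sketch of yearly range partitioning, with hypothetical object names and boundary dates:

```sql
-- Boundary values define yearly partitions (RANGE RIGHT: boundary
-- belongs to the partition on its right).
CREATE PARTITION FUNCTION PF_OrderYear (date)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01', '2016-01-01');

-- Map all partitions to one filegroup for simplicity.
CREATE PARTITION SCHEME PS_OrderYear
AS PARTITION PF_OrderYear ALL TO ([PRIMARY]);

-- The table is created on the scheme, partitioned by order date.
CREATE TABLE dbo.Orders
(
    OrderID   int  NOT NULL,
    OrderDate date NOT NULL
)
ON PS_OrderYear (OrderDate);

-- Page compression can then be applied with a rebuild.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```

A sliding window then adds and retires partitions with `ALTER PARTITION FUNCTION ... SPLIT RANGE` and `MERGE RANGE`.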

The next chapter relates to indexes and statistics. The chapter opens by discussing heaps and clustered indexes. The common practice of placing the clustered index on the primary key is discussed, together with occasions when you might want to cluster on other columns.

Next, nonclustered indexes are examined; these duplicate data and are typically used to improve read performance. Covering and filtered indexes are discussed, together with performance considerations (modifications may be slowed by nonclustered indexes). Code is provided to identify unused or rarely used indexes; removing these might improve performance.
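Covering and filtering combine naturally in one index definition. A minimal sketch, with hypothetical table and column names:

```sql
-- A filtered, covering nonclustered index: only 'Open' orders are
-- indexed, and the INCLUDE columns let matching queries avoid
-- key lookups against the base table.
CREATE NONCLUSTERED INDEX IX_Orders_Status
ON dbo.Orders (Status)
INCLUDE (OrderDate, CustomerID)
WHERE Status = 'Open';
```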

Various atypical indexes are briefly discussed next; these include columnstore, in-memory, and XML indexes.

Next, index maintenance is discussed. Changes to the data in indexes can lead to fragmentation, which can degrade performance. The importance of fillfactor and regular defragmentation is discussed. Sample code to identify missing indexes via the DMVs is provided – with the sensible proviso not to implement them blindly.
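The missing-index DMVs the chapter uses can be queried along these lines (a minimal sketch; as the author stresses, treat the output as suggestions, never implement them blindly):

```sql
-- Missing-index suggestions, most-requested first.
SELECT mid.statement          AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig
     ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs
     ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks DESC;
```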

The chapter ends with a look at statistics; these describe the distribution and density of column values, and are used by the optimizer to estimate row counts when creating efficient query plans. Both the automatic updating and the manual management of statistics are discussed.
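Manual management boils down to inspecting and refreshing the statistics objects. A minimal sketch, with hypothetical table and index names:

```sql
-- Inspect the histogram and density for one index's statistics.
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_Status);

-- Manually refresh all statistics on the table, scanning every row.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```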

This chapter provides lots of useful background discussion together with useful code for managing indexes and improving query performance.

The final chapter in this section discusses database consistency: ensuring that, at the physical level, the underlying pages and files are 'correct'. Various common errors are described, together with settings that can help identify problems more quickly (e.g. the page_verify option). DBCC WRITEPAGE is used to intentionally corrupt the database (never do this in a live environment). Useful instructions are provided on rebuilding the system databases and restoring the latest copy of the master database. For instance-level corruption, the repair option on the Installation Center's Maintenance tab can be used.

The chapter continues with a look at DBCC CHECKDB, a utility that can identify and fix errors. Its various parameters are discussed, together with scheduling it to run periodically via a SQL Agent job, and the associated alert notification when errors are detected. There's a section on fixing errors; this should be read with caution, since fixing data errors can mean losing the data. Having good database and log backups is crucial. Other DBCC corruption commands are briefly discussed. The chapter ends with a look at options for consistency checking with very large databases.
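A typical scheduled check, and the last-resort repair the caution applies to, look something like this (the database name is hypothetical):

```sql
-- Routine integrity check, suitable for a scheduled SQL Agent job;
-- suppress informational messages but report every error.
DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Last resort only: requires single-user mode and CAN LOSE DATA.
-- Restoring from a good backup is almost always preferable.
-- ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- DBCC CHECKDB ('SalesDB', REPAIR_ALLOW_DATA_LOSS);
```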

This was another hands-on practical part. Lots of GUI walkthroughs and SQL code, tips, warnings and solutions are provided. Much of the everyday work of the DBA can be found here. Useful examples support the assertions made.

In addition to the Table Memory Optimizer Advisor, maybe the part could have discussed the AMR tool, which identifies the tables that would benefit most from migration to in-memory? In identifying unused indexes, there should be a warning that this information has only been collected since the instance was last restarted, so an apparently unused index might in fact be used monthly, quarterly, etc.
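That caveat is easy to act on: check the instance's start time alongside the usage counters. A minimal sketch of such a check:

```sql
-- Usage counters in sys.dm_db_index_usage_stats reset at restart,
-- so establish how long the counters have been accumulating.
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- Reads vs writes per index in the current database.
SELECT OBJECT_NAME(s.object_id)                   AS table_name,
       i.name                                     AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates                             AS writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
     ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();
```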

Last Updated ( Tuesday, 12 April 2016 )