Professional Microsoft SQL Server 2014 Administration

Chapter 9 In-memory OLTP

In many ways, in-memory OLTP defines the SQL Server 2014 release. It can potentially provide significantly faster processing by using in-memory tables in place of disk-based tables, a lock- and latch-free design that allows greater throughput, and natively compiled stored procedures that execute faster. Performance gains of between 9x and 150x are documented in various case studies.

The chapter opens with a discussion of the internal structures of in-memory OLTP, then shows how to create in-memory tables and natively compiled stored procedures, and how to use the Analyze, Migrate, and Report (AMR) tool. The AMR tool takes a workload and identifies the most appropriate tables and stored procedures to migrate. A step-by-step walkthrough of capturing the workload via the Management Data Warehouse (MDW) is given, with plenty of useful screenshots.
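To give a flavour of the syntax involved, here is a minimal sketch of a memory-optimized table and a natively compiled stored procedure; the table, column, and procedure names are illustrative rather than taken from the book, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

-- Memory-optimized table with a hash index on the primary key.
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure inserting into the table above.
CREATE PROCEDURE dbo.InsertCart @CartId INT, @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId) VALUES (@CartId, @CustomerId);
END;
GO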

In-memory OLTP has been integrated seamlessly into the other SQL Server subsystems. Various limitations are listed, including restricted data types and table relationships, and a link to the Microsoft website provides further information. Typically you will need additional memory to hold the in-memory tables, and this can be managed via the Resource Governor. Monitoring is described via PerfMon and the sys.dm_resource_governor_resource_pools DMV.
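As an illustration of that approach (the pool and database names below are hypothetical, not from the book), a database containing memory-optimized tables can be bound to a dedicated resource pool and its memory usage then monitored through the DMV:

-- Create a pool capped at half of server memory and apply the change.
CREATE RESOURCE POOL InMemoryPool WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database to the pool (takes effect the next time the database is brought online).
EXEC sp_xtp_bind_db_resource_pool @database_name = N'SalesDB', @pool_name = N'InMemoryPool';

-- Monitor memory use per pool.
SELECT name, used_memory_kb, max_memory_kb, target_memory_kb
FROM sys.dm_resource_governor_resource_pools;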

This was an interesting chapter, providing a good introduction to what in-memory OLTP is, with practical example code provided. The AMR tool walkthrough was very useful, since it is likely to be the entry point for migrating to a faster system. There was some confusion over the AMR acronym, which is variously written as ARM.

Chapter 10 Configuring the Server for Optimal Performance

The chapter opens with a discussion of the Performance Tuning Cycle, defining good performance, and focusing on what’s important. This continues with what development DBAs need to know (i.e. who the users are, what SQL is running, data usage patterns, and the physical database design).

There’s a somewhat theoretical discussion of CPU (cores, cache, hyper-threading, and architecture), memory (the 2TB limit, the virtual memory manager, and page faults), and I/O (RAID types, etc.).

There are useful links to other chapters in this book. I felt I wanted more from this chapter (that said, there is much in the subsequent related chapters). There’s a bad edit that should have been caught: “Following are the three most important pillars...” is followed by a list of five items, whereas the previous 2012 edition of the book listed three.

Chapter 11 Optimizing SQL Server 2014

I/O is often the major bottleneck; I/O problems frequently show themselves as a slow system even when the CPUs are not particularly busy. Consideration is given to physical file placement and tempdb (used by all databases, so potentially the most important). Table and index partitioning are discussed as a means of providing better management, with the typical side-benefit of improved performance. The benefits of compression are explained, together with implementation via the GUI and T-SQL.
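For reference, enabling compression in T-SQL amounts to a rebuild; a minimal sketch (the table name is a placeholder, and it is worth estimating the saving first):

-- Estimate how much space page compression would save.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'SalesOrderDetail',
     @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';

-- Rebuild the table and its indexes with page compression.
ALTER TABLE dbo.SalesOrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON dbo.SalesOrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);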

An insight into CPU architecture, cache coherency, affinity masks, and max degree of parallelism (MAXDOP) is provided (and happily cost threshold for parallelism is also discussed); much of this has a theoretical tone. Memory is discussed in a similar manner, including the use of the new Buffer Pool Extension to improve performance. There’s a nice update to the Page Life Expectancy rule of thumb (a ‘good’ value used to be a flat 300 seconds; now it is calculated as max server memory in GB x 75).
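A quick sketch of where those server-level settings live; the values shown are purely illustrative, not recommendations from the chapter:

-- Parallelism settings (illustrative values only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;

-- New in SQL Server 2014: a Buffer Pool Extension file on fast SSD storage.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'E:\SSD\BufferPool.bpe', SIZE = 64 GB);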

The Resource Governor limits the amount of resources (memory, CPU, and now I/O) available to a set of users; this can protect against runaway queries and provide a fairer distribution of resource usage. Example code is provided to create resource pools for different workload groups.
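The gist of such code is sketched below; the pool, group, and login names are hypothetical, and the classifier function must be created in master:

-- A pool with CPU, memory, and (new in 2014) I/O limits.
CREATE RESOURCE POOL ReportingPool
WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25, MAX_IOPS_PER_VOLUME = 1000);

CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO

-- Classifier function routing a hypothetical login to the reporting group.
CREATE FUNCTION dbo.fnClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'ReportUser' RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;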

Again there are some useful links to Microsoft resources for further, deeper information. Overall this is a useful background chapter of things to consider when thinking about SQL Server optimization.

Chapter 12 Monitoring Your SQL Server

The goal of this chapter is for you to know your server: what is running on it, what is normal (the baseline), and what the trend is. The aim of this monitoring is to be proactive rather than reactive.

Many monitoring tools are outlined, including: Performance Monitor (PerfMon), Extended Events, SQL Profiler, SQL Trace, the default trace, Activity Monitor, DMVs/DMFs, system stored procedures, the standard reports, and System Center Advisor (an extension of the Best Practices Analyzer). Several of these tools are explained in greater detail later in the chapter. You can see there are plenty of tools to learn about!

This chapter provides a good overview of the tools available for monitoring, and when and where they’re appropriate to use. The content is enough to get you started, but will not make you an expert, e.g. the PerfMon section doesn’t mention that you can analyse the logs using Performance Analysis of Logs (PAL).

Chapter 13 Performance Tuning T-SQL

This chapter opens with an overview of how a SQL query is broken down (parsing, algebrization, and optimisation) and executed. Factors affecting the execution plan are examined, including statistics, and the new cardinality estimator. The estimated plan and actual plan are explained, and then examined using some typical scenarios - table scans, indexes, various join types [hash, merge, and nested loop], and data modifications.
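To make the cardinality estimator discussion concrete, here is a hedged sketch of the kind of comparison involved (the table names are invented; trace flag 9481 reverts a query to the pre-2014 estimator):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run with the new SQL Server 2014 cardinality estimator (compatibility level 120).
SELECT c.CustomerId, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerId;

-- The same query forced onto the legacy estimator for comparison.
SELECT c.CustomerId, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerId
OPTION (QUERYTRACEON 9481);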

There’s an interesting section on examining query performance in a production environment. Owing to the sensitive nature of production systems, the analysis here centres on the use of DMVs.
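A typical query of that kind ranks cached statements by logical reads; this is a common pattern rather than necessarily the book's exact code:

-- Top 10 cached statements by total logical reads.
SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_worker_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;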

The chapter provides good advice on various considerations for improving SQL performance, and in many cases useful example SQL is provided. While all the information is useful in tuning your SQL, to me it seemed at times a little disjointed. Some of the more traditional methods of improving your SQL should have been included here.

Chapter 14 Indexing Your Database

Indexes are perhaps the major method of improving the performance of your queries. This chapter opens with a discussion of the new columnstore index functionality, namely that columnstore indexes can now be updatable and clustered. There’s an interesting grid showing index functionality together with the version of SQL Server it was introduced in. Other features discussed include: memory-optimized indexes, filtered indexes and statistics, compression, spatial indexes, use of INCLUDE, parallel index operations, the version store, and the Database Engine Tuning Advisor (DTA).

The different types of indexes are explained (clustered, non-clustered, covering, filtered), together with the different ways indexes can be used (seek, scan, lookup). This is followed by a discussion of the advantages of partitioning tables and indexes. Index maintenance is examined, looking at page splits, fill factor, and fragmentation.
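For orientation, two of the index types mentioned above in minimal form (the table, column, and filter names are placeholders):

-- A clustered columnstore index: in SQL Server 2014 it is updatable
-- and acts as the table's primary storage.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- A filtered non-clustered index with included columns.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate)
WHERE Status = 'Open';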

Query performance can be improved by implementing missing indexes, removing unused indexes, and defragmenting fragmented indexes. Additionally, the DTA can be used to recommend indexes based on a workload. All these methods have drawbacks, so be careful when implementing them.
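The missing-index DMVs give a feel for the first of these; a standard query of the kind involved (treat its suggestions as hints to investigate, not commands):

-- Missing index suggestions, ranked roughly by potential benefit.
SELECT mid.statement AS table_name,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.user_seeks, migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
     ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
     ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;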

Overall, this is a useful chapter; some areas felt a bit light (e.g. only one page for the DTA), but it is more than adequate at an introductory/intermediate level. There was no link given to the index maintenance section in the previous chapter.

Chapter 15 Replication

Replication is a means of copying data, and it needs to be specified on a per-object basis. An interesting overview is provided, using an analogy between replication and the magazine publishing process. The outline includes articles, their selection, publishing, and subscribers. Additionally, a distributor is used for delivery and monitoring.

Much of replication depends on the use of SQL Server Agent jobs, e.g. the snapshot agent, log reader, distribution, merge, and queue reader agents. Again, links back to the earlier SQL Server Agent material should have been made.

The main replication types (snapshot, transactional, and merge) are discussed in detail, as are the various topologies (one or more subscribers to one or more publishers). Implementation details are given, together with tools to monitor replication.

This chapter contains a good overview of what replication is, and its various configurations. It might have been worthwhile to explain it in the context of other high availability (HA) / disaster recovery (DR) options.

Chapter 16 Clustering SQL Server 2014

Clustering is another High Availability (HA) option. The chapter opens with what clustering can and cannot accomplish, closely followed by a discussion of other HA options (cold/warm backup server, log shipping, replication, mirroring, and AlwaysOn). A theoretical discussion of clustering follows. Then comes a more practical section on preparing for clustering, and a step-by-step walkthrough of installing clustering (both Windows and SQL Server), with plenty of relevant screenshots. This is followed by a troubleshooting section, which is probably inadequate (the bottom line seems to be ‘…work with Microsoft to resolve your problems’).

The practical step-by-step installation example should prove very useful. This chapter contained a good discussion of the various HA options, but since that discussion is relevant to all the HA options (there is a chapter on each), perhaps having it in one place would have been better.

Chapter 17 Backup and Recovery

Backup and recovery are the most important tasks for a production DBA; downtime can be both costly and unacceptable, so it is critical to have an in-depth understanding of the topic.

The chapter starts with an overview of the backup and restore enhancements in 2014, namely: backup to a URL, managed backups to Windows Azure, and natively encrypted backups.
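Roughly, the new options look like this (the URL, credential, and certificate names are placeholders; in 2014 a URL backup authenticates via a credential holding the storage account key):

-- Backup to Azure blob storage, encrypted with a server certificate.
BACKUP DATABASE SalesDB
TO URL = 'https://myaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'MyAzureCredential',
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);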

There is a detailed section on the various types of data backup (full, partial, file/filegroup, differential, partial differential, file differential, and copy-only), and transaction log backup (pure, bulk, and tail). Methods of copying databases are discussed (detach/attach, BCP, and scripting wizard) with plenty of screenshots and example T-SQL provided. Recovery models (full, bulk logged, and simple) are compared and contrasted. The importance of verifying backups is also discussed.
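For reference, the main backup types reduce to variations on the same commands (the database name and file paths are placeholders):

BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_full.bak' WITH CHECKSUM;
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL, CHECKSUM;
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_copy.bak' WITH COPY_ONLY;
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log.trn' WITH CHECKSUM;

-- Verify a backup without restoring it.
RESTORE VERIFYONLY FROM DISK = 'E:\Backups\SalesDB_full.bak' WITH CHECKSUM;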

There is a practical step-by-step example of creating and executing a backup plan, using both SSMS (plenty of screenshots), and T-SQL. This is followed by a corresponding example of a restore. The importance of archiving and partitioning data is explained.

This chapter is quite wordy in parts, but I feel in this case it is necessary. The majority of backup/restore work is about planning and preparation (and please, test your plans regularly!). I cannot understand why this backup and restore chapter is wedged between the HA chapters.
