SQL Server Query Tuning and Optimization (Packt)
Chapter 7 In-Memory OLTP

This chapter starts with a review of the history of database architecture. Initially, memory was expensive, so small amounts of RAM were used and data was persisted to disk. Memory is now much cheaper, so this model can change to make greater use of RAM – this is especially useful for OLTP systems, which can typically fit entirely into memory. In-Memory OLTP is optimized for memory, and consists largely of memory-optimized tables and indexes, natively compiled stored procedures, and a lock- and latch-free concurrency model – all of which can increase performance significantly. Unlike other vendors, Microsoft's In-Memory OLTP technology is fully integrated into the existing architecture (i.e. old and new can be used together).

The chapter examines In-Memory OLTP tables and indexes. These can be accessed quickly via natively compiled stored procedures, as well as via more traditional interpreted SQL. The lock- and latch-free multiversion concurrency control (MVCC) model permits multiple versions of a row to exist; only one is eventually valid, and the rest are removed by the garbage collector. Indexes are never stored on disk or logged; they are rebuilt after a restart.

The chapter continues with example code to create an In-Memory OLTP filegroup, tables, and indexes. The table data can be persisted to disk (this is the default) for recoverability purposes, via the SCHEMA_AND_DATA durability setting. In-Memory OLTP tables and natively compiled stored procedures have limitations, which are briefly discussed. The structure of a row is briefly discussed: a row header and a payload. There are two types of indexes: hash indexes and memory-optimized nonclustered indexes (previously called range indexes). Hash indexes are used for equality predicates and lookups; you need to supply all of the index's columns to use such an index.
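The setup the chapter describes can be sketched roughly as follows (this is not the book's code; the database name, file path, and table are hypothetical – adjust to your environment):

```sql
-- Add a memory-optimized filegroup and container to an existing database.
ALTER DATABASE SalesDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

-- A memory-optimized table with both index types:
-- a hash index (equality lookups) and a nonclustered (range) index.
CREATE TABLE dbo.OrderLookup
(
    OrderId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    INDEX ix_customer NONCLUSTERED (CustomerId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

DURABILITY = SCHEMA_AND_DATA is the default mentioned in the review; SCHEMA_ONLY would keep only the table definition across restarts.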
Hash indexes have a bucket_count that pre-allocates the number of buckets for the hashes – it is important to get this value right: too few buckets means many rows hash to the same bucket, forming long chains, while too many waste memory. Information about hash indexes can be obtained from the DMV sys.dm_db_xtp_hash_index_stats. Memory-optimized nonclustered indexes are used for equality or inequality predicates, can return sorted data, and are typically better for inequality searches and scans. It is possible to search using some of the leading columns of the index. A new Bw-tree structure is used to navigate the index. Useful example code shows how index access is linked to the plan content.

Natively compiled stored procedures can only access In-Memory OLTP tables. They are compiled to native code and use fewer CPU instructions, giving improved performance. The syntax for creating natively compiled stored procedures is discussed, and the impact of isolation level is explained. Statistics on In-Memory OLTP tables/indexes are now automatically updated (since SQL Server 2016) to take advantage of new data. However, any associated natively compiled stored procedures need to be recompiled to take advantage of the new statistics. Various limitations of tables and natively compiled stored procedures are described.

This chapter provides useful information about the In-Memory OLTP engine, which is probably the main reason to migrate from pre-SQL Server 2014 versions, since it potentially offers big performance improvements. There are some useful diagrams included, e.g. of the index structures. Although there's a brief discussion of row structure, there is no discussion of the columns in the row header – this could have been useful in explaining in greater detail how MVCC works.

Chapter 8 Understanding Plan Caching

After going through the optimization process, the optimizer produces a plan. Creating this plan can be expensive, so it makes sense to reuse a plan where possible.
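To illustrate the two features above (my own sketch, not the book's code; the dbo.OrderLookup table and dbo.GetOrder procedure are hypothetical names):

```sql
-- Check bucket_count sizing: many empty buckets suggests over-allocation,
-- long average chains suggest too few buckets or many duplicate key values.
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.total_bucket_count, s.empty_bucket_count,
       s.avg_chain_length, s.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS s
JOIN sys.indexes AS i
  ON s.object_id = i.object_id AND s.index_id = i.index_id;

-- A minimal natively compiled stored procedure; note the required
-- NATIVE_COMPILATION and SCHEMABINDING options and the ATOMIC block.
CREATE PROCEDURE dbo.GetOrder @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT OrderId, CustomerId
    FROM dbo.OrderLookup
    WHERE OrderId = @OrderId;
END;
```

The ATOMIC block's isolation level setting is where the isolation-level impact the chapter discusses comes into play; SNAPSHOT is the least restrictive option.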
This chapter discusses plan caching, plan reuse, and the problems associated with plan reuse. The plans are stored in the plan cache. The chapter opens with a look at the steps in the compilation and recompilation process, with a helpful diagram of each. Performance Monitor (PerfMon) can be used to determine the number of recompilations; if there are too many, SQL Trace or Extended Events can be used to find their causes – a useful short example of this is provided.

The plan cache can be explored using the DMV sys.dm_exec_query_stats, examining accumulated IO, CPU, etc. While this DMV has limitations, it tends to be quick and easy to use. The use of the DMV sys.dm_exec_cached_plans is also discussed, including its usecounts, cacheobjtype, and objtype columns. There's a useful note stating that Query Store is now the preferred method of capturing performance and plan information. The section ends with a review of the methods of removing plans from the cache; these include clearing all of the cache, the cache for a given database, and the plan for a given query.

The chapter moves on to discuss parameterization. Queries that are the same typically reuse the same plan; however, SQL Server tends to be quite conservative when the SQL differs slightly. It is possible to encourage or force SQL Server to relax these rules and thus reuse the same plans for related queries. The section discusses these methods, together with autoparameterization, the "optimize for ad hoc workloads" setting, forced parameterization, and the use of stored procedures. The plan is typically created based on the values of the parameters used – this is called parameter sniffing, and is generally a good thing. If these values are typical, then good reuse should occur. However, if the values are atypical, subsequent runs can be slow. Various methods of overcoming the problems associated with parameter sniffing are discussed.
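The kind of exploration and cache-clearing described above looks roughly like this (a common pattern, not taken from the book; run cache-clearing commands on production systems with care):

```sql
-- Top 10 cached query statements by accumulated CPU time.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  AS total_cpu_us,
       qs.total_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

-- Removing plans at the three scopes the chapter reviews:
DBCC FREEPROCCACHE;                                         -- entire cache
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;  -- current database (2016+)
-- DBCC FREEPROCCACHE(plan_handle);                         -- one specific plan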
SQL Server 2022 introduces Parameter Sensitive Plan optimization, which should eliminate many parameter sniffing problems. In essence, multiple plans relating to multiple parameter values are cached and, as part of Intelligent Query Processing, the appropriate one is selected at runtime. A useful example is given.

This chapter provided a good overview of how a plan is cached and reused, how reuse can be encouraged, the problems parameter sniffing can cause, and the options for resolving them. There's a useful diagram of the compile and recompile process and its various decision steps. Perhaps the chapter should have noted that, in addition to PerfMon, you can get the SQL Server performance counters for compiles/recompiles from the DMV sys.dm_os_performance_counters.

Chapter 9 The Query Store

Query plans inform SQL Server how to physically implement your SQL code. Sometimes, for a variety of reasons (e.g. updated statistics), a new query plan is created for the same query, and the query can run slower. The Query Store allows you to troubleshoot and solve problems relating to changing execution plans. New in SQL Server 2022 is the use of Query Store to provide feedback for memory grants, cardinality estimation, and degree of parallelism. Query Store can now be enabled on secondary replicas in Always On Availability Groups. Additionally, it is possible to supply hints to the execution plan via Query Store.

The chapter starts with an overview of the Query Store, which automatically captures the history of queries, execution plans, and runtime statistics. It is possible to force the query processor to use one of the previously faster plans, and thus restore performance. Next, there's a look at how to configure and use Query Store, using both T-SQL and a GUI. There's a brief look at some of the Query Store's built-in reports (e.g. Top Resource Consuming Queries), and you're encouraged to explore its features for yourself. Lastly, querying the Query Store is discussed.
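Configuring Query Store and forcing a previously good plan, as described above, can be sketched as follows (illustrative only; the database name and the query/plan ids are hypothetical – the real ids come from the Query Store reports or catalog views):

```sql
-- Enable Query Store on a database in read/write mode.
ALTER DATABASE SalesDB SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);

-- Force the query processor to use a known-good plan for a given query.
EXEC sp_query_store_force_plan @query_id = 3, @plan_id = 5;

-- Undo the forcing if the plan later becomes a bad choice.
-- EXEC sp_query_store_unforce_plan @query_id = 3, @plan_id = 5;
```

Forcing a plan is a stop-gap that restores performance quickly; the underlying cause of the plan change (e.g. statistics or schema changes) is still worth investigating.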
There's an interesting look at obtaining queries that have timed out or been cancelled. The catalog views that underlie Query Store are examined, with useful example code. This is a brief chapter, and much more could have been included; indeed, the author refers to his book High Performance SQL Server for further information – see my review here.
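Querying the Query Store catalog views for aborted or failed executions, as mentioned above, looks roughly like this (my own sketch of the pattern, not the book's code):

```sql
-- Queries whose executions were aborted (e.g. timed out / cancelled by the
-- client) or ended with an exception, from the Query Store catalog views.
SELECT q.query_id,
       qt.query_sql_text,
       rs.execution_type_desc,
       rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
  ON rs.plan_id = p.plan_id
WHERE rs.execution_type_desc IN (N'Aborted', N'Exception');
```

Query Store aggregates runtime statistics separately per execution type, which is why regular, aborted, and exception executions appear as distinct rows here.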
Last Updated ( Wednesday, 23 November 2022 )