Enhance your SQL Server performance by following the SQL Server Storage Best Practices advice in this month’s SQL Server Pro.
The items in bold below correspond to the names of articles in the March issue of SQL Server Pro, an online subscription-based monthly magazine.
On many database systems, the limiting factor for performance is Input/Output (I/O). With this in mind, you will appreciate the significance of Michael Otey's SQL Server Storage Best Practices. Please don't be put off by the mundane title; this is important stuff! The article opens with an overview of data and log files, and some best practices for them:
- Place data and log files on different drives
- Enable AutoGrow (not sure I agree with this)
- Disable AutoShrink
- Enable Instant File Initialization
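Most of these settings can be applied with a few T-SQL statements. The sketch below is illustrative only and not code from the article; the database name "SalesDB", the logical file names, and the sizes are placeholders. Note that Instant File Initialization is enabled at the Windows level (the "Perform volume maintenance tasks" policy for the SQL Server service account) rather than through T-SQL.

```sql
-- Disable AutoShrink (recommended); keep AutoGrow as a safety net,
-- but with a fixed-size increment rather than a percentage.
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;

-- Pre-size the data and log files so AutoGrow events are rare.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, SIZE = 10GB, FILEGROWTH = 512MB);

ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 2GB, FILEGROWTH = 512MB);
```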
The various disk RAID types are examined next, i.e. RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), and RAID 10 (mirroring with striping). The importance of the RAID types is discussed in terms of performance and protection.
Tempdb is then examined. It can be used by all the other databases and also by SQL Server itself internally, so it is often the most active database. The article suggests for tempdb:
- Place its log and data files on separate drives
- Use RAID 1 or RAID 10 to improve its performance
- Size it adequately to avoid AutoGrow events
- Split it over a number of files
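As a rough sketch of the last two suggestions (again illustrative, not the article's code; the file path and sizes are placeholders), tempdb can be pre-sized and given additional data files like this:

```sql
-- Pre-size the existing tempdb data file to reduce AutoGrow events.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2GB, FILEGROWTH = 512MB);

-- Add a second, equally sized data file; repeat as appropriate
-- (one file per core, up to around eight, is common guidance).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb2.ndf',
          SIZE = 2GB,
          FILEGROWTH = 512MB);
```

The changes to tempdb take effect after the SQL Server service is restarted.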
Solid State Disks (SSDs) are increasingly being used to boost performance. The article discusses the advantages of moving indexes, data, logs and tempdb to SSDs, whilst highlighting their shorter life cycle compared with hard disks.
Finally, the importance of performance baselines is described, i.e. they provide support in troubleshooting and help identify growth trends. Various storage-related Performance Monitor counters are discussed together with their typical values (so you can identify if something is awry).
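Alongside Performance Monitor, SQL Server's own DMVs give a quick per-file view of I/O latency that you can compare against a baseline. This query is my own hedged example, not from the article:

```sql
-- Snapshot per-file I/O statistics; high average latencies (ms)
-- relative to your baseline suggest a storage bottleneck.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id
ORDER BY avg_read_ms DESC;
```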
If you want to enhance the performance of your SQL Servers, investigating their storage is essential, so you should check the suggestions given in this article, together with the links provided. This is a very useful and well-balanced article, with plenty of range and depth.
Recent issues have looked ahead to SQL Server 2014 and this month has further coverage in More FAQs about SQL Server 2014 In-Memory OLTP. The questions answered include:
- Will it be included in the Standard Edition? (not expected)
- Is it the same as the old PINTABLE capability? (no, there are no locks or latches)
- Will it solve any type of performance problem? (no, it won't compensate for bad design etc.)
- Can I run the AMR tool on pre-2014 database versions? (yes, but you need the 2014 version of SSMS)
I’m sure everyone is looking forward to the significant (and relatively cheap) performance improvements the new version will bring.
How can you be sure that your changes have improved performance? Michael K. Campbell shows how this can be answered by Creating Simple Performance Baselines with SQL Server Profiler. The article provides step-by-step guidance, together with screenshots, on how Profiler can be used to measure aggregated performance metrics before (i.e. a baseline) and after a change. Some useful SQL code is provided to determine if the change is indeed advantageous. Now there should be no excuse for not knowing whether your changes are for the better!
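To give a flavour of the approach (this is a hypothetical sketch, not the article's code), if each Profiler trace is saved to a table, the metrics can be aggregated per query and the baseline table compared against the post-change one:

```sql
-- Aggregate a saved trace; "dbo.BaselineTrace" is a placeholder name.
-- Duration is stored in microseconds in trace tables, hence / 1000.0.
SELECT CONVERT(nvarchar(4000), TextData) AS query_text,
       COUNT(*)                  AS executions,
       AVG(Duration / 1000.0)    AS avg_duration_ms,
       AVG(Reads)                AS avg_reads,
       AVG(CPU)                  AS avg_cpu_ms
FROM dbo.BaselineTrace
GROUP BY CONVERT(nvarchar(4000), TextData)
ORDER BY avg_duration_ms DESC;
```

Running the same query against the after-change trace table lets you compare the two sets of averages directly.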
The editorial convincingly argues The PC Is Far From Dead. The crux of the argument is that smartphones and tablets are becoming increasingly popular in the consumer space, but PCs have the advantage in the business space. Additionally, PCs tend to be much better at content creation. I suspect the future will show these devices are complementary rather than competitors.
Last month’s puzzle Identifying a Subsequence in a Sequence provided three iterative solutions. This month, the puzzle is revisited with three set-based solutions, two of which are slower than the fastest iterative solution, and one is equally as fast. The set-based solutions are based around: FOR XML PATH, COUNT, and NOT EXISTS. Set-based solutions are generally preferred since they fit well with the relational model, use less code, and are typically more elegant. As always, articles by Itzik Ben-Gan provide plenty of food for thought, and both parts of this one are online at the SQL Server Pro site and available to all.
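To illustrate the NOT EXISTS style of solution (my own sketch, not Ben-Gan's code), suppose the sequences are held as tables T(pos, val) and S(pos, val) with contiguous positions starting at 1 and non-NULL values. A position in T starts a match when no element of S disagrees with the corresponding element of T:

```sql
-- For each candidate start position in T, a match holds when there is
-- no row of S whose aligned row in T is missing or has a different value.
SELECT t.pos AS start_pos
FROM T AS t
WHERE NOT EXISTS
      (SELECT 1
       FROM S AS s
       LEFT JOIN T AS t2
         ON t2.pos = t.pos + s.pos - 1
       WHERE t2.val IS NULL        -- S runs past the end of T
          OR t2.val <> s.val);     -- values disagree at this offset
```

This "no counterexample exists" phrasing is a classic relational-division idiom and shows why set-based solutions often read more declaratively than their iterative counterparts.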
Allen White argues that knowing more tools will allow a greater range of problems to be solved using the most appropriate tool. He demonstrates this in Create Database Tables in PowerShell with SMO, another article that can be found online. The author shows how PowerShell can be used to create tables by interacting with SQL Server Management Objects (SMO). While acknowledging PowerShell is not the best tool for creating tables, this article will form the base for a series, with examples that are less easily implemented in SQL Server Management Studio. I can’t help thinking a better initial example should have been provided – in essence he’s using the wrong tool…
The database corruption series continues with The Importance of Backups. However, this brief article says little beyond the fact that backups are very important for recovering from corruption. The topic deserved wider treatment, perhaps combined with another subject; this was a missed opportunity.
The cover story takes A look at the SSIS Object Model, and in the process builds a custom data flow component. Data flow components let you connect to various data sources, and transform and redirect the data quickly. The extensibility of the object model allows you to, for example, create a source component that can read from various NoSQL databases. The types of data flow components, their structure and modes are also discussed. If you need to write a custom data flow component, perhaps for a new data source, this article, with its step-by-step walkthrough, informative diagrams, and plenty of sample code, has done much of the hard work for you.
Collations in SQL Server define the rules for sorting and comparing values for a given language. Sometimes you may need to change a database's collation, and unless this is done correctly, you'll encounter some unusual problems. A detailed step-by-step walkthrough of the change process is provided in A Seven-Step Process for Changing a Database’s Collation. The steps discussed are:
- Check the Database (DBCC CHECKDB)
- Perform a Backup
- Change the Collation at the Database Level
- Find All Table Columns Whose Collations Must Be Changed
- Change the Collation at the Table Level
- Change the Collations of the Views
- Rebuild the Indexes
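A few of the steps above can be sketched in T-SQL as follows. This is an illustrative outline only, not the article's code: "MyDB", the backup path, the target collation, and the table/column names are all placeholders, and in practice the database typically needs to be in single-user mode for the database-level change.

```sql
DBCC CHECKDB (MyDB);                                         -- step 1: check
BACKUP DATABASE MyDB TO DISK = 'D:\MyDB_precollation.bak';   -- step 2: backup

ALTER DATABASE MyDB COLLATE Latin1_General_CI_AS;            -- step 3: database level

-- Step 4: find character columns whose collation still differs.
SELECT OBJECT_NAME(c.object_id) AS table_name,
       c.name                   AS column_name,
       c.collation_name
FROM sys.columns AS c
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> 'Latin1_General_CI_AS';

-- Step 5: change each such column individually, e.g.
ALTER TABLE dbo.Customers
ALTER COLUMN LastName nvarchar(50) COLLATE Latin1_General_CI_AS;
```

Rebuilding the indexes afterwards (step 7) is needed because indexes on character columns embed the old sort order.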
Although this is a relatively rare change, when it is needed it is essential that it is done correctly, so this detailed article is to be welcomed.
This month’s magazine was a mixed bag. I loved the storage best practices article, containing plenty of suggestions to improve system performance. Of course, anything on SQL Server 2014 is to be welcomed. I’m not sure many people will find the collation article of interest, but be assured, it could be essential for you one day. The article on backups lacked detail. That said, overall, I found this month’s magazine an interesting read.
More Information
SQL Server Pro March 2014
Related Articles
Record Drop In PC Shipments
Tablets Lure Users Away From Desktops and Laptops
Looking Forward to SQL Server 2014 (September issue)
In-Memory OLTP In Depth