Tempdb Tweaking: Best Practices to Improve SQL Server Performance

The tempdb system database is a global resource available to all users connected to a SQL Server instance. You can think of tempdb as the scratchpad of SQL Server: any improvement to it automatically affects all your user databases. It is therefore highly recommended to fine-tune the tempdb configuration as much as you can. In this blog post, I’d like to highlight some best practices for increasing its performance!

The tempdb system database is an internal mechanism to store:

  • Temporary user objects such as tables, table variables and cursors (e.g., tables whose names start with “#”);
  • Internal objects created by SQL Server, such as worktables (e.g., to sort a result set);
  • Row versions generated by data modifications during online index operations, AFTER triggers and other internal work done by SQL Server;
  • Row versions generated by transactions running under read-committed snapshot isolation or snapshot isolation.
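As a quick illustration of the first category (the table name here is just an example of mine), a temporary table created in a user session is actually allocated in tempdb and shows up in its catalog under a decorated name:

```sql
-- Create a session-scoped temporary table; it is allocated in tempdb.
CREATE TABLE #OrderStaging (OrderID INT PRIMARY KEY, Amount DECIMAL(10, 2));

-- The object is visible in tempdb's catalog, suffixed with an internal identifier.
SELECT name, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#OrderStaging%';

DROP TABLE #OrderStaging;
```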

1. Split files according to the number of CPUs

To fine-tune tempdb’s performance, you first need to create additional tempdb data files to maximize disk bandwidth. As a rule of thumb, the number of tempdb data files should equal the number of logical CPUs in your server. Note that a dual-core CPU counts as two CPUs! In any event, do not go beyond eight data files, no matter how many additional cores are available on the machine.

The following snippet can be used to add additional data files to the tempdb database.

USE master;
GO
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'X:\tempdb2.ndf', SIZE = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'X:\tempdb3.ndf', SIZE = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'X:\tempdb4.ndf', SIZE = 256MB);
GO

Since SQL Server 2016, the creation of multiple tempdb data files is covered during installation. Microsoft did a good job by building this into the setup wizard.

(Screenshot: configuring the number of tempdb files in the SQL Server 2016 setup wizard)
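To check whether your instance already follows this guideline, you can compare the number of logical CPUs that SQL Server sees with the number of tempdb data files currently configured. A query along these lines should do the trick:

```sql
-- Number of logical CPUs visible to SQL Server.
SELECT cpu_count FROM sys.dm_os_sys_info;

-- Number of tempdb data files currently configured.
SELECT COUNT(*) AS tempdb_data_files
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```

If the second number is lower than the first (capped at eight), adding data files as shown above is worth considering.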

2. Move to a separate fast disk and measure performance

Reduce read and write latency by placing the tempdb data and log files on separate, fast (local) storage. The preferred choice here is a high-RPM HDD or, even better, a local SSD. Lower access times to the log and data files will impact performance instantly. On top of that, the data no longer travels over the network, which saves bandwidth as well.

If your enterprise only offers virtual machines with SAN access, there is little benefit in moving those files to another disk. The SAN arranges the disks internally and spreads the load itself. This means that you’ll be writing to the same storage system with the same performance – even if you assign another virtual disk.

• Involve the storage team

I recommend involving the storage team in your story and asking them the question: “How do we get faster disk access if direct access to a local disk is impossible?”. They might have a separate storage tier offering lower read/write latency, or some other useful alternative. Consider using “Raw Device Mapping” (RDM) if you run on the VMware platform, or “Pass-Through Disks” on Hyper-V. Join the conversation about why a certain technology has or has not been selected and provide the storage team with your technical requirements. Since tempdb should be treated differently from any other data or log file, I can guarantee that the conversation will become very interesting!

• Performance speed test

If your options are limited, a performance speed test can help you figure out which disk performs best. Local disks, however, do not always guarantee better read/write access times than a SAN. Pay attention to this when performing a comparison test!
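Besides synthetic disk benchmarks, SQL Server itself keeps cumulative I/O statistics per file. A query like the following (a sketch, using the standard `sys.dm_io_virtual_file_stats` DMV) shows the average read and write latency per tempdb file since the last restart, which makes a before/after comparison easy:

```sql
-- Average read/write latency (ms) per tempdb file since the last restart.
SELECT mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```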

• Failover clustered installations

Since SQL Server 2012, tempdb is also supported on local disks for failover clustered installations. After a failover has taken place, the files are simply recreated from scratch. A nice side effect is that those files do not need to be recovered either, which decreases the downtime in case of a failover.

The following snippet can be used to move the tempdb data and log files to a new location. Note that the new paths only take effect after the SQL Server service has been restarted.

USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\tempdb1.mdf', SIZE = 1GB);
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\tempdb.ldf', SIZE = 5GB);
GO

3. Allocate the size of tempdb correctly

Avoid situations where SQL Server needs to grow the tempdb data or log files because no free space is left. Autogrowth is a heavy operation that costs time and causes file fragmentation. It is therefore essential to allocate enough disk space to your data and log files up front!

Additionally, ensure that the data files used for tempdb are of equal size. If the files are not created equally (or one file has been grown), SQL Server’s proportional fill algorithm will favor the largest file for Global Allocation Map (GAM) allocations rather than spreading the allocations evenly over all files.
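Building on the earlier snippet, equal sizing can be enforced with `ALTER DATABASE ... MODIFY FILE`. The sketch below (file names `tempdev` and `tempdev2` are assumed from the examples above; adjust them to your own configuration) gives every data file the same size and the same growth increment, so proportional fill spreads allocations evenly:

```sql
-- Give every tempdb data file the same size and growth increment,
-- so the proportional fill algorithm spreads allocations evenly.
USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 1GB, FILEGROWTH = 256MB);
GO
```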

Happy tempdb tuning!

Given the internal purpose of tempdb, optimization will clearly benefit every database on your SQL Server. Personally, I find that Microsoft is doing a great job with SQL Server 2016, since some of these remarks have already been tackled in the installation wizard.

For intensively used databases with a high transaction volume (such as the BizTalk MessageBox database), it is all the more important to investigate these values and optimizations in detail, as they will certainly benefit.