Blog | Feb 26, 2014
Knowing Tempdb – Bringing the Best Out Of Tempdb
There are many misconceptions and myths about tempdb, and the purported best practices are inconsistent at best. The purpose of this blog is to help you make the right decisions when managing tempdb and to determine the best solution for a given scenario.
To build that understanding, we will look at what tempdb is, how it is used, and some of the most common problems along with ways to prevent them. At the end we will provide some best practices for configuring and monitoring tempdb.
What is Tempdb?
The tempdb system database is similar to other user databases, except that it does not persist after SQL Server shuts down: on every restart, tempdb is recreated as a copy of the model database.
A few more points on Tempdb:
• Only one filegroup is allowed for data, and one for the log.
• Auto shrink is not allowed for tempdb.
• The database CHECKSUM option cannot be enabled.
• A database snapshot cannot be created on tempdb.
• DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
What uses Tempdb?
• Temporary user objects
- Temp tables
- Table variables
- Temp procedures
- Global temp tables
• Work tables/ work files / intermediate results
• Index Rebuilds
• Some GROUP BY, ORDER BY, or UNION operations:
- Hash joins and hash aggregates
- Spills to tempdb
- Temporary LOB storage
- Common table expressions (CTEs)
• Version store for data modification transactions from:
- Read committed snapshot isolation
- Snapshot isolation
- Online index operations
- AFTER triggers
- Multiple Active Result Sets (MARS)
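As an illustration, all three of the temporary user object types below are allocated in tempdb (a minimal sketch; the table and column names are hypothetical):

```sql
-- Local temp table: visible to this session only, stored in tempdb.
CREATE TABLE #RecentOrders (OrderID INT PRIMARY KEY, OrderDate DATETIME);

-- Table variable: despite a common myth, it is also backed by tempdb.
DECLARE @RecentOrders TABLE (OrderID INT PRIMARY KEY, OrderDate DATETIME);

-- Global temp table: visible to all sessions until the last referencing
-- session closes.
CREATE TABLE ##SharedOrders (OrderID INT PRIMARY KEY, OrderDate DATETIME);
```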
Tempdb space usage:
The following 3 types of objects can occupy Tempdb:
1. Internal Objects: Created internally by SQL server.
2. Version Stores: Used to store row versions generated by transactions for features such as snapshot isolation, triggers, MARS and Online index rebuild.
3. User Objects: Includes both user defined tables and indexes, system catalog tables and indexes.
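The space each of these three object types currently occupies can be checked with the sys.dm_db_file_space_usage DMV. A sketch (run in the context of tempdb; page counts are converted to MB at 8 KB per page):

```sql
USE tempdb;

SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM sys.dm_db_file_space_usage;
```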
Now, let’s understand what tempdb is used for followed by performance issues and ways to prevent them.
One of the major performance problems with tempdb is contention on allocation pages. There are three types of pages that experience this issue:
- Page Free Space (PFS): Tracks the allocation status of each page and approximately how much free space it has. There is one PFS page for roughly every 64 MB (8,088 pages) of data file. The first PFS page is page number one of the data file.
- Global Allocation Map (GAM): Tracks which extents have been allocated. There is one GAM page for every 4 GB of the data file. The first GAM page is page number two in the data file.
- Shared Global Allocation Map (SGAM): Tracks which extents are being used as mixed (shared) extents. There is one SGAM page for every 4 GB of data file. The first SGAM page is page number three in the data file.
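Allocation-page contention surfaces as PAGELATCH waits on these well-known page numbers in tempdb (database_id 2). A sketch of a check against sys.dm_os_waiting_tasks:

```sql
-- Sessions currently waiting on latches for tempdb pages.
-- resource_description has the form database_id:file_id:page_id;
-- in each data file, page 1 = PFS, page 2 = GAM, page 3 = SGAM.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```

If most waits land on pages 1, 2, or 3 of tempdb's data files, you are looking at allocation contention rather than ordinary data-page latching.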
Ways to prevent the above problem:
Starting with a proper configuration of tempdb is a must. In addition, follow the recommendations below:
1. Data File per CPU ratio:
Microsoft recommends one data file per logical CPU, up to eight files; on servers with more than eight logical CPUs, start with eight files and add more in groups of four only if allocation contention persists.
2. Pre- size files to avoid auto-growth:
The number of data files is only one part of reducing tempdb contention. It is equally important that the data files are the same size and are kept the same size, because SQL Server's proportional-fill algorithm only spreads allocations evenly across files of equal size. Pre-size the data files so that auto-growth never has to kick in.
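A sketch of pre-sizing the primary data file and adding an equally sized second file (the logical names, path, and 4 GB size are hypothetical; match them to your own environment):

```sql
-- Pre-size the existing primary tempdb data file.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB);

-- Add a second data file of the same size, with auto-growth disabled.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB,
          FILEGROWTH = 0);
```

The new file sizes take effect after the next SQL Server restart, when tempdb is recreated.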
3. Disk Performance and RAM:
Put tempdb on the fastest storage available to avoid disk I/O bottlenecks. Secondly, give SQL Server plenty of RAM so that tempdb objects can stay in memory.
4. Version Store monitoring:
The version store creates issues when the version cleanup rate falls behind the version generation rate. SQL Server only cleans up versions older than the oldest active transaction that uses the version store, so a single long-running transaction can easily cause the version store to grow very large: versions cannot be removed while an older active transaction still needs them.
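Version store growth, and the transactions holding cleanup back, can be watched with a couple of DMV queries (a sketch):

```sql
-- Current version store size in tempdb, in MB.
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Longest-running snapshot transactions, which prevent version cleanup.
SELECT TOP (5) transaction_id, session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```

If the version store keeps growing, the second query usually points at the session whose open transaction is pinning the old versions.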
Tempdb is a critical component of SQL Server and can be a major performance pain point if not managed properly. To ensure that tempdb performs well, proper configuration along with a consistent base lining and monitoring of tempdb is required.
A few best practices:
• Multiple data files pre-sized equally to avoid auto-growth.
• Use one data file per logical CPU, up to eight.
• Provide plenty of RAM for tempdb to use.
• Make sure queries are optimized to avoid tempdb spills.
• Monitor for tempdb contention.
• Monitor the version store for long running transactions.
• Disable auto growth for data files.
• Set the log file's auto-growth to a fixed value, such as 512 MB.
When these best practices are followed, performance improves, user queries run faster, latch contention stays low, and many common problems are avoided.