

Using SQL Server in the Enterprise - Archives


Log File Fragmentation

A hidden cause of poor performance

Issue 2 Number 5 May 2010

Internal fragmentation in a database log file is a frequently overlooked cause of poor performance in a SQL Server database. It is simple to find out whether you have this problem. Run this command in a query window in the database you want to check:

DBCC LOGINFO;
Each row returned by this command represents a Virtual Log File, or VLF. VLFs are internal structures that hold transaction log records. Too many VLFs can have a serious effect on database performance. If this statement returns 20 rows or fewer, you are OK. If it returns between 21 and 50, fragmentation may be having some effect on performance. If it returns more than 50 rows, it is likely to be affecting performance significantly.
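If you would rather get a single count than scroll through the rows, you can capture the DBCC LOGINFO output in a temporary table. This is a sketch only: the column list below matches SQL Server 2005 through 2008 R2; later versions add columns, so adjust it for your build.

    -- Capture DBCC LOGINFO output and count the rows (one row per VLF)
    CREATE TABLE #loginfo
    ( FileId INT, FileSize BIGINT, StartOffset BIGINT,
      FSeqNo INT, Status INT, Parity TINYINT, CreateLSN NUMERIC(25,0) );

    INSERT INTO #loginfo
    EXEC ('DBCC LOGINFO');

    SELECT COUNT(*) AS VLFCount FROM #loginfo;

    DROP TABLE #loginfo;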

This problem is often the result of accepting the default settings for initial log file size and growth increment when the database is created. If you find that you have this problem, here are the steps to fix it.

The TSQL code that executes each step is at the end of this article, although most of this can be done through the Management Studio interface if you prefer. All instructions assume SQL Server 2005 or later but should not be significantly different for SQL Server 2000.

1. If the database is in full recovery mode, back up the log. If it is in simple mode, checkpoint the log by issuing the command 'CHECKPOINT' in the database. This will clear completed transactions from the log and allow you to shrink the file.

2. Shrink the log file to a very small size. Shrinkfile will not shrink the log below the minimum size required to hold existing log records, no matter what target size you specify. You may have to repeat these first two steps a couple of times to get the maximum shrinkage. You do not have to take the database offline to shrink the log, although the operation may impact performance for users while it runs.

3. Set the initial size of the log file and the growth increment to reasonable sizes. A discussion of what "reasonable" means follows below. You can change these settings by opening the database properties in Management Studio, selecting 'Files', and changing the values there, or you can run the TSQL code below.

What is a Reasonable Size?

There are many ways to calculate the proper size for your log file when you resize it in step 3, but I have never found them to be more accurate than the educated guess of someone who knows the system. You can get a clue from the current size of the log, but it may have been unnecessarily inflated. I think you are safe in the range of 15% - 20% of your database size. Properly sized autogrowth will make the final adjustments for you.


Finding the optimum file growth increment is a bit trickier and requires some knowledge of how SQL Server performs an autogrowth.

When SQL Server grows your log, it does so by adding VLFs. The number and size of these VLFs depend on the size of the growth increment. For a growth increment of less than 64 MB, SQL Server adds 4 VLFs. For an increment from 64 MB up to 1 GB, it adds 8 VLFs. Any increment larger than 1 GB adds 16 VLFs. In all cases the space allocated for the increment is distributed evenly across the new VLFs. This algorithm also applies to the initial creation of the log file and to manual resizing of the log file.
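You can check the arithmetic with a throwaway calculation. This sketch simply encodes the growth rules described above for a hypothetical increment; change @growthMB to test your own setting.

    DECLARE @growthMB INT;
    SET @growthMB = 512; -- hypothetical growth increment in MB

    SELECT @growthMB AS GrowthMB,
           CASE WHEN @growthMB < 64    THEN 4
                WHEN @growthMB <= 1024 THEN 8
                ELSE 16 END AS VLFsAdded,
           @growthMB /
           CASE WHEN @growthMB < 64    THEN 4
                WHEN @growthMB <= 1024 THEN 8
                ELSE 16 END AS VLFSizeMB;

    -- A 512 MB increment falls in the middle band: 8 VLFs of 64 MB each.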

It is easy to see that autogrowth increments above 2 GB will result in many fewer VLFs per gigabyte of log. In fact the larger the increment, the better the ratio becomes. However, you can easily carry this too far.

Too small an increment results in a higher number of VLFs and higher fragmentation. Too large an increment results in fewer VLFs, but they can be so large that they affect performance when they are allocated and when the log is backed up.


As a rule of thumb, the size of each VLF should not exceed 512 MB. Note that we are not referring to the size of the increment but to the size of each VLF within the increment. For example, let's assume an increment of 1 GB. From the figures above we know that this creates 8 VLFs. Simple division tells us that each VLF will be 128 MB in size.

I prefer fixed-size growth increments to percentages because they are more predictable. A percentage growth increment usually starts out too small and ends up too large. For example, imagine a case where the database creator accepted the default values for the log file's initial size (2 MB) and growth increment (10%).

The first growth operation would add about 200 KB of space and 4 more VLFs; the second would add about 220 KB and another 4 VLFs. Before the log reaches a usable size, it will already be badly fragmented. After it grows to several gigabytes, the increments could become large enough to cause performance problems when they are allocated and when the log is backed up. Then you would have the worst of both extremes.
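To see how quickly the default settings fragment a log, you can simulate the first few growths. The KB figures in the paragraph above are rounded; this sketch prints the exact binary sizes, assuming 4 VLFs per growth since each increment stays well under 64 MB at first.

    -- Simulate the first five 10% autogrowths of a 2 MB log
    DECLARE @sizeKB DECIMAL(18,2), @i INT;
    SELECT @sizeKB = 2048, @i = 1; -- default 2 MB initial size

    WHILE @i <= 5
    BEGIN
        PRINT 'Growth ' + CAST(@i AS VARCHAR(2)) + ' adds '
            + CAST(@sizeKB * 0.10 AS VARCHAR(20)) + ' KB and 4 more VLFs';
        SELECT @sizeKB = @sizeKB * 1.10, @i = @i + 1;
    END;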

Of course rules of thumb only apply in the mythical ‘typical’ installation. Very small, very large and very unusual databases will require individual consideration.

Sample Code

You will need to run these statements separately in the database you want to fix. It is not a bad idea to run 'DBCC LOGINFO' between steps to follow what is happening to the log file. Change all the database references in the code below to the name of your database.

As always, back up your database before making substantial changes like this. This code should only be executed after evaluation by a knowledgeable person in your organization. Because of the variety of situations in which this code could be used, SQL Consulting, Inc. will not be responsible for any problems that arise from the use of this code.

    -- Use this command if your database is in SIMPLE recovery mode:
    CHECKPOINT;

    -- otherwise back up the log:
    BACKUP LOG MYDATABASE
    TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\MyLog.bak';

    /* The following statement shrinks the log file to 5 MB, or as close to
    that as it can get. (The first log file is always file number 2, and you
    shouldn't have another log file. If you prefer, you can substitute the
    logical name of the log file for the first parameter.) */
    DBCC SHRINKFILE (2, 5);

    /* At this point you might want to run DBCC LOGINFO again
    to see the number of VLFs left in the file. If there are
    more than a handful, perform the log backup
    and shrinkfile again to get maximum shrinkage. */

    /* The following statement changes the size of the log file and
    the size of the growth increment. Set the respective sizes
    before running this code. */
    USE master;
    GO
    ALTER DATABASE MYDATABASE
    MODIFY FILE
    ( NAME = MYDATABASE_log
    , SIZE = xxMB
    , FILEGROWTH = xxMB ); -- set autogrowth to an appropriate number

At the end of this process your log file should be reorganized with many fewer VLFs, and it should stay that way if you have selected an optimal file size and autogrowth increment.

Afterword by the Author

I hope this information has been helpful to you. I would appreciate any feedback about this article, as well as recommendations for future articles you would like to see. If you enjoyed reading this article, you might like to receive this monthly newsletter via Twitter. If so, follow me.

As always, if you have questions or comments give me a call toll-free at 1.877.201.3428 or send me email.

Kurt Survance, SQL Consulting, Inc.
