Sometimes you only need the answer to a question or two. If 15 minutes of our time can help you solve a SQL Server problem, we are happy to help. No charge, no strings attached.
Issue 6, Number 3, March 2016
In my work as a SQL Server consultant, I have reviewed hundreds of SQL Server maintenance plans. I have seen a lot of good ones, but I still see the same old problems in a significant number of them. In some cases, these problems are quite dangerous. Here they are as I see them:
When I look at a client's maintenance jobs, I often find that the Rebuild Index task is followed by an Update Statistics task. This is a waste of time because the built-in Rebuild Index task also updates statistics with a full scan. At the end of its run, your statistics should be in good shape.
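To make the overlap concrete, here is roughly what the two tasks boil down to for a single index. The table and index names are illustrative assumptions; the point is that the second statement repeats work the first has already done (note that a rebuild refreshes only the index's own statistics, not column statistics created separately).

```sql
-- Rebuilding an index recreates it from scratch, and as a side effect
-- its statistics are refreshed with the equivalent of a full scan.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Running this afterward is redundant for that index's statistics;
-- the rebuild above has already updated them with a full scan:
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;
```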
Here is a quiz for you:
Question: How often should you shrink your database?
Answer: Never, if you can possibly avoid it.
To explain: the DBCC SHRINKDATABASE command will shrink both the data files and the log file. When data files are shrunk, your indexes will be completely fragmented. You will have to run a reindex job before your database performs well again.
If you still think you need to shrink your log file, use the DBCC SHRINKFILE command instead, specifying the log file so that only the log is shrunk. Here is the syntax:
DBCC SHRINKFILE (2, 5) WITH NO_INFOMSGS
As written, this command will attempt to shrink the log to 5 MB, but it will not shrink the file below what is necessary no matter what size you specify. The log file ID is usually 2, though it can differ if the database has more than one log file. You might have to run this command a couple of times.
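Rather than assuming the file ID, you can look it up. A quick sketch, run in the context of the database you want to shrink (the logical file name in the comment is an assumption):

```sql
-- List the files of the current database; type_desc = 'LOG' marks
-- the log file, and file_id is the number to pass to DBCC SHRINKFILE.
SELECT file_id, name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- DBCC SHRINKFILE also accepts the logical file name, for example:
-- DBCC SHRINKFILE (MyDatabase_log, 5) WITH NO_INFOMSGS;
```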
This will keep your indexes from being trashed. However, if you still need to shrink the log regularly to avoid runaway file growth, something else is wrong with your SQL instance. Most often, your database is in the FULL recovery model and you are not backing up the log.
There are other things that can cause a continuously growing log file. They are a bit too complex to explain in this short article, but here is a URL to an article that explains them all: http://www.sqlconsulting.com/news1102.htm
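If the cause is a FULL recovery model with no log backups, the fix is simply to start backing up the log on a schedule. A minimal sketch, assuming a database named MyDatabase and a backup path of D:\Backups:

```sql
-- First, confirm the recovery model and what is holding the log.
-- LOG_BACKUP in log_reuse_wait_desc means a log backup is overdue.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- A regular log backup lets SQL Server reuse log space internally,
-- which stops the runaway growth without any shrinking:
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';
```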
I often see that the Check Database Integrity task is left out of the maintenance plan. This is a very dangerous mistake. The second most dangerous mistake is running the check but failing to look at its output.
SQL Server is not going to wake you up and tell you there is a problem. You can look in the SQL error log or set up an alert to tell you, but there is nothing automatic built into the Check Database Integrity task. You have to make it happen.
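One way to make it happen is to run the check yourself and pair it with a SQL Server Agent alert on the severity levels corruption raises. A sketch, with the database and alert names as assumptions; the alert still needs an operator attached (via sp_add_notification) before it will actually notify anyone:

```sql
-- Run the integrity check and actually read what it returns:
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Alert on severity 24 errors, which page corruption typically raises:
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 24 - fatal hardware error',
    @severity = 24,
    @include_event_description_in = 1;
```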
To further complicate matters, some levels of corruption still allow the database to function for quite some time before you hit a corrupt page and get your first clue that something is wrong.
Normally, in dealing with corruption your best option is to recover from the last known-good backup. (For SQL Server 2014 memory-optimized tables, it is your only option.)
If the last known-good backup was taken two weeks ago, for example, then restoring it will lose your two most recent weeks of data. That may be an unacceptable amount of data loss. Your next best option is to run DBCC CHECKDB and hope that it tells you the corruption can be removed without data loss. (Again, this cannot be done where memory-optimized tables are concerned.)
If the CHECKDB message says that you can repair the corruption without losing data then you are a very lucky DBA.
If your database cannot be repaired without data loss, then your only option is to run CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. But there is a huge risk in running this command.
Paul Randal, who wrote the DBCC commands when he was at Microsoft, states flatly that this is your very last option. Do not run it unless all else has failed. He compares this option to "taking a chainsaw to your database". It might get rid of the corrupt data but it might also render your database unusable.
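For completeness, here is roughly what the last-resort repair looks like. Treat this as a sketch, not a recommendation; the database name is an assumption, and the repair requires single-user mode:

```sql
-- Last resort only, after every restore option has been exhausted.
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;

ALTER DATABASE MyDatabase SET MULTI_USER;
```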
Your best bet is to never get into this position. One way or another, you need to check the DBCC results each time the task runs.
While we are on the subject of data corruption, make sure the PAGE_VERIFY option of your database is set to CHECKSUM, which provides better protection against corruption than TORN_PAGE_DETECTION.
CHECKSUM has been the default PAGE_VERIFY setting since SQL Server 2005, but if your database was created in SQL Server 2000 or earlier and subsequently upgraded, you may still see the obsolete page verification option, because it is not changed automatically when you upgrade your version of SQL Server. You have to go in and change it manually.
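Checking and fixing the setting takes only a minute. A quick sketch (the database name is an assumption); note that a checksum is only added to a page when it is next written, so the protection phases in as data is modified:

```sql
-- Find databases still using a pre-2005 page verification setting:
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> N'CHECKSUM';

-- Switch an upgraded database over to CHECKSUM:
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;
```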
If this article has been useful to you, how about tweeting it to your Twitter followers?
It takes a lot of time every month to write this newsletter. Unlike other SQL newsletters, this one does not accept advertising, nor does it disclose your personal information to third-party SQL vendors. My only reward is to see my articles read by more SQL people.
As always, if you have questions or comments, give me a call at 503.914.6442 or send me an email.
Kurt Survance, SQL Consulting, Inc.