While I was searching for a subject for this month’s newsletter the phone rang. A client was having a problem that sounded very much like bad parameter sniffing and that is what it turned out to be. Using the methods described below we fixed his problem. In addition, the event suggested a topic for this month’s newsletter.
Like my client, every DBA or SQL developer has run into this situation: a procedure that normally runs in a few milliseconds suddenly starts taking much, much longer to complete. In many cases no other procedures are affected.
Any number of conditions can cause degraded performance in SQL Server: stale statistics, fragmented indexes, unintended recompiles, disk or network problems, and so on. But those problems normally arise gradually and affect many procedures, not just one.
When this problem occurs, the first thing I tell my client to check is whether someone has just made changes to the procedure. If not, then it is possible that their problem is bad parameter sniffing.
About Parameter Sniffing
The phrase “Bad Parameter Sniffing” implies that there exists “Good Parameter Sniffing” and that is true. But Good Parameter Sniffing is not on our agenda today so I will only say that Parameter Sniffing in general is necessary for good performance and we couldn’t turn it off even if we wanted to.
When Good Parameter Sniffing Goes Bad
When SQL Server first compiles a plan for a procedure having parameters, it "sniffs" the parameter values passed on that call, builds a plan optimized for those values, and caches both the plan and the values with it. This allows subsequent executions to reuse the plan without paying the cost of compiling again.
Good parameter sniffing is based on an assumption that the data in the parameter column is distributed somewhat evenly, meaning that any value set as the parameter will return roughly the same number of rows and run well using the same execution plan as any other possible value. In that case, the execution plan would be optimal or close to optimal for any possible value of the parameter.
However, if the column data is unevenly distributed, meaning that one parameter value might return two rows and another might return 50,000 rows, you might find that the cached plan, built for the sniffed value, does not work very well for other values. In fact, performance might be horrible for some values of the parameter.
For example, if the first execution of the procedure is optimized for a query returning only a few rows then that is the parameter value and execution plan that is cached.
A plan like this might be supported by a seek on a nonclustered index and a few lookups to return a few records.
But what happens when another value for the parameter is used in this procedure and that value will return 50,000 rows instead of 2? Well, I hate to tell you this but SQL Server will use the same plan with which it was compiled, the one optimized for returning just a few rows.
For this execution returning 50,000 rows the query engine might struggle through with a nonclustered index and 50,000 lookups. A clustered index scan on the table would be many times faster than this. This is an example of bad parameter sniffing at work.
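A minimal sketch of that scenario, using a hypothetical Orders table and procedure (all names here are illustrative, not from a real system):

```sql
-- Hypothetical skewed column: most customers have a handful of orders,
-- a few have tens of thousands.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

-- First call sniffs a selective value: the cached plan is a
-- nonclustered index seek plus a few key lookups.
EXEC dbo.GetOrdersByCustomer @CustomerID = 101;  -- a customer with 2 orders

-- A later call reuses that same plan for a value returning 50,000 rows,
-- turning into 50,000 lookups where a clustered index scan would win.
EXEC dbo.GetOrdersByCustomer @CustomerID = 7;    -- a customer with 50,000 orders
```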
Another way that bad parameter sniffing can hurt you is when your procedure is recompiled with a different parameter value. Many things can trigger recompilation: stale statistics, an index on the parameter column being created, altered or dropped, and so on.
What Can I Do About This Stupid Parameter Smelling?
We are getting to the good part of the story. There are a lot of things you can do about it, depending on what other constraints you have.
The first order of business is to fix the immediate problem. If you have a maintenance window of adequate size, the first thing you might want to do is rebuild indexes; the reindex task also updates statistics with a full scan. This might fix the problem. If not, clear the plan cache with the DBCC FREEPROCCACHE command, which forces every plan to be recompiled on its next execution.
CAUTION: BOTH THE REINDEX AND THE CACHE-CLEARING COMMANDS CAN CAUSE SEVERE PERFORMANCE ISSUES WHILE RUNNING. RUN THESE IN A MAINTENANCE WINDOW.
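The commands involved look something like this (object names are illustrative; run these only in a maintenance window):

```sql
-- Rebuilding an index also updates its statistics with a full scan.
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- For statistics not covered by a rebuilt index:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Clearing the plan cache forces every plan to be recompiled
-- on its next execution. This hits the whole instance.
DBCC FREEPROCCACHE;

-- A more surgical alternative: mark just the one procedure
-- for recompilation on its next execution.
EXEC sp_recompile N'dbo.GetOrdersByCustomer';
```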
After you have the situation under control, you have several options to help avoid a recurrence.
In many cases, forcing a recompile by adding WITH RECOMPILE to the CREATE PROC statement is a good option. That compiles the procedure with the current parameter values on every execution, and no plan is cached.
The downside is that compilations use a lot of CPU. If this procedure is executed 10 times per second throughout the day, then recompilation is probably not the best route to take. If it is executed twice a day, then you can consider recompilation.
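Here is the procedure-level form, again with illustrative names:

```sql
-- WITH RECOMPILE in the header: the whole procedure is compiled
-- for the actual parameter values on every call, and the resulting
-- plan is never cached.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```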
Variations on a Theme
You have a couple of options for forcing recompiles. Putting WITH RECOMPILE in the CREATE PROC header affects the procedure as a whole: the entire procedure recompiles on every execution. Alternatively, you can put OPTION (RECOMPILE) on a single statement in the procedure, and that statement is the only thing that will recompile.
If you know where your problem lies, this is a viable option that would presumably be faster and use less CPU.
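The statement-level variant might look like this (names are illustrative):

```sql
-- Only the problem statement pays the compilation cost on each
-- execution; the rest of the procedure keeps its cached plan.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (RECOMPILE);
END;
```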
Add Local Variable
This is my favorite. If you declare a local variable at the top of your procedure code and initialize it with the current parameter value, then the rest of your code can use the local variable rather than the parameter itself. The optimizer cannot sniff the value of a local variable, so it builds the plan from the column's average density rather than from whatever value happened to arrive first, which usually gives you a more stable execution plan.
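The technique looks like this (names are illustrative):

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    -- Copy the parameter into a local variable. The optimizer cannot
    -- sniff a local variable's value, so the plan is built for the
    -- column's average selectivity instead of one specific value.
    DECLARE @LocalCustomerID INT = @CustomerID;

    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;
END;
```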
OPTIMIZE FOR
Using your knowledge of the data, you can use the OPTIMIZE FOR hint to pick a value in your column data for which the resulting execution plan is OK for most values of the parameter. It may not be optimal for all values, but nothing is perfect. If you can live with the results of this hint, so be it. Here is the syntax:
OPTION (OPTIMIZE FOR (@UserID=835));
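In context, the hint goes on the statement inside the procedure. A sketch, with illustrative names apart from the @UserID parameter above:

```sql
-- The plan is always built as if @UserID were 835, regardless of
-- which value is actually passed in at execution time.
CREATE PROCEDURE dbo.GetUserOrders
    @UserID INT
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE UserID = @UserID
    OPTION (OPTIMIZE FOR (@UserID = 835));
END;
```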
OPTIMIZE FOR UNKNOWN
I have never used this hint. Reading the description in BOL, I don't see its advantages over OPTIMIZE FOR.
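For completeness, the syntax is a small variation on the previous hint. As I understand the BOL description, it tells the optimizer to ignore the sniffed value and use average density statistics, much like the local variable technique above:

```sql
-- Optimize for an unspecified value: average density statistics
-- are used instead of the sniffed parameter value.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE UserID = @UserID
OPTION (OPTIMIZE FOR (@UserID UNKNOWN));
```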