OK, I finally got to the bottom of it myself. The two query plans differ in a small detail which I missed at first: the slow one uses a Nested Loops operator to join the two subqueries together.
That explains the high actual row count on the Index Scan operator: the scan under the loop is executed once per outer row, so its actual row count is simply the number of rows of input a multiplied by the number of rows of input b. I still don't know why the optimizer decides to use the nested loops instead of a hash match, which runs 1000 times faster in this case, but I could work around my problem by creating a new index, so that the engine does an index seek instead of an index scan under the nested loops.
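For anyone hitting the same issue, here is a minimal sketch of the two fixes under discussion; the ED_States table and the column names are made up, since the real schema isn't in the thread:

    -- 1) Cover the join column so the inner side of the nested loops
    --    becomes a seek instead of a scan (hypothetical column name):
    CREATE NONCLUSTERED INDEX IX_ED_Transitions_FromState
        ON dbo.ED_Transitions (FromState);

    -- 2) Or force the join type to confirm the hash match is faster
    --    (hypothetical query shape):
    SELECT t.*
    FROM dbo.ED_Transitions AS t
    JOIN dbo.ED_States AS s ON s.StateId = t.FromState
    OPTION (HASH JOIN);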
Glad you sorted it, Jan, and good spotting! Next time, maybe try outputting the plans in Text mode, and then using a textual diff or merge tool to compare them. You'll spot the difference(s) faster that way.
– Rick Oct 24 '08 at 3:33.
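For reference, text-mode plans (which diff cleanly) can be captured like this; SET SHOWPLAN_TEXT returns the estimated plan without executing the batch, and the proc call is a hypothetical stand-in:

    SET SHOWPLAN_TEXT ON;
    GO
    EXEC dbo.GetTransitions @FromState = 1;  -- hypothetical call
    GO
    SET SHOWPLAN_TEXT OFF;
    GO

Save each plan's output to a file and compare them with any diff tool.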
It sounds like a case of Parameter Sniffing. Here's an excellent explanation along with possible solutions: I Smell a Parameter! Here's another StackOverflow thread that addresses it: Parameter Sniffing (or Spoofing) in SQL Server.
Yes, I'm aware of parameter sniffing. But I don't think my problem is parameter sniffing, because I get exactly the same execution plan whether I use the parameters directly or copy the parameter values to local variables! – Jan Oct 22 '08 at 12:28.
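For context, this is the local-variable workaround Jan describes, as a sketch with hypothetical names; masking the parameter means the optimizer builds the plan from average-density statistics instead of the sniffed value:

    CREATE PROCEDURE dbo.GetTransitions (@FromState INT)  -- hypothetical proc
    AS
    BEGIN
        -- Copy the parameter to a local variable so the compiled plan
        -- does not depend on the value passed on first execution.
        DECLARE @LocalFromState INT;
        SET @LocalFromState = @FromState;

        SELECT *
        FROM dbo.ED_Transitions
        WHERE FromState = @LocalFromState;  -- hypothetical column
    END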
Make sure to click Query > Include Actual Execution Plan, and then run each query. Compare those plans and see what the differences are.
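The actual plan, including the per-operator Rows and Executes counts at issue in this question, can also be captured as text (the proc call is a hypothetical stand-in):

    SET STATISTICS PROFILE ON;
    GO
    EXEC dbo.GetTransitions @FromState = 1;  -- hypothetical call
    GO
    SET STATISTICS PROFILE OFF;
    GO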
I'm comparing the actual plans. They are definitely the same. And it's not a copy/paste version that runs quickly; it's the same proc, just using local variable copies of the parameter values.
I will edit my post to clarify. – Jan Oct 22 '08 at 14:20.
To me it still sounds as if the statistics were incorrect. Rebuilding the indexes does not necessarily update them. Have you already tried an explicit UPDATE STATISTICS for the affected tables?
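For completeness, an explicit full-scan statistics update looks like this; WITH FULLSCAN rescans every row instead of sampling:

    UPDATE STATISTICS dbo.ED_Transitions WITH FULLSCAN;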
Yes, I already executed UPDATE STATISTICS on the table. As I wrote, the output from DBCC SHOW_STATISTICS shows the right row count. I just wonder where the heck SQL Server gets this huge value for the actual row count.
There are no deletes on that table, so row count was never that high! – Jan Oct 22 '08 at 11:43.
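The check Jan mentions, for reference; the statistics name is hypothetical:

    DBCC SHOW_STATISTICS ('dbo.ED_Transitions', IX_ED_Transitions_FromState);
    -- The header row of the output shows Rows and Rows Sampled.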
I believe in SQL 2000 the engine used to use that sort of metadata when building execution plans. We used to have to run DBCC UPDATEUSAGE weekly to update the metadata on some of the rapidly changing tables, as SQL Server was choosing the wrong indexes due to incorrect row count data. You're running SQL 2005, and BOL says that in 2005 you shouldn't have to run UPDATEUSAGE anymore, but since you're in 2000 compatibility mode you might find it is still required.
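For reference, the maintenance command mentioned above looks like this (0 means the current database; COUNT_ROWS recounts the rows rather than just the pages):

    DBCC UPDATEUSAGE (0, 'dbo.ED_Transitions') WITH COUNT_ROWS;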
sp_spaceused reports the correct numbers: ED_Transitions: rows = 1145711, data = 160048 KB, index_size = 106048 KB – Jan Oct 22 '08 at 22:23.