
I have a table that is loaded by an insert every night and then queried as a reporting table.

The stored proc that serves the query builds a dynamic SQL string and uses paging and two temporary tables.

It works well for the first week.

Starting from the second week, its performance drops sharply (it takes 3.5 minutes to return).

I captured the output string of the dynamic SQL and ran it directly; it is dramatically faster (2 seconds), so I guess the problem is related to compilation.

Then I did some optimization, like changing count(*) to count(event_id). Performance was immediately back, but the next morning it was down again.

Then I changed the select into to an explicitly declared temp table. Performance was immediately back, but the next morning it was down again.

Then I changed the explicitly declared temp table back to select into. Again, performance was immediately back, but the next morning it was down again.
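For reference, these are the two temp-table styles I kept toggling between (a sketch; table1 and event_id as in the proc below, the other columns trimmed):

    -- variant A: select into infers the schema at run time
    select page_count = count(event_id) over(), table1.*
    into #output1
    from table1

    -- variant B: declare the schema explicitly, then insert
    create table #output1_b (page_count int, event_id int)
    insert into #output1_b (page_count, event_id)
    select count(event_id) over(), event_id
    from table1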

So I guess it has nothing to do with the code changes themselves; it seems that every time the SP gets compiled, performance improves, but for less than 24 hours.
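If compilation really is the trigger, then marking the proc for recompilation should buy the same temporary speedup as each code edit did, without changing any code (a sketch):

    -- flags the proc so its cached plan is discarded and rebuilt on the next call
    exec sp_recompile N'dbo.fs_1_usp_query';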

I started thinking about the nightly insertion, which also runs on a 24-hour cycle, and then I found the with (nolock) hint, reasoning that the insert could have been locking up Table1.

After adding nolock, the stored procedure ran well for a week, after which we got the same problem again, except that this time only the web page calling the SP is slow; running the SP from the database directly is fast…
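For what it's worth, slow from the web page but fast from the database usually points to the two connections compiling under different SET options (commonly ARITHABORT), so each one caches and reuses its own plan. A diagnostic sketch, using only the standard session DMV:

    -- compare the session settings of the web app and of SSMS;
    -- a mismatch means each connection compiles its own plan
    select session_id, program_name, arithabort, ansi_nulls
    from sys.dm_exec_sessions
    where is_user_process = 1;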

Here is the dynamic SQL stored proc:

CREATE PROCEDURE [dbo].[fs_1_usp_query]
 @paramerter_client_id   int = null,   
 @paramerter_event_type_id  int = null,   
 @paramerter_start_date         datetime = null,   
 @paramerter_end_date           datetime = null,   
 @paramerter_page_index   int = 1,
 @paramerter_sort_expression varchar(100),  -- referenced in the order by below
 @paramerter_sort_direction varchar(20),   
 @paramerter_page_count   int = 30   
AS   
BEGIN       
 SET ARITHABORT ON;   
 SET NOCOUNT ON;   

 declare @sql nvarchar(max)   

 set @sql = '
    create table #output2
    (
        page_index int,
        rownumber  int,
        page_count int,
        client_id  int,
        date       datetime
    )

    --insert into #output1
    select
        page_count = count(event_id) over(),
        table1.*
    into #output1'
  set @sql = @sql + '   
   from   
    table1 table1 with (nolock)   
   inner join   
    table2 table2 with (nolock)   
   on   
    ............................
   inner join   
    table3 table3 with (nolock)   
   on   
    ............................
   inner join   
    table4 table4
   on   
    ............................
   where   
    ............................

 if (@paramerter_client_id is not null)   
  set @sql = @sql + ' and table2.client_id = @paramerter_client_id'   

 if (@paramerter_event_type_id is not null)   
  set @sql = @sql + ' and table2.event_type_id = @paramerter_event_type_id'   

 if (@paramerter_start_date is not null)   
  set @sql = @sql + ' and table2.created_date >= @paramerter_start_date'   
 if (@paramerter_end_date is not null)   
        set @sql = @sql + ' and table2.created_date <= @paramerter_end_date'   

 declare @lv_begin_index int   
 declare @lv_end_index int   
 set @lv_begin_index = ((@paramerter_page_index - 1) * @paramerter_page_count) + 1     
 set @lv_end_index = @lv_begin_index + @paramerter_page_count    

 set @sql = @sql +   ' 

 UPDATE #output1
    SET osat_rating = ''-''   
    WHERE LEFT( osat_rating , 1 ) = ''-''         

 insert into #output2 
 select    
  page_index = ' + convert(varchar, @paramerter_page_index) + ',       
     row_number() over (order by [' + @paramerter_sort_expression + '] '+ @paramerter_sort_direction + ') as rownumber, 
     #output1.* 
 from #output1

 select #output2.* 
 from #output2 
 where   
  rownumber >= ' + convert(varchar, @lv_begin_index) + '   
 and   
  rownumber < ' + convert(varchar, @lv_end_index)

 set @sql = @sql + '
 drop table #output1
 drop table #output2'

 -- execute the assembled batch; sp_executesql supplies the typed parameters
 -- that the dynamic string references by name
 exec sp_executesql @sql,
  N'@paramerter_client_id int, @paramerter_event_type_id int,
    @paramerter_start_date datetime, @paramerter_end_date datetime',
  @paramerter_client_id, @paramerter_event_type_id,
  @paramerter_start_date, @paramerter_end_date
END

Here's a snapshot of the static SQL, as an attempt to follow your suggestions:

Where
    Column3 = Coalesce(@parameter3, Column3)
   and
    (@start_date is null or Column_created_date >= @start_date)

   and
    (@param_1 is null 
            or 
                (@param_1 not in ('ConstantString1', 'ConstantString2') and Column1 = @param_1)
        or 
            (@param_1 = 'ConstantString1' and Column1 like 'ConstantString1%')
        or 
            (@param_1 = 'ConstantString2' and (Column1 is null or Column1 = ''))
)
If (@parameter_sort_direction = 'DESC')
 Begin
     insert into #temp_table_result
     select     
      page_index = convert(varchar, @parameter_page_index),        
      row_number() over 
        (
            order by CASE 
                    WHEN @parameter_sort_expression = 'Column1' THEN Column1 
                    WHEN @parameter_sort_expression = 'Column2' THEN Column2 
                    WHEN @parameter_sort_expression = 'Column3' THEN Column3 
                    WHEN @parameter_sort_expression = 'Column4' THEN Column4 
                    WHEN @parameter_sort_expression = 'Column5' THEN Column5 
                    WHEN @parameter_sort_expression = 'Column6' THEN Column6 
                    WHEN @parameter_sort_expression = 'Column7' THEN Column7
                    WHEN @parameter_sort_expression = 'Column8' THEN Column8 
                END desc--CASE 
                --      WHEN @parameter_sort_direction = 'ASC' THEN asc 
                --      WHEN @parameter_sort_expression = 'DESC' THEN     desc              
                --END
        ) as rownumber,  
      #temp_table_staging.*  
     from #temp_table_staging 
 END
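One note on the catch-all WHERE above: a single cached plan has to cover every parameter combination, so a common companion to this pattern is a statement-level recompile. A minimal sketch, reusing the draft's column names (the parameter types are assumptions):

    -- with a per-call recompile the optimizer can prune the
    -- "@param is null or ..." branches for the actual parameter values
    declare @parameter3 int = null, @start_date datetime = null;
    select t.*
    from table1 t
    where t.Column3 = coalesce(@parameter3, t.Column3)
      and (@start_date is null or t.Column_created_date >= @start_date)
    option (recompile);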
  • Can you define "fast" and "slow" for us? I have procs that take days to run, and there are some that would consider .5s slow. Commented Jan 30, 2012 at 13:08
  • And stay away from nolock in production code - it can cause you to miss rows that are already committed. Commented Jan 30, 2012 at 13:37
  • Thanks for your question. With the same dataset, it takes 2 seconds when it's working correctly and around 3.5 minutes when it's not. Commented Jan 30, 2012 at 14:54
  • For the nolock thing, yes, we are tolerating dirty reads to ensure the fastest data retrieval... Commented Jan 30, 2012 at 14:54
  • Nolock can do more than allow dirty reads. It can cause double counting and query exceptions. Look here: sqlservercentral.com/blogs/sqltact/2012/01/21/… Commented Jan 30, 2012 at 15:47

3 Answers


It is likely that the statistics the query uses to build a plan are gradually going stale as the nightly inserts accumulate.

Consider updating the statistics every 6 hours on the tables affected by the query - test this out in a dev environment if possible.
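For example, a minimal sketch of such a job step (the table names are taken from the question; the schedule is up to you):

    -- refresh statistics on the reporting tables after the nightly load
    update statistics dbo.table1 with fullscan;
    update statistics dbo.table2 with fullscan;
    -- or, coarser but simpler, refresh whatever has changed:
    exec sp_updatestats;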



I suggest you try the WITH RECOMPILE option to refresh the execution plan every time the sp starts.

And there are some more techniques for optimizing the execution plan in such cases: http://msdn.microsoft.com/en-us/library/ms181714.aspx

for example:

OPTIMIZE FOR

or

PARAMETERIZATION
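A couple of hedged sketches of those options (table2 and client_id are from the question; the literal values are placeholders):

    -- WITH RECOMPILE on the procedure header means no plan is cached at all:
    --   alter procedure dbo.fs_1_usp_query ... with recompile as ...

    -- OPTIMIZE FOR compiles the plan for a chosen representative value
    -- instead of whatever value happens to be sniffed on the first call
    declare @client_id int = 42;
    select *
    from table2
    where client_id = @client_id
    option (optimize for (@client_id = 1));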

Hope that will help.

1 Comment

I just added some example code, only to find that the formatting is messed up, so please ignore this comment.

I suggest you avoid dynamic SQL completely; you can replace code like this:

     if (@paramerter_client_id is not null)   
         set @sql = @sql + ' and table2.client_id = @paramerter_client_id'  

by

and (@paramerter_client_id IS NULL OR table2.client_id = @paramerter_client_id)

Of course, don't forget to create an index on table2.client_id!
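Something like this (the index name is just an example):

    create nonclustered index IX_table2_client_id
        on dbo.table2 (client_id);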

3 Comments

Thanks for the tips :) Yes, regarding plan caching, I am taking out the dynamic SQL and moving to static SQL; hopefully within the next two weeks I will get back here with an update on whether this works out.
Just added the draft changes to the question body; if you have time, please take a look and let me know if anything is wrong with it, thanks!
Hello, your SQL sounds better :) If your stored procedure does not return too many rows, you can use in-memory temporary tables (odetocode.com/code/365.aspx) in order to avoid the use of the tempdb database. To increase the speed of your application, you could also avoid sorting the data in SQL Server and do it on the application side. If you use the .NET platform, you can for example implement the IComparer interface.
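A sketch of the table-variable alternative mentioned in the last comment, with the columns trimmed to the ones shown in the question (note that table variables are still backed by tempdb, but they avoid the recompiles tied to temp-table schema changes):

    declare @output2 table (
        page_index int,
        rownumber  int,
        page_count int,
        client_id  int,
        [date]     datetime
    );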
