
I've got a stored procedure that seems to be the bottleneck in my application. The problem is that the tables it operates on are updated very frequently (roughly once a second, with tens of records each time), so indexing is not trivial.

It seems that for every X runs of the SP, there is one that takes about 1.5 seconds (whereas the others run in about 300-400ms or less). My understanding is that this is the index tree being updated.

The RDBMS is SQL Server 2008 R2.

Here is the SP:

The PK for both the archive and the live table is "pk1" (for example); it is not used here.

The FK is userid (which is the PK in Table_Users).

-- copy the user's current location row(s) into the archive
INSERT INTO Table_Archive (userid, longitude, latitude, direction, RcvDate)
       SELECT userid, longitude, latitude, direction, RcvDate 
       FROM Table_Live 
       WHERE userid = @userid

-- remove the old location from the live table
DELETE FROM Table_Live WHERE userid = @userid

-- write down the new location
INSERT INTO Table_Live (userid, longitude, latitude, direction) 
    VALUES (@userid, @lon, @lat, @dir)

-- flag the user's latest login as having a location
UPDATE Table_Users 
    SET location = 'true' 
    WHERE loginid = (SELECT MAX(loginid) AS loginid 
                     FROM Logins 
                     WHERE userid = @userid)

Any idea what could be done to make it run optimally? Ideally it should run in under 200ms.

8 Comments
  • @marc_s - all the table columns are shown in the first statement; the same columns for the Archive and the Live tables. Commented May 2, 2011 at 7:53
  • @marc_s - it's millions of rows in the Archive table and thousands in the Live table. Commented May 2, 2011 at 7:53
  • yes - but what types are those columns? Are those all the columns? Which indexes are already in place? Commented May 2, 2011 at 7:53
  • @btilly - Well, the tree should be updated after X insertions. As I see in the profiler, after some executions of the SP which take about 200ms, there is one that takes about 1.5sec. Sounds like some "special" update going on :) Commented May 2, 2011 at 8:22
  • @roman: The technical term for what you are doing is "guessing". Pursuing random guesses is a bad way to debug. The index tree is actually updated in memory after every insertion; it gets flushed to disk later in the background, and the flush should not block the rest of the server. Commented May 2, 2011 at 8:34

2 Answers


It isn't the index tree being updated: that happens as part of ACID. When the DML completes, all internal structures (indexes, constraint checks, foreign key checks, etc.) are complete too. There is no deferral of such work in SQL Server.

This is probably a statistics update plus compile time (plans are invalidated when stats are updated). A statistics update (IIRC) is triggered by 500 rows + 20% of rows changed. So if you are inserting "tens of rows per second" into a table with "thousands" of rows, statistics will be refreshed frequently.
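
One way to confirm that theory is to check when the statistics on the live table were last refreshed; a quick sketch, assuming the table name Table_Live from the question:

-- list each statistics object on Table_Live and when it was last updated;
-- if those timestamps coincide with the 1.5-sec executions, stats are the culprit
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Table_Live')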

My first thought would be to enable asynchronous statistics updates: don't disable auto-update altogether.
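
A minimal sketch of that setting (the database name YourDb is a placeholder):

-- queries keep compiling against the old statistics while the refresh
-- runs in the background, instead of waiting for it inline
ALTER DATABASE YourDb SET AUTO_UPDATE_STATISTICS_ASYNC ON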


1 Comment

ooohh... that was a pretty bad one :) I tried that - it almost doubled the query time on some occasions (couldn't say when). I have no idea how that could be...

The only obvious thing would be: are there indexes on loginid in Table_Users and on userid in Logins?

Both are used in the WHERE clauses of the UPDATE statement, and there's also a MAX() applied to loginid in the subquery.
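
A sketch of indexes that would cover those lookups (index names are made up; adjust to your schema):

-- turns SELECT MAX(loginid) ... WHERE userid = @userid into a single seek
CREATE NONCLUSTERED INDEX IX_Logins_userid_loginid
    ON Logins (userid, loginid)

-- supports the outer UPDATE ... WHERE loginid = ...
CREATE NONCLUSTERED INDEX IX_TableUsers_loginid
    ON Table_Users (loginid)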

Another thing that would help quite a bit: don't actually delete the rows inside your stored proc - that will save you a lot of time. Try to do the deletion asynchronously, separately from your procedure: e.g. write the @userid values into a "command table" and have a SQL job delete those rows, say, once an hour or so.
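
A minimal sketch of that pattern, assuming pk1 is an ever-increasing identity so the newest row per user can be identified (the queue table name is made up):

-- command table: the proc only records which users have stale rows
CREATE TABLE Table_DeleteQueue (userid INT NOT NULL)

-- inside the stored proc, replace the DELETE with a cheap insert
INSERT INTO Table_DeleteQueue (userid) VALUES (@userid)

-- SQL Agent job, e.g. hourly: keep only the newest live row per queued user
DELETE l
FROM Table_Live AS l
WHERE l.userid IN (SELECT userid FROM Table_DeleteQueue)
  AND l.pk1 < (SELECT MAX(l2.pk1)
               FROM Table_Live AS l2
               WHERE l2.userid = l.userid)

TRUNCATE TABLE Table_DeleteQueue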

7 Comments

Can't it be the B-Tree being rebuilt after several inserts, given the indices on the tables?
@roman: no - it might be the statistics being updated, or something like that; this happens occasionally
@marc_s - the idea of deleting later is problematic because in another SP I need to query Table_Live very frequently, so I try to keep only one record per @userid. Maybe it's a bad idea after all? :) I always need the last record for a given @userid.
@roman: maybe you could just flag the rows as deleted by setting a BIT field, ignore those in your other SP, and clear them out with a nightly job (a sketch of this appears after this thread).
@marc_s - that's a pretty neat idea, I like it. Though - wouldn't it hurt the super-fast reads I need on that table, because it'll grow pretty fast? (Maybe a BIT field is very fast? I don't really know.)
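
A sketch of the flagged-delete idea from the thread above, combined with a filtered index (available since SQL Server 2008) so reads stay fast even as flagged rows accumulate; column and index names are illustrative:

-- soft-delete flag instead of a physical DELETE in the hot path
ALTER TABLE Table_Live ADD IsDeleted BIT NOT NULL DEFAULT 0

-- filtered index: only non-deleted rows are indexed, so lookups stay
-- small even while flagged rows pile up between cleanup runs
CREATE NONCLUSTERED INDEX IX_TableLive_userid_live
    ON Table_Live (userid)
    WHERE IsDeleted = 0

-- in the stored proc, flag instead of delete
UPDATE Table_Live SET IsDeleted = 1 WHERE userid = @userid

-- nightly cleanup job
DELETE FROM Table_Live WHERE IsDeleted = 1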
