
I have a scenario where I query my SQL Server DB, obtain the results, and, based on those results, make subsequent queries to the DB. Here is how I've structured my code:

What I'm interested in knowing is: is this the correct way to deal with such scenarios?

Should I be doing something else instead? For example, make the first call to the DB, load all the results into a dictionary, and then use the results stored in the dictionary to make the next calls?

(If you need context on what my code does: I want to add a uniqueness constraint and index over columns ColA, ColB, and ColC on MyTable, but I can't apply the constraint directly because there are existing violations over these columns. So I first resolve the violations by changing the value of ColC for the offending rows, and once all violations are fixed, I add the constraint.)

void Main() {

    using (SqlConnection connection = new SqlConnection(@"Data Source=localhost; Initial Catalog=mydatabase; Integrated Security=True; MultipleActiveResultSets=true"))
    {
        connection.Open();

        //Check if the index exists over columns (ColA, ColB, ColC) without the uniqueness constraint
        SqlCommand myCommand = new SqlCommand(@"SELECT 1 FROM sys.indexes 
                                WHERE name = 'UQ_ColA_ColB_ColC' 
                                AND object_id = OBJECT_ID('MyTable')
                                AND is_unique = 0", connection);

        bool needsFixing;
        using (SqlDataReader checkReader = myCommand.ExecuteReader())
        {
            needsFixing = checkReader.HasRows;
        }

        if (needsFixing)
        {
            try {

                //Get the distinct (ColA, ColB, ColC) tuples that exist
                myCommand = new SqlCommand(@"SELECT COUNT(*) AS count, ColA, ColB, ColC
                                             FROM [apimanagement.local].[dbo].[MyTable]
                                             GROUP BY ColA, ColB, ColC", connection);

                using (SqlDataReader myReader = myCommand.ExecuteReader())
                {
                    while (myReader.Read()) {

                        //For each distinct tuple, get all the rows that share it
                        SqlCommand myCommand2 = new SqlCommand(@"SELECT Id, ColA, ColB, ColC FROM MyTable
                        WHERE ColA=@ColA AND ColB=@ColB AND ColC=@ColC", connection);
                        myCommand2.Parameters.AddWithValue("@ColA", myReader["ColA"].ToString());
                        myCommand2.Parameters.AddWithValue("@ColB", myReader["ColB"].ToString());
                        myCommand2.Parameters.AddWithValue("@ColC", myReader["ColC"].ToString());

                        int index = 2;
                        using (SqlDataReader myReader2 = myCommand2.ExecuteReader())
                        {
                            myReader2.Read(); //Keep the first row as-is

                            //Any further rows violate the uniqueness constraint over (ColA, ColB, ColC);
                            //fix these violations by appending indices to the ColC value
                            while (myReader2.Read()) {
                                SqlCommand myCommand3 = new SqlCommand(@"UPDATE MyTable 
                                                                        SET ColC=@NewColC
                                                                        WHERE Id=@Id", connection);

                                myCommand3.Parameters.AddWithValue("@Id", myReader2["Id"].ToString());
                                SqlParameter newColC = myCommand3.Parameters.AddWithValue("@NewColC", myReader2["ColC"].ToString() + index);

                                bool changedSuccessfully = false;
                                while (!changedSuccessfully)
                                {
                                    //Refresh the parameter so a retry picks up the incremented index
                                    newColC.Value = myReader2["ColC"].ToString() + index;
                                    try
                                    {
                                        myCommand3.ExecuteNonQuery();
                                        index++;
                                        changedSuccessfully = true;
                                    }
                                    catch (SqlException e)
                                    {
                                        //2601/2627 are SQL Server's unique index/constraint violation errors;
                                        //on a collision, try the next index
                                        if (e.Number == 2601 || e.Number == 2627)
                                        {
                                            index++;
                                        }
                                        else
                                        {
                                            throw; //rethrow without resetting the stack trace
                                        }
                                    }
                                }
                            }
                        }
                    }
                }

                //After all the violations are fixed, we recreate the index over (ColA, ColB, ColC) with the uniqueness constraint
                myCommand = new SqlCommand(@"DROP INDEX UQ_ColA_ColB_ColC ON [MyTable];
                CREATE UNIQUE NONCLUSTERED INDEX [UQ_ColA_ColB_ColC] ON [MyTable]([ColA] ASC, [ColC] ASC, [ColB] ASC) WHERE [ColB] != 3", connection);

                myCommand.ExecuteNonQuery();

            } catch (Exception e) {
                Console.WriteLine(e.ToString());
            }
        }

    }
}
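The alternative I mention above, collecting the first query's results in memory and then issuing the updates, would look roughly like this (an untested sketch; it skips the suffix-collision retry logic, `connectionString` stands in for my real connection string, and it assumes Id is an int and the three columns are strings):

```csharp
// Sketch: two separate passes instead of nested open readers.
var duplicates = new List<(int Id, string ColC)>();

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Pass 1: scan all rows ordered by the tuple; every row whose tuple was
    // already seen is a violation of the intended uniqueness constraint.
    var seen = new HashSet<string>();
    using (var command = new SqlCommand(
        @"SELECT Id, ColA, ColB, ColC FROM MyTable
          ORDER BY ColA, ColB, ColC, Id", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            string key = $"{reader["ColA"]}|{reader["ColB"]}|{reader["ColC"]}";
            if (!seen.Add(key)) // tuple already seen -> duplicate row
                duplicates.Add(((int)reader["Id"], reader["ColC"].ToString()));
        }
    }

    // Pass 2: the reader is closed before any update runs, so
    // MultipleActiveResultSets is no longer needed in the connection string.
    int index = 2;
    foreach (var (id, colC) in duplicates)
    {
        using (var update = new SqlCommand(
            "UPDATE MyTable SET ColC=@NewColC WHERE Id=@Id", connection))
        {
            update.Parameters.AddWithValue("@Id", id);
            update.Parameters.AddWithValue("@NewColC", colC + index++);
            update.ExecuteNonQuery();
        }
    }
}
```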
  • Sorry, I didn't understand it fully, but can't you use a stored proc and temp table for that? Commented Sep 24, 2014 at 6:05
  • I can do a stored proc, but this is a quick one-time fix that I won't be using again, so I'm just trying to do it via C# Commented Sep 24, 2014 at 6:12
  • I would look to write a single UPDATE query that corrects all rows in one go, rather than writing multiple queries and fixing each row individually. Commented Sep 24, 2014 at 6:22
  • @Damien_The_Unbeliever, can't do that. You don't know which value of 'index' will need to be appended to which row. Commented Sep 24, 2014 at 6:33
  • With something this 'big' I would definitely go for a stored procedure with parameters. Looping over readers like this adds noticeable latency, which you could fix in one go by sending the work to the database with parameters, and it prevents optimizations the database could otherwise make. "But this is a quick one-time fix" - writing SQL should be just as quick for you. Get familiar with it :). Commented Sep 24, 2014 at 6:43

1 Answer

Well - I'd say your SqlDataReader handling is wrong.
Wrap them in a using to avoid connection leaks:

using (SqlDataReader myReader = myCommand.ExecuteReader())
{
    //do stuff with myReader here.
} //the using block ensures Dispose is called

//do remainder stuff outside here.

Also, keep the connection open for as short a time as possible. Connection pooling eliminates most of the overhead of opening/closing connections, leaving you free not to worry about it, while keeping connections open too long can hinder performance.

Outside those tips - the "recommended" way to structure this is extremely subjective and logic-dependent.
Basically, it comes down to how much data needs to be moved between the server and your application, and to letting the database do the work it is optimized for while letting your code do the work .NET is optimized for - in other words, how much of the logic can/should be kept in the database and how much data should be moved into your code layer.
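As an illustration of letting the database do the work: the row-by-row renumbering in the question could, in principle, be pushed down as one set-based statement. This is only a sketch - it assumes ColC is an nvarchar column, and it does not handle the case where a suffixed value already exists in the table, which the question's retry loop guards against:

```csharp
// One round trip: number the rows within each (ColA, ColB, ColC) group and
// append the row number (2, 3, ...) to ColC on every duplicate row.
using (var command = new SqlCommand(@"
    WITH Numbered AS (
        SELECT Id, ColC,
               ROW_NUMBER() OVER (PARTITION BY ColA, ColB, ColC
                                  ORDER BY Id) AS rn
        FROM MyTable
    )
    UPDATE Numbered
    SET ColC = ColC + CAST(rn AS nvarchar(10))
    WHERE rn > 1;", connection))
{
    command.ExecuteNonQuery();
}
```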

A lot of that comes from experience and basically trying it out and then performance tuning it to see what it does best.

Edit: I saw your comment that this is a one-time thing you'll not run again. Then I wouldn't worry at all - just do it the easiest way for you now and move on.
It's rarely cost-effective to fiddle about too much with one-offs, and time is better spent on actual problems :)

2 Comments

Thanks. Even though this is a one-time thing, I wanted to know whether there's a neater way to do this so that I'm more informed for similar situations in the future.
I know - but "one-off" situations are often the time for hacks and shortcuts, to get the job done fast so you can move on to the things that need to run often and are therefore much more important :D
