We're about to start a new process. It will involve over fifty tables with a total of more than 2 million rows. The process will loop in a For/For Each container. Inside the loop, the tables will go through different operations, mostly updates (the most frequent one will be a delete that looks for duplicates). At the end we'll get a new table with the full content of all fifty tables, with all the updates applied.
So my question is: in terms of speed, is it better to look for duplicates in each table on every loop iteration, or to do a single full delete at the end of the process against the complete result? The total amount of work will be more or less the same.
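To illustrate, the per-loop duplicate delete would be roughly of this form (just a sketch; dbo.Staging_Table and the key columns are placeholders for our real names):

    ;WITH dupes AS (
        SELECT ROW_NUMBER() OVER (
                   PARTITION BY KeyCol1, KeyCol2  -- columns that define a duplicate
                   ORDER BY (SELECT NULL)         -- arbitrary row kept as the survivor
               ) AS rn
        FROM dbo.Staging_Table                    -- the table loaded in this iteration
    )
    DELETE FROM dupes
    WHERE rn > 1;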
Thanks a lot.
EDIT.
More info.
The loop is pretty much required. The 50 tables live on two different sources, one Oracle and one Access. The loop copies them into the local SQL Server, and on every iteration I run some updates and other work on the table just loaded so it's ready.
The question is whether the work we do on the tables is faster if it's run inside the loop or after it.
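For comparison, the end-of-process alternative would be the same kind of statement run once against the combined table (again a sketch; dbo.Final_Table is a placeholder):

    ;WITH dupes AS (
        SELECT ROW_NUMBER() OVER (
                   PARTITION BY KeyCol1, KeyCol2
                   ORDER BY (SELECT NULL)
               ) AS rn
        FROM dbo.Final_Table                      -- all 50 tables already appended
    )
    DELETE FROM dupes
    WHERE rn > 1;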
Thanks, I hope that makes it clearer.