
I'm running into the following issue in Oracle SQL. gt_general2 is a global temp table.

INSERT INTO gt_general2 (n_1, n_2, n_3)
select p.x_id, a.y_id, a.dollar_amt
from table_P p, table_A a
where p.x_id = a.x_id
  and a.h_date = i_date
  and a.dollar_amt > 0
for update of p.x_id, a.dollar_amt nowait; --this is the problem statement

Oracle rejects the statement when I combine the INSERT ... SELECT syntax with SELECT ... FOR UPDATE. I would like to lock the rows while doing this insert because the real query is fairly complex (it checks against other views to make sure parent records exist, etc.), and this way I can populate the temp table and take the row locks in a single pass.

Is there some other syntax I'm missing that would allow me to do this? On a conceptual level, I don't see any reason why Oracle wouldn't allow me to select rows from tables P and A, lock them in the process, and insert the values into another table Z. I already can do this after all:

select p.x_id, a.y_id
BULK COLLECT into x_id_list, y_id_list
from table_P p, table_A a
where ...
for update nowait;

It's of course possible to lock the rows by running the query twice: once to lock, and once to insert. I did this previously, using the lock query to retrieve the "x_id" values into an array to drive the INSERT INTO query. But the problem I ran into was that the x_id value alone wasn't enough to identify the rows properly; I needed the pair (x_id, y_id), and I can't create a new nested table type to store it.

  • "can't create a new nested table type to store it" ... because of technical limitations, or privileges? Commented Jan 20, 2014 at 16:22

2 Answers


You can't combine an INSERT with a SELECT FOR UPDATE in a single statement. You can, however, use a regular SELECT FOR UPDATE cursor and combine a bulk fetch with a bulk insert to achieve comparable performance.

You also don't need to create a new permanent SQL type object: you can declare the collection type directly in the PL/SQL code.

Here's an example; first, the setup:

SQL> CREATE TABLE table_a (x_id NUMBER, y_id NUMBER, dollar_amt NUMBER);
Table created
SQL> CREATE TABLE table_p (x_id NUMBER);
Table created
SQL> INSERT INTO table_a VALUES (1, 1, 0);
1 row inserted
SQL> INSERT INTO table_a VALUES (2, 1, 10);
1 row inserted
SQL> INSERT INTO table_a VALUES (3, 1, 20);
1 row inserted
SQL> INSERT INTO table_p VALUES (2);
1 row inserted
SQL> INSERT INTO table_p VALUES (3);
1 row inserted
SQL> INSERT INTO table_p VALUES (4);
1 row inserted
SQL> CREATE GLOBAL TEMPORARY TABLE gt_general2(
  2     n_1 NUMBER, 
  3     n_2 NUMBER, 
  4     n_3 NUMBER);
Table created

The lock and insert procedure (tested on 11.2.0.2.0):

SQL> DECLARE
  2     CURSOR cc IS
  3        SELECT p.x_id, a.y_id, a.dollar_amt
  4          FROM table_P p, table_A a
  5         WHERE p.x_id = a.x_id
  6           --AND a.h_date = i_date
  7           AND a.dollar_amt > 0
  8           FOR UPDATE OF p.x_id, a.dollar_amt NOWAIT;
  9     TYPE table_rec IS TABLE OF cc%ROWTYPE;
 10     l_table table_rec;
 11  BEGIN
 12     OPEN cc;
 13     LOOP
 14        -- bulk fetch
 15        FETCH cc BULK COLLECT INTO l_table LIMIT 100;
 16        EXIT WHEN l_table.count=0;
 17        -- bulk insert
 18        FORALL i IN 1..l_table.count
 19           INSERT INTO gt_general2 (n_1, n_2, n_3)
 20              VALUES (l_table(i).x_id,
 21                      l_table(i).y_id,
 22                      l_table(i).dollar_amt);
 23     END LOOP;
 24     CLOSE cc;
 25  END;
 26  /
PL/SQL procedure successfully completed

This does the bulk select and bulk insert in batches of 100 rows in a single pass, which should be faster than a lock statement followed by a separate insert statement. Plus you're guaranteed that the rows inserted are exactly the same as the rows locked (whereas two statements executed one after the other could return inconsistent results if the table is modified in between).


4 Comments

Hey, thanks for the suggestion. I ran into a problem with this, though. It compiled fine, but crashed when I ran it: "Error starting at line : 7 in command - Error report - No more data to read from socket". I'm guessing this caused some sort of unhandled exception, since I have an exception block in the small test program I created with this concept.
I consistently get the error. It's caused by the code that fetches the cursor into the nested table, since that part still fails after commenting out all the subsequent bits. In fact, even without the bulk collect, this still fails: x_count := 0; x_lock_updts.EXTEND; --LOCK ROWS FOR UPDATE; OPEN cx_lock; LOOP x_count := x_count + 1; FETCH cx_lock INTO x_lock_updts(x_count); EXIT WHEN cx_lock%NOTFOUND; x_lock_updts.EXTEND; END LOOP; CLOSE cx_lock;
Something else is wrong with your setup; I've included a fully working example. What version are you using? If you're on 10g or older, you might want to split the table of records into three nested tables and bulk fetch into them instead.
It's 11g R2. I spoke to our DBA and he said there are some bugs in the Oracle version we have and it's probably one of those; he also said it's a pain to upgrade to the next version, so I think I'll have to work around it. This is very useful info though, as I've been playing around with collections in general to try to speed things up, and I didn't know you could define nested tables on cursor rowtypes.

It really depends on why you want to do this.

If you need access to the values in the rows as of the beginning of the transaction instead of as of the beginning of each query, regardless of whether they change or not, then I would look into altering the session to use the SERIALIZABLE isolation level instead of the default READ COMMITTED.
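For reference, a minimal sketch of switching to serializable mode; either statement below is standard Oracle syntax, and the transaction-level form affects only the next transaction:

ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
-- or, for a single transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;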

If you really are going back to update those rows, then I'd look at using common table expressions (CTEs) instead of global temporary tables, to encapsulate all the logic in a single query. In many cases, GTTs and CTEs amount to much the same optimisation, although of course you can't gather statistics on a CTE part way through a process, or create an index against it.
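As a hedged sketch of the CTE approach, reusing the question's tables (the date filter is omitted since i_date is a bind variable in the original): the intermediate row set lives in a WITH clause rather than in gt_general2, and later steps of the query can join back to it.

WITH eligible AS (
   SELECT p.x_id, a.y_id, a.dollar_amt
     FROM table_P p, table_A a
    WHERE p.x_id = a.x_id
      AND a.dollar_amt > 0
)
SELECT e.x_id, e.y_id, e.dollar_amt
  FROM eligible e;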

1 Comment

It's the second case. The process is to: 1) find the rows in the core tables through a query and lock them for update; 2) insert a bunch of values derived from those rows into a temp table; 3) perform a lot of redistribution calculations between the dollar amounts in the temp table through a cursor; 4) use the temp table to also store parent sequence NEXTVAL values for access by the child records, and insert into a separate transaction log table; 5) go back and update the core tables with the new values. I tried using WITH () tables but the process is too complicated for them to be of much use.
