
I have a stored procedure that accepts a table type parameter and inserts its values into a table. I'm calling this stored procedure and passing 1000 books in the parameter. I want to handle exceptions so that, even if one or two rows in the parameter fail to insert, the other rows still get inserted. How do I do this?

CREATE PROCEDURE InsertBooks
    @newBooks BooksTable READONLY
AS
    INSERT INTO book
        SELECT * FROM @newBooks;

Thanks in advance.

  • what kind of exceptions are you getting? Commented Apr 8, 2016 at 20:19
  • I'm getting exceptions on a UNIQUE field. Some of the values are repeating. I want to ignore the repeated rows and continue the inserts. Commented Apr 9, 2016 at 3:05
  • Specifying IGNORE_DUP_KEY or whatever that option is on the Unique Constraint / Index might be what you are looking for. Commented Apr 9, 2016 at 5:41

2 Answers


You accomplish this by setting up your insert statement to prevent individual rows from causing errors. There are two reasons a single row might fail to insert properly from your table parameter into the actual table:

  1. Schema mismatch. You prevent this by ensuring the schema of the table parameter exactly matches that of the target table.

  2. Constraint violations. Prevent this by running a filter query against the input, so bad rows get removed from the data prior to insertion. The most likely version of this would be invalid keys, but the same rule applies regardless of the constraint type.

For example, let's say your book table has a foreign key to authors, authorID. You'd filter the input like so:

SELECT nb.* 
FROM @newBooks nb 
INNER JOIN authors a
ON nb.authorID = a.authorID

Then you could insert the result of this query into books without worrying that there might be invalid authors in the new data.
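Putting that together, the procedure body might look something like the sketch below. The column names (authorID, title) are assumptions for illustration only, since the question doesn't show the actual schema of book or BooksTable:

CREATE PROCEDURE InsertBooks
    @newBooks BooksTable READONLY
AS
BEGIN
    -- Insert only rows whose authorID exists in authors;
    -- rows that would violate the foreign key are simply left out.
    INSERT INTO book (authorID, title)
    SELECT nb.authorID, nb.title
    FROM   @newBooks nb
    INNER JOIN authors a
            ON nb.authorID = a.authorID;
END;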

One additional note: if you're trying to insert specific primary key values, you must filter out duplicates and also SET IDENTITY_INSERT book ON before inserting any data. Don't forget to turn IDENTITY_INSERT back off after you're done.
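A rough sketch of that case follows, again with hypothetical column names (bookID as the identity/primary key, title as a data column): duplicates are removed both within the parameter itself and against rows already in the table before the insert runs.

-- Hypothetical column names (bookID, title); adjust to the real schema.
SET IDENTITY_INSERT book ON;

INSERT INTO book (bookID, title)
SELECT nb.bookID, nb.title
FROM  (
        -- keep one row per bookID within the parameter itself
        SELECT *, ROW_NUMBER() OVER (PARTITION BY bookID ORDER BY bookID) AS rn
        FROM   @newBooks
      ) nb
WHERE  nb.rn = 1
  AND  NOT EXISTS (SELECT 1 FROM book b WHERE b.bookID = nb.bookID);

SET IDENTITY_INSERT book OFF;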



If you simply want duplicates (as per the UNIQUE Index / Constraint) ignored and not tracked separately, then that can be done rather easily by enabling the IGNORE_DUP_KEY option of the Unique Index or Unique Constraint, as shown in the following example:

IF (OBJECT_ID(N'tempdb..#tmp') IS NOT NULL)
BEGIN
  DROP TABLE #tmp;
END;
CREATE TABLE #tmp
(
  TmpID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
  -- IGNORE_DUP_KEY = ON: rows with a duplicate Code are skipped with a
  -- warning instead of failing the whole INSERT.
  Code  VARCHAR(10) NOT NULL UNIQUE WITH (IGNORE_DUP_KEY = ON),
  OtherValue INT NOT NULL
);

INSERT INTO #tmp (Code, OtherValue)
  SELECT tab.ColA, tab.ColB
  FROM   (
          VALUES ('A', 1), ('B', 1), ('C', 1), ('B', 2),
                 ('B', 3), ('C', 2), ('D', 1), ('B', 4)
         ) tab(ColA, ColB)
  --ORDER BY tab.ColB DESC;  -- uncomment to affect which rows survive

SELECT *
FROM   #tmp;

Returns:

-- no ORDER BY (i.e. ORDER BY is commented out):

TmpID   Code   OtherValue
1       A      1
2       B      1
6       C      2
7       D      1


-- using the ORDER BY:

TmpID   Code   OtherValue
4       C      2
6       D      1
7       B      1
8       A      1

Please be aware:

  • As shown in the example output above, when there are duplicates, you cannot fully control which item is inserted first and which ones are considered the duplicates.

  • There is a performance hit for enabling this option. Please see the following blog post for additional details: Maintaining Unique Indexes with IGNORE_DUP_KEY
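To apply this to the existing book table, you would rebuild its unique index with the option enabled. The index and column names below (UQ_book_ISBN, ISBN) are assumptions for illustration; if uniqueness is enforced by a UNIQUE constraint rather than an index, you would need to drop and re-create the constraint instead:

-- Assumed index/column names; replace with the real unique index on book.
CREATE UNIQUE NONCLUSTERED INDEX UQ_book_ISBN
    ON book (ISBN)
    WITH (IGNORE_DUP_KEY = ON, DROP_EXISTING = ON);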

