
My application generates the ID number when registering a new customer and then inserts it into the customer table.

The ID is generated by reading the last ID number in the table, incrementing it by one, and inserting the result along with the new record.

The application will be used in a network environment with more than 30 users, so there is a real chance that at least two users read the same last ID number at the saving stage, which means both will end up with the same ID number.
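Roughly, the pattern looks like this (the table and column names here are just placeholders):

-- Read the highest ID currently in the table into a session variable ...
SELECT COALESCE(MAX(customer_id), 0) + 1 INTO @next_id FROM customer;

-- ... then insert the new row using that value. Two sessions that run the
-- SELECT before either one's INSERT commits will compute the same @next_id.
INSERT INTO customer (customer_id, name) VALUES (@next_id, 'New Customer');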

I'm also using transactions. I need a logical solution that I couldn't find on other sites.

Please reply with a description so that I can understand it properly.

3 Comments
  • Are you aware of the auto_increment field property? It creates an automatically incrementing number that is managed by the database, so duplicates are impossible. Commented Nov 25, 2011 at 14:36
  • Make sure you have a unique constraint (or primary key constraint) on the ID number column. That should be there regardless of other mechanics. With that constraint in place, even if two client processes deduce the same number for the 'next ID', only one of them will succeed on INSERT. The other would have to detect the 'duplicate key in a unique constraint' error and try again. But an autoincrement column is much the better solution. Commented Nov 26, 2011 at 21:36
  • Also, the autoincrement mechanism avoids the locking issues which can affect SELECT MAX(ID)+1 type mechanisms. Other DBMS have equivalent mechanisms - SERIAL columns, SEQUENCE types, etc. Commented Nov 26, 2011 at 21:38

1 Answer


Use an auto-increment column; you can get the last ID issued with mysql_insert_id (SELECT LAST_INSERT_ID() on the SQL side).

If for some reason that's not doable, you can create another table to hold the last ID used; you increment that in one transaction, then use it as the key for your insert into the customer table. It has to be two transactions though, otherwise you'll have the same issue you have now. That can get messy and is an extra level of maintenance (reset your next-id table to zero while there are still rows in the related table and things go belly up quickly).
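A rough sketch of that approach, assuming InnoDB and a made-up counter table called customer_seq:

-- One-off setup: a single-row table that remembers the last ID handed out.
CREATE TABLE customer_seq (last_id INT NOT NULL);
INSERT INTO customer_seq VALUES (0);

-- First transaction: claim the next ID. FOR UPDATE locks the row so two
-- sessions cannot read the same value at the same time.
START TRANSACTION;
SELECT last_id + 1 INTO @next_id FROM customer_seq FOR UPDATE;
UPDATE customer_seq SET last_id = @next_id;
COMMIT;

-- Second transaction: use the claimed ID for the actual customer insert.
START TRANSACTION;
INSERT INTO customer (customer_id, name) VALUES (@next_id, 'New Customer');
COMMIT;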

Short of putting an exclusive lock on the table during the insert operation (not even slightly recommended), your current solution just can't work.

Okay, expanded answer based on leaving the schema as it is.

Option 1 in pseudocode:

StartTransaction
try
  NextId = GetNextId(...)
  AddRecord(NextId, ...)
  CommitTransaction
catch PrimaryKeyViolation
  RollbackTransaction
  Do the entire thing again
end

Obviously you could end up in an infinite loop here. It's unlikely but possible, and if the retry is implemented recursively you'd probably run out of stack space first.

You could somehow queue the requests and then attempt to process them, removing each one from the queue when it succeeds.

BUT make customerid an auto-increment column and the entire problem disappears. It will still be the primary key; you just don't have to work out what it needs to be any more. In fact you don't supply it in the insert statement at all, and MySQL will just take care of it for you.
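For example (every column other than customerid here is made up):

CREATE TABLE customer (
  customerid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(100) NOT NULL
);

-- No customerid in the column list; MySQL assigns the next value itself.
INSERT INTO customer (name) VALUES ('New Customer');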

The only thing you have to remember, if you need the ID that has just been automatically created, is to request it in the same transaction.

So your insert query needs to be of the form:

INSERT INTO SomeTable (SomeColumns) VALUES (SomeValues);
SELECT LAST_INSERT_ID();

Or, if issuing multiple statements gets in the way, wrap the two statements in a START TRANSACTION ... COMMIT pair.
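Something like this, for instance (SomeTable, SomeColumns and SomeValues are the placeholders from above):

START TRANSACTION;
INSERT INTO SomeTable (SomeColumns) VALUES (SomeValues);
-- LAST_INSERT_ID() is per connection, so this returns the ID generated
-- by the INSERT above, not one created by another user's session.
SELECT LAST_INSERT_ID();
COMMIT;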


2 Comments

Thanks, but as I'm using a transaction and the customer ID is the primary key, one of the transactions will be rolled back, correct? So is there a way to detect the rollback and retry the save statement for the user whose transaction was rolled back, without the end user noticing what is going on?
Answer expanded to explain. Erm, re-order the save statement?
