That design would be deadlock prone - typically (not always) one connection is unlikely to deadlock itself, but multiple connections that do inserts and aggregates against the same table are very likely to deadlock. That's because, while all activity in one transaction looks complete from the point of view of the connection doing the work -- the db won't lock a transaction out of "its own" records -- the aggregate queries from OTHER transactions would attempt to lock the whole table or large portions of it at the same time, and those would deadlock. Read Uncommitted is not your friend in this case, because it basically says "ignore locks," which at some point will mean violating the rules you've set up around the data. I.e. the count of records in the table will be inaccurate, and you'll act on that inaccurate count.
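To make the shape of this concrete (the table and column names below are my assumptions, not from the question), this is roughly the pattern that deadlocks, followed by the Read Uncommitted version that trades deadlocks for wrong answers:

    -- Hypothetical schema, for illustration only: Voucher(MallID, IssuedOn, ...)
    DECLARE @MallID int;
    SET @MallID = 1;

    -- Imagine many connections running this block at the same time:
    BEGIN TRANSACTION;

        INSERT INTO Voucher (MallID, IssuedOn)
        VALUES (@MallID, GETDATE());

        -- The aggregate has to read (and share-lock) a large range of the table, while
        -- other transactions hold exclusive locks on their own newly inserted rows.
        -- Two connections doing this against each other is a classic deadlock.
        SELECT COUNT(*)
        FROM Voucher
        WHERE MallID = @MallID;

    COMMIT TRANSACTION;

    -- The tempting "fix" -- Read Uncommitted / NOLOCK -- simply ignores those locks, so
    -- the count can include rows another transaction will roll back, or act on a total
    -- that is already stale by the time you use it:
    SELECT COUNT(*)
    FROM Voucher WITH (NOLOCK)
    WHERE MallID = @MallID;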
Your count will return 10 or 13 when the real answer is 11. The best advice I have is to rearrange your insert logic so that you capture the idea of the count without literally counting the rows. You could go in a couple of different directions.
One idea I have is this: literally number the inserted vouchers with a sequence and enforce a limit on the sequence itself.

Make a sequence table with columns (I am guessing) MallID, NextVoucher, MaxVouchers. Seed that table with the MallIDs, 1, and whatever the limit is for each mall. Then change the insert logic to this pseudo code:

    Begin Transaction
        Sanity check NextVoucher for the mall in the sequence table; if too many exist, abort
        If NextVoucher is less than MaxVouchers for the mall then
        {
            Check, fetch, lock and increment NextVoucher
            If the increment was successful, use the value of NextVoucher to perform your
            insert and include it in the target table
        }
        Error? Rollback
        No error? Commit

A sequence table like this hurts concurrency some, but I think not as much as counting the rows in the table constantly. Be sure to perf test.
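In T-SQL the sequence table itself might look something like this (a minimal sketch; the table name MallVoucherSequence and the seed values are invented for illustration):

    -- One row per mall: the next voucher number to hand out, and the cap for that mall.
    CREATE TABLE MallVoucherSequence
    (
        MallID      int NOT NULL PRIMARY KEY,
        NextVoucher int NOT NULL,
        MaxVouchers int NOT NULL
    );

    -- Seed with each MallID, 1, and whatever that mall's limit is (values here are made up).
    INSERT INTO MallVoucherSequence (MallID, NextVoucher, MaxVouchers) VALUES (1, 1, 5000);
    INSERT INTO MallVoucherSequence (MallID, NextVoucher, MaxVouchers) VALUES (2, 1, 2500);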
Also, the check, fetch, lock and increment step is important: you have to exclusively lock the row in the sequence table to prevent some other connection from using the same value in the split second before you increment it. I know the SQL syntax for this, but I'm afraid I am no ... For read uncommitted data errors, check this out: http://sqlblog.com/blogs/merrill_aldrich/archive/2009/07/29/transaction-isolation-dirty-reads-deadlocks-demo.aspx (disclaimer: Merrill Aldrich is me :-).
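For what it's worth, here is one way the check/fetch/lock/increment could look in SQL Server T-SQL. This is a sketch under the assumptions above (the MallVoucherSequence table, and a Voucher table with a VoucherNumber column), not necessarily the exact syntax the answer had in mind:

    DECLARE @MallID int;
    SET @MallID = 1;            -- whichever mall we're issuing a voucher for

    DECLARE @next int, @max int;

    BEGIN TRANSACTION;

        -- UPDLOCK + HOLDLOCK takes and holds an update lock on this mall's sequence row,
        -- so no other connection can read the same NextVoucher before we increment it.
        SELECT @next = NextVoucher,
               @max  = MaxVouchers
        FROM MallVoucherSequence WITH (UPDLOCK, HOLDLOCK)
        WHERE MallID = @MallID;

        IF @next <= @max
        BEGIN
            UPDATE MallVoucherSequence
            SET NextVoucher = NextVoucher + 1
            WHERE MallID = @MallID;

            -- Use the reserved number in the insert, so the limit is enforced by the
            -- sequence row rather than by counting rows in the voucher table.
            INSERT INTO Voucher (MallID, VoucherNumber, IssuedOn)
            VALUES (@MallID, @next, GETDATE());

            COMMIT TRANSACTION;
        END
        ELSE
            ROLLBACK TRANSACTION;   -- limit reached: abort without issuing a voucher

Real code would also wrap this in TRY...CATCH (or check @@ERROR) and roll back on any failure, per the "Error? Rollback" step in the pseudo code above.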
Bingo! I knew the aggregate following the insert was likely the source of the deadlock, but I wasn't sure of the best way to avoid it. Despite it being somewhat clunky, I've added a sequence table; on each iteration I retrieve the mall's row from it with an update lock. I then increment that sequence number, write the row back, and perform the voucher insert, all within a Read Committed transaction. Extensive performance testing shows this will not be a problem, and it does indeed enforce the business rule correctly. Thanks for the input! – Chris Aug 5 '09 at 19:18

No prob - glad it works for you. – onupdatecascade Aug 6 '09 at 2:49

Could you please explain how to approach the problem of one table having many inserts and updates? – Baz1nga Mar 11 at 4:40
Theoretically, vouchers would never be deleted or even updated. Also, I tend to avoid triggers for performance reasons (although purity is a good one too). – Chris Aug 4 '09 at 21:00

But let's say vouchers get deleted often. What would be the proper way to ensure that, by inserting a voucher, I've never exceeded some arbitrary limit? – Chris Aug 4 '09 at 21:01

If vouchers didn't get deleted, I'd just check (mall.Vouchers.Count < TotalVouchers) before attempting the insert. – JBland Aug 4 '09 at 22:00

JBland - what if 100 other connections are inserting rows, and (say) three inserts happen in the time between the count and the insert for our connection? – onupdatecascade Aug 4 '09 at 5:35