How about obtaining a lock on the record first and then updating the record to mark it as "locked"? This would prevent the second instance from acquiring the lock, so its update of the record would fail. The point is to make the lock and the update one atomic step.
I'm using JDBC on the frontend... Is java.sql.Connection.TRANSACTION_SERIALIZABLE what I'm after here? – user85116 Feb 25 '10 at 18:59 I did this a few years back in an Oracle PL/SQL procedure.
First, obtain a lock on the row. Once you have the lock, check the value: if it is already "locked", back off; otherwise update it to "locked" and release your lock. This ensures the second thread can obtain the lock only after the first thread releases it, and by that point the value will already read "locked".
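A rough JDBC sketch of that idea (not the exact PL/SQL I used), reusing the settings table and column names mentioned elsewhere in this thread. Note that SELECT ... FOR UPDATE is Oracle/PostgreSQL syntax; SQL Server would need a locking hint such as WITH (UPDLOCK) instead:

    import java.sql.*;

    public class RowLockExample {
        // Tries to take the application-level lock. Returns true if we got it,
        // false if another process already holds it.
        static boolean tryAcquire(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT settingsValue FROM settings " +
                    "WHERE settingsKey = 'ProcessLock' FOR UPDATE");  // row is now locked
                 ResultSet rs = select.executeQuery()) {
                if (!rs.next() || !"0".equals(rs.getString(1))) {
                    conn.rollback();       // already locked (or row missing): give up
                    return false;
                }
            }
            try (PreparedStatement update = conn.prepareStatement(
                    "UPDATE settings SET settingsValue = 'locked' " +
                    "WHERE settingsKey = 'ProcessLock'")) {
                update.executeUpdate();
            }
            conn.commit();                 // releases the row lock; flag now reads 'locked'
            return true;
        }
    }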
– Pradeep Feb 26 '10 at 22:29 Quoting this: "One example of a transaction isolation level is TRANSACTION_READ_COMMITTED, which will not allow a value to be accessed until after it has been committed. In other words, if the transaction isolation level is set to TRANSACTION_READ_COMMITTED, the DBMS does not allow dirty reads to occur. The interface Connection includes five values which represent the transaction isolation levels you can use in JDBC.
" from the link: java.sun. Com/j2se/1.4.2/docs/api/java/sql/Statement. Html – Pradeep Feb 26 '10 at 22:47.
Make a stored procedure that hands out the lock, and run it under 'serializable' isolation. This will guarantee that one and only one process can get at the resource at any given time. Note that this means that the second process trying to get at the lock will block until the first process releases it.
Also, if you have to get multiple locks in this manner, make sure that the design of the process guarantees that the locks will be acquired and released in the same order. This avoids deadlock situations where two processes each hold resources while waiting for the other to release its locks. Unless you can't tolerate your other processes blocking, this would probably be easier to implement and more robust than attempting to implement 'test and set' semantics.
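On the calling side it might look something like this over JDBC; acquire_lock is a made-up procedure name here, and its body would have to be written separately for each database you support:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class SprocLockExample {
        // Calls a hypothetical acquire_lock() procedure under SERIALIZABLE isolation.
        // The call blocks until the lock is free, as described above.
        static void acquireLock(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (CallableStatement cs = conn.prepareCall("{call acquire_lock()}")) {
                cs.execute();
            }
            // Keep the transaction open while you hold the lock;
            // commit (or roll back) when you are done to release it.
        }
    }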
It appears that this might work, but it also appears to be a little more complicated (and possibly less "cross-platform")... I could be wrong... – user85116 Feb 25 '10 at 19:01 You might have to write a different sproc for each platform you support, but it's not hard to write some sort of plugin mechanism to support a handful of platform-specific operations. The key point of this approach is that it uses the native locking mechanisms and semantics (i.e. one process is blocked until the lock becomes free).
– ConcernedOfTunbridgeWells Feb 26 '10 at 7:29.
I've been thinking about this, and I think this is the simplest way of doing things; I just execute a command like this:

    update settings set settingsValue = '333'
    where settingsKey = 'ProcessLock' and settingsValue = '0'

Here '333' would be a unique value that each server process gets (based on date/time, server name, a random value, etc.). If no other process has locked the table, settingsValue would still be '0' and the statement would change it. If another process has already locked the table, the statement becomes a no-op and nothing gets modified.
I then immediately commit the transaction. Finally, I re-query the table for the settingsValue; if it is the value I wrote, our lock succeeded and we continue on, otherwise an exception is thrown, etc. When we're done with the lock, we reset the value back to '0'. Since I'm using the SERIALIZABLE isolation level, I can't see this causing any issues... please correct me if I'm wrong.
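In JDBC terms, the whole thing would look roughly like this (just a sketch; error handling is simplified, and the token could be something like UUID.randomUUID().toString() or date/time plus server name):

    import java.sql.*;

    public class TestAndSetLock {
        // Try to grab the lock with a unique token; true means we won it.
        static boolean tryLock(Connection conn, String token) throws SQLException {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE settings SET settingsValue = ? " +
                    "WHERE settingsKey = 'ProcessLock' AND settingsValue = '0'")) {
                ps.setString(1, token);
                ps.executeUpdate();     // no-op if someone else already holds the lock
            }
            conn.commit();
            boolean won;
            try (PreparedStatement check = conn.prepareStatement(
                    "SELECT settingsValue FROM settings WHERE settingsKey = 'ProcessLock'");
                 ResultSet rs = check.executeQuery()) {
                won = rs.next() && token.equals(rs.getString(1));
            }
            conn.commit();
            return won;
        }

        // Release the lock by putting the value back to '0'.
        static void unlock(Connection conn) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE settings SET settingsValue = '0' WHERE settingsKey = 'ProcessLock'")) {
                ps.executeUpdate();
            }
            conn.commit();
        }
    }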
What I do is: before any server starts the hour-long process, it sets a boolean flag in the ProcessTable indicating that the record is 'locked' and being processed (not all records in this table are processed/locked, so I need to specifically mark each record the process needs). So when the next server instance comes along while the previous instance is still processing, it sees the boolean flags and throws an exception. The problem is that two server instances might be activated at nearly the same time: when both check the ProcessTable, no flags may be set yet because both servers are still in the process of setting them, and since neither transaction has committed, neither process sees the locking done by the other.
This is because the locking step itself may take a few seconds, so there is a window of opportunity in which two servers might still end up processing at the same time. It appears that what I need is a single record in my 'Settings' table storing a boolean flag called 'LockInProgress'. So before a server can even lock the needed records in the ProcessTable, it must first make sure it has the right to do the locking by checking the 'LockInProgress' column in the Settings table.
So my question is: how do I prevent two servers from both modifying that LockInProgress column in the Settings table at the same time... or am I going about this in the wrong manner? Please note that I need to support both PostgreSQL and MS SQL Server, as some servers use one database and some use the other. Thanks in advance...
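To make the race concrete, the check-then-set logic each server runs today looks roughly like this (simplified sketch; the separate SELECT and UPDATE are exactly the window described above):

    import java.sql.*;

    public class CurrentRaceyCheck {
        // Two servers can both read LockInProgress as false before either one
        // commits its UPDATE, because the check and the update are not atomic.
        static void startIfFree(Connection conn) throws SQLException {
            boolean alreadyLocked;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT LockInProgress FROM Settings")) {
                alreadyLocked = rs.next() && rs.getBoolean("LockInProgress");
            }
            if (alreadyLocked) {
                throw new IllegalStateException("another server is already processing");
            }
            try (Statement st = conn.createStatement()) {
                // '1' / 'true' depending on the database; not atomic with the check above
                st.executeUpdate("UPDATE Settings SET LockInProgress = 1");
                conn.commit();
            }
            // ... flag the needed rows in ProcessTable and start the hour-long job ...
        }
    }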