"YOU AND THE ART OF ONLINE DATING" is the only product on the market that will take you step-by-step through the process of online dating, provide you with the resources to help ensure success. Get it now!
It appears that you are trying to do something like grab the highest priority item in a queue that is not already being taken care of by another process. A likely solution is to add a where clause limiting it to unhandled requests:

    select * from queue where flag=0 order by id desc for update;
    update queue set flag=1 where id=:id;

    -- if you really want the lock:
    select * from queue where id=:id for update;
    ...

Hopefully, the second transaction will block while the update to the flag happens; then it will be able to continue, but the flag will limit it to the next in line. It is also likely that by using the serializable isolation level, you can get the result you want without all of this insanity.
Depending on the nature of your application, there may be better ways of implementing this than in the database, such as a FIFO or LIFO pipe. Additionally, it may be possible to reverse the order that you need them in, and use a sequence to ensure that they are processed sequentially.
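A sketch of the serializable alternative mentioned above (same queue/flag schema as in that answer; the retry handling is an assumption about how the client would react to the error):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- find the next unhandled item
    SELECT id FROM queue WHERE flag = 0 ORDER BY id DESC LIMIT 1;
    -- claim it; a concurrent claimer of the same row fails here with a
    -- serialization error (SQLSTATE 40001) instead of silently double-claiming
    UPDATE queue SET flag = 1 WHERE id = :id;  -- :id from the SELECT above
    COMMIT;  -- on a serialization failure, roll back and retry the transaction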
That works. Consider having an "immediate" status of "locked". Let's say your table is like this:

    id | name | surname | status

And possible statuses, for example, are: 1=pending, 2=locked, 3=processed, 4=fail, 5=rejected. Every new record gets inserted with status pending (1). Your program does:

    update mytable
    set status = 2
    where id = (
        select id from mytable
        where name like '%John%' and status = 1
        limit 1
    )
    returning id, name, surname;

Then your program does its thing, and if it comes to the conclusion that this thread shouldn't have processed that row at all, it does:

    update mytable set status = 1 where id = ?;
Otherwise it updates the row to one of the other statuses.
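To complete the picture, the remaining transitions from the status list above would just be simple updates (a sketch; ? stands for the claimed row's id, as in the revert example):

    -- processing finished successfully
    update mytable set status = 3 where id = ?;
    -- processing failed
    update mytable set status = 4 where id = ?;
    -- row was examined and rejected
    update mytable set status = 5 where id = ?;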
If you are trying to implement a queue, take a look at PGQ, which has solved this and other problems already: wiki.postgresql.org/wiki/PGQ_Tutorial
Looks like you're looking for a SELECT FOR SHARE. postgresql.org/docs/8.3/interactive/sql-... FOR SHARE behaves similarly, except that it acquires a shared rather than exclusive lock on each retrieved row. A shared lock blocks other transactions from performing UPDATE, DELETE, or SELECT FOR UPDATE on these rows, but it does not prevent them from performing SELECT FOR SHARE.
If specific tables are named in FOR UPDATE or FOR SHARE, then only rows coming from those tables are locked; any other tables used in the SELECT are simply read as usual. A FOR UPDATE or FOR SHARE clause without a table list affects all tables used in the command. If FOR UPDATE or FOR SHARE is applied to a view or sub-query, it affects all tables used in the view or sub-query.
Multiple FOR UPDATE and FOR SHARE clauses can be written if it is necessary to specify different locking behavior for different tables. If the same table is mentioned (or implicitly affected) by both FOR UPDATE and FOR SHARE clauses, then it is processed as FOR UPDATE. Similarly, a table is processed as NOWAIT if that is specified in any of the clauses affecting it.
FOR UPDATE and FOR SHARE cannot be used in contexts where returned rows cannot be clearly identified with individual table rows; for example they cannot be used with aggregation.
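A quick illustration of the quoted behavior (the queue table and row id are made up for the example):

    -- Session A: acquire a shared lock on one row
    BEGIN;
    SELECT * FROM queue WHERE id = 4 FOR SHARE;

    -- Session B: another shared lock on the same row succeeds immediately
    SELECT * FROM queue WHERE id = 4 FOR SHARE;

    -- Session B: but exclusive operations block until session A commits
    UPDATE queue SET flag = 1 WHERE id = 4;        -- waits
    SELECT * FROM queue WHERE id = 4 FOR UPDATE;   -- would also wait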
I don't think this solves my problem, which I may not have clearly stated above. Each thread wants to get the first row that it is able to update, and then update that row. Continued below. – idontwanttortfm Dec 23 '08 at 18:00
So if there are rows with IDs 1, 2, 3, and 4 in the table that match the WHERE clause conditions, the first thread would update the row with ID 4, the next thread would update the row with ID 3, etc. – idontwanttortfm Dec 23 '08 at 18:00
Hmm, sorry, I don't know then. – Steven Behnke Dec 23 '08 at 19:51
This can be accomplished with SELECT ... FOR UPDATE NOWAIT; an example is here.
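A minimal sketch of that pattern, assuming the queue/flag schema used elsewhere in this thread:

    BEGIN;
    -- try to lock the next row; fail immediately instead of blocking
    -- if another transaction already holds the lock
    SELECT * FROM queue
    WHERE flag = 0
    ORDER BY id DESC
    LIMIT 1
    FOR UPDATE NOWAIT;  -- raises "could not obtain lock on row" if it is taken
    -- on error: roll back and retry; otherwise process the row and
    COMMIT;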
    select ... order by id desc offset THREAD_NUMBER limit 1 for update

This won't work well if the relevant portion of the table is being updated frequently; you'll still have collisions, but only during heavy insert load.
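Concretely, each worker substitutes its own number for THREAD_NUMBER (a sketch; the queue/flag schema is assumed):

    -- worker 0 takes the newest matching row
    select * from queue where flag = 0 order by id desc offset 0 limit 1 for update;
    -- worker 1 takes the second-newest, and so on
    select * from queue where flag = 0 order by id desc offset 1 limit 1 for update;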
I don't think so, which leaves me doing a select followed by an update. Basically, any way I do the select, the second thread will end up with a failed query because it selects the same row as the first. – idontwanttortfm Dec 23 '08 at 18:15
Since I haven't found a better answer yet, I've decided to use locking within my app to synchronize access to the code that does this query.
It might be treated more atomically than the other examples, but it should still be tested to make sure my assumptions aren't wrong.

    UPDATE users
    SET flags = 1
    WHERE id = (
        SELECT id FROM users
        WHERE flags = 0
        ORDER BY id DESC
        LIMIT 1
    )
    RETURNING ...;

You'll probably still be stuck with whatever locking scheme Postgres uses internally to supply consistent SELECT results in the face of simultaneous UPDATEs.
I faced the same problem in our application and came up with a solution that is very similar to Grant Johnson's approach. A FIFO or LIFO pipe was not an option because we have a cluster of application servers accessing one DB. What we do is a SELECT ... WHERE FLAG=0 ... FOR UPDATE immediately followed by an UPDATE ... SET FLAG=1 WHERE ID=:id as soon as possible, in order to keep the lock time as low as possible.
Depending on the table's column count and sizes, it might help to fetch only the ID in the first select and, once you've marked the row, fetch the remaining data. A stored procedure can reduce the number of round-trips even more.
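A sketch of that sequence (the queue/flag schema is assumed, as above; :id is the value returned by the first statement):

    BEGIN;
    -- lock the next unhandled row, fetching only its id to keep the result small
    SELECT id FROM queue WHERE flag = 0 ORDER BY id DESC LIMIT 1 FOR UPDATE;
    -- mark it as taken right away to keep the lock window short
    UPDATE queue SET flag = 1 WHERE id = :id;
    COMMIT;
    -- afterwards, fetch the remaining columns without holding any row lock
    SELECT * FROM queue WHERE id = :id;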
I use something like this:

    select *
    into l_sms
    from sms
    where prefix_id = l_prefix_id
      and invoice_id is null
      and pg_try_advisory_lock(sms_id)
    order by suffix
    limit 1;

And don't forget to call pg_advisory_unlock.
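For completeness, the matching unlock after the row has been processed could look like this (a sketch, assuming the same PL/pgSQL function that ran the select above):

    -- release the advisory lock taken by pg_try_advisory_lock(sms_id)
    PERFORM pg_advisory_unlock(l_sms.sms_id);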
My solution is to use the UPDATE statement with the RETURNING clause.

    Users
    -----------------------------------
    ID | Name | flags
    -----------------------------------
    1  | bob  | 0
    2  | fred | 1
    3  | tom  | 0
    4  | ed   | 0

Instead of SELECT ... FOR UPDATE, use:

    BEGIN;
    UPDATE "Users" SET ... WHERE ... RETURNING ( column list );
    COMMIT;

Because the UPDATE statement obtains a ROW EXCLUSIVE lock on the table it's updating, you get serialized updates. Reads are still allowed, but they only see data from before the start of the UPDATE transaction.
Reference: Concurrency Control Chapter of Pg docs.
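A concrete version of that template using the sample Users table above (the exact SET and WHERE conditions are my illustration, not from the answer):

    BEGIN;
    UPDATE "Users"
    SET flags = 1
    WHERE flags = 0  -- re-checked on conflict, so a blocked claimer updates 0 rows
      AND "ID" = (
        SELECT "ID" FROM "Users"
        WHERE flags = 0
        ORDER BY "ID" DESC
        LIMIT 1
    )
    RETURNING "ID", "Name";
    COMMIT;

Repeating flags = 0 in the outer WHERE means that if two sessions race for the same row, the loser simply updates zero rows and can retry, rather than re-claiming the row that was just taken.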
No No NOOO :-) I know what the author means. I have a similar situation and I came up with a nice solution. First I will start by describing my situation.
I have a table in which I store messages that have to be sent at a specific time. PG doesn't support scheduled execution of functions, so we have to use daemons (or cron). I use a custom-written script that opens several parallel processes.
Every process selects a set of messages that have to be sent with the precision of +1 sec / -1 sec. The table itself is dynamically updated with new messages. So every process needs to download a set of rows.
This set of rows cannot be downloaded by another process, because that would make a lot of mess (some people would receive a couple of messages when they should receive only one). That is why we need to lock the rows. The query to download a set of messages with the lock:

    FOR messages IN
        SELECT * FROM public.Messages
        WHERE sendTime >= CURRENT_TIMESTAMP - '1 SECOND'::INTERVAL
          AND sendTime <= CURRENT_TIMESTAMP + '1 SECOND'::INTERVAL
          AND sent = FALSE
          AND pg_try_advisory_lock(id)
    LOOP
        -- send the message
    END LOOP;

Adding more things to this function would only make it slower. It takes longer to check this clause anyway, so there is no need to make it even slower. For me this is the complete solution and it works perfectly.
Now when I have my 50 processes running in parallel, every process has a unique set of fresh messages to send. Once they are sent I just update the row with sent = TRUE and never go back to it again. I hope this solution will also work for you (author).
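In SQL terms that last step might look like this (a sketch; the explicit unlock is my assumption, since session-level advisory locks are otherwise held until the connection closes):

    UPDATE public.Messages SET sent = TRUE WHERE id = :id;
    SELECT pg_advisory_unlock(:id);  -- assumed: release the advisory lock taken in the loop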
Why didn't I simply use FOR UPDATE? A process with this query is started every 0.5 sec. With plain row locks, each new query would wait for the first one to unlock the rows, and that approach creates enormous delays. Even when we use NOWAIT, the query results in an exception, which we don't want, because there might be new messages in the table that have to be sent. That is why the query above checks pg_try_advisory_lock instead.
If you have any questions just let me know :-) Oh, and let me know if this worked for you as well.