Lock the rows until the next select in Postgres?

I don't think this is possible. You cannot block read-only access to a table (unless that select is done FOR UPDATE). As far as I can tell, the only chance you have is to use the pg_advisory_lock() function: postgresql.org/docs/current/static/funct... But this requires a "manual" release of the locks obtained through it; you won't get automatic unlocking with that.

To lock the rows you would need something like this:

    select pg_advisory_lock(id), t.*
    from (
      select * from table1 order by id limit 5
    ) t;

(Note the use of the derived table for the LIMIT part. See the manual link I posted for an explanation.)

Then you need to store the retrieved IDs and later call pg_advisory_unlock() for each ID. If each process always releases all IDs at once, you could simply use pg_advisory_unlock_all() instead; then you will not need to store the retrieved IDs.

Note that this will not prevent others from reading the rows using "normal" selects. It will only work if every process that accesses the table uses the same pattern for obtaining the locks.
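To make the release side concrete, a sketch (the literal IDs stand in for whatever the select above returned):

    -- ... process the rows, then release each lock explicitly:
    select pg_advisory_unlock(1);
    select pg_advisory_unlock(2);
    -- ...

    -- or, if this session always releases everything at once:
    select pg_advisory_unlock_all();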


@ahwnn Excellent, manual locking and unlocking won't be any problem for me. The link is useful for the required solution, but I can't find anything about whether unlocking can be done by the same system as the one which locks it, especially pg_advisory_unlock_all(). If it works I will call that before calling pg_try_advisory_lock(). – kiranking Jan 1 at 13:27

@ahwnn "select * from table1 where pg_try_advisory_lock(id) order by id limit 5;" worked: it locks the rows from sys1, and sys2 does not get those rows. But I couldn't unlock the rows with pg_advisory_unlock_all() from sys1. When the browser tab (Opera and Mozilla) is closed, though, the locked rows are released.

If I call pg_advisory_unlock_all() before selecting the rows, i.e. at the beginning of the POST processing in PHP, will it unlock the rows previously locked? – kiranking Jan 1 at 20:08

It looks like you really have a transaction which transcends the borders of your database, and all the change happens in another system. My idea is select ... for update nowait to lock the relevant rows, then offload the data into the other system, then rollback to unlock the rows. No two select ... for update queries will select the same rows, and the second select will fail immediately rather than wait and proceed.
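A minimal sketch of that pattern (table and column names are placeholders):

    begin;
    -- grab a batch of rows; error out at once if another session holds them
    select * from table1
    order by id
    limit 5
    for update nowait;

    -- ... offload the selected rows to the external system ...

    rollback;  -- releases the row locks without having changed any data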

But you don't seem to mark offloaded records in any way; I don't see why two non-consecutive selects wouldn't happily select overlapping ranges. So I'd still update the records with a flag and/or a target user name, and would only select records with the flag unset; see the sketch below.
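One way that flag could look; claimed_by is an invented column name, purely for illustration:

    -- claim a batch: mark unclaimed rows and return them in one statement
    update table1
    set claimed_by = 'worker-1'
    where id in (
      select id from table1
      where claimed_by is null
      order by id
      limit 5
    )
    returning *;

    -- ... process the batch ...

    -- reset the flag once the batch has been handled
    update table1
    set claimed_by = null
    where claimed_by = 'worker-1';

Unlike advisory locks or open transactions, such a claim survives disconnects and pooled connections, at the cost of an extra write (and of having to clean up flags left behind by crashed workers).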

(1) For select ... for update to work properly, does it compulsorily have to be inside a transaction? I executed two different queries in pgAdmin's two query panels, but got the same rows. (2) Once the rows are selected, do I have to run an update on the rows to unlock them? – kiranking Jan 1 at 19:24

(1) You have to begin a transaction for this to work, and run two different sessions (I used 2 instances of psql); then it works as advertised. (2) No, you don't have to actually update anything; you can rollback right after the select for update just fine. No update or select for update will touch these rows until rollback or commit is issued. – 9000 Jan 1 at 20:14
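For instance, in two psql sessions (an illustrative transcript, not taken from the thread):

    -- session A
    begin;
    select * from table1 order by id limit 5 for update;

    -- session B, while A's transaction is still open
    begin;
    select * from table1 order by id limit 5 for update nowait;
    -- ERROR:  could not obtain lock on row in relation "table1"

    -- session A again
    rollback;   -- session B may now lock those rows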

Within a transaction it works fine. But the problem is with rollback now. The PHP file do_search.php does the query execution: it calls the get_all_words function, which locks the rows using for update inside a transaction, and echoes the result to get_words.php (the file which calls do_search.php through Ajax).

Now the rows should not be unlocked until all the fetched words are processed. So where should I put the rollback, or is there a way to get the transaction id and roll back that transaction after all the words are processed? – kiranking Jan 1 at 20:45

If you can't fit the entire action into one transaction, you'll need to go for explicit, "hand-made" locking. (With pooled connections you won't even be able to rollback, as you may get a different connection on the next page load.) This returns us to the idea of a flag in each row that you update every time you 'lock' your rows for an operation, and then update (reset) again once you're done.

– 9000 Jan 1 at 20:52

I tried both select ... for update and pg_try_advisory_lock and managed to get near my requirement.

    /* rows are locked, but LIMIT is the problem */
    select * from table1 where pg_try_advisory_lock(id) limit 5;

    ...
    $_SESSION['rows'] = $rowcount; // number of rows to process
    ...
    /* after each word is processed */
    $_SESSION['rows'] -= 1;
    ...
    /* and finally unlock the locked rows */
    if ($_SESSION['rows'] === 0)
        select pg_advisory_unlock_all();

But there are two problems with this:

1. As the LIMIT is applied before the lock, different instances keep trying to lock the same rows.
2. I am not sure whether pg_advisory_unlock_all() will unlock only the rows locked by the current instance, or those of all instances.
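Following the derived-table idea from the first answer, one sketch that keeps the LIMIT from cutting the batch before the lock attempts (same illustrative names as above):

    -- try the lock row by row in a stable order; stop after five successes
    select t.*
    from (select * from table1 order by id) t
    where pg_try_advisory_lock(t.id)
    limit 5;

    -- per the manual, pg_advisory_unlock_all() releases only the advisory
    -- locks held by the current session, not those of other sessions:
    select pg_advisory_unlock_all();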


