99 times out of 100, read committed is the right answer. That ensures that you only see changes that have been committed by the other session (and, thus, results that are consistent, assuming you've designed your transactions correctly). But it doesn't impose the locking overhead (particularly in non-Oracle databases) that repeatable read or serializable impose.
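To make the difference concrete, here's a toy illustration (not a real database): a store that tracks committed values separately from one session's in-flight writes, so you can see what a READ UNCOMMITTED reader would observe versus a READ COMMITTED one. The class and its API are invented for illustration only.

```python
# Toy illustration of dirty reads vs. read committed (not a real database).
# "committed" holds data every reader can see; "pending" holds an open
# transaction's writes that may still be rolled back.

class ToyStore:
    def __init__(self):
        self.committed = {}   # visible to READ COMMITTED readers
        self.pending = {}     # one open transaction's uncommitted writes

    def write(self, key, value):
        self.pending[key] = value        # write inside the open transaction

    def commit(self):
        self.committed.update(self.pending)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()

    def read(self, key, isolation="READ COMMITTED"):
        if isolation == "READ UNCOMMITTED" and key in self.pending:
            return self.pending[key]     # dirty read: might be rolled back
        return self.committed.get(key)

store = ToyStore()
store.committed["balance"] = 100
store.write("balance", 50)               # another session's open transaction

print(store.read("balance"))                      # 100: committed data only
print(store.read("balance", "READ UNCOMMITTED")) # 50: dirty read
store.rollback()                                  # the 50 never existed
```

The dirty read returned a value that was subsequently rolled back — exactly the inaccuracy you accept when you drop below READ COMMITTED.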
Very occasionally, you may want to run a report where you are willing to sacrifice accuracy for speed and set a read uncommitted isolation level. That's rarely a good idea, but it is occasionally a reasonably acceptable workaround to lock contention issues. Serializable and repeatable read are occasionally used when you have a process that needs to see a consistent set of data over the entire run regardless of what other transactions are doing at the time. It may be appropriate to set a month-end reconciliation process to serializable, for example, if there is a lot of procedural code, a possibility that users are going to be making changes while the process is running, and a requirement that the process always sees the data as it existed at the time the reconciliation started.
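A rough sketch of what serializable buys that month-end process: the transaction effectively works from the committed state as of its start, so commits made mid-run never change what it sees. This is a conceptual toy, not real database behavior — the snapshot-copy approach below is just a way to picture it.

```python
# Conceptual sketch: a SERIALIZABLE-style reader captures the committed
# state at transaction start and keeps reading from that stable view,
# even while other sessions commit changes during a long-running job.

import copy

committed = {"jan_total": 1000, "feb_total": 2000}

class SnapshotReader:
    def __init__(self, committed):
        # capture committed state as of transaction start
        self.snapshot = copy.deepcopy(committed)

    def read(self, key):
        return self.snapshot.get(key)

recon = SnapshotReader(committed)  # reconciliation transaction begins
committed["jan_total"] = 1500      # another session commits mid-run

print(recon.read("jan_total"))     # 1000: value as of transaction start
print(committed["jan_total"])      # 1500: what a new reader would see
```

Real databases achieve this with locking (SQL Server's SERIALIZABLE) or versioning (Oracle, SQL Server's SNAPSHOT) rather than copying, which is where the overhead differences between engines come from.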
1 Depending on the user count, I wouldn't advise running reports on your primary OLTP system. Have a log-shipped (or otherwise replicated) side server for reporting with a denormalized version of the data – chadmyers Dec 12 '08 at 22:22.
It really depends a lot on how you design your application; the easy answer is to just run at READ_COMMITTED. You can make an argument that if you design your system with it in mind, you could use READ_UNCOMMITTED as the default and only increase the isolation level when you need it. The vast majority of your transactions are going to succeed anyway, so reading uncommitted data won't be a big deal.
The way isolation levels affect your queries depends on your target database. For instance, databases like Sybase and MSSQL must lock more resources when you run READ_COMMITTED than databases like Oracle do.
For SQL Server (and probably most major RDBMS), I'd stick with the default. For SQL Server, this is READ COMMITTED. Anything more and you start overtaxing the DB, anything less and you've got consistency issues.
The default for SQL Server may be READ COMMITTED, but I believe the default for ADO.NET is SERIALIZABLE, in which case that matters more if he's using .NET. – Robert C. Barth Dec 12 '08 at 22:08
@Robert This is not correct. The default is whatever the default is on the SQL Server. When you change the isolation level on the SqlTransaction, for example, it executes SET TRANSACTION ISOLATION LEVEL XXXX on the server.
– chadmyers Dec 14 '08 at 1:11.
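To illustrate the point about what a driver actually sends: setting the isolation level on the client side boils down to emitting a SET TRANSACTION ISOLATION LEVEL statement before your queries. The helper below is hypothetical (not any real driver's API); the statement text is SQL Server syntax.

```python
# Hypothetical sketch of the statement a client library sends when you set
# the isolation level. The function is illustrative, not a real driver API.

LEVELS = {"READ UNCOMMITTED", "READ COMMITTED", "REPEATABLE READ",
          "SNAPSHOT", "SERIALIZABLE"}

def isolation_statement(level):
    level = level.upper()
    if level not in LEVELS:
        raise ValueError(f"unknown isolation level: {level}")
    return f"SET TRANSACTION ISOLATION LEVEL {level}"

print(isolation_statement("serializable"))
# SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
```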
Don't forget about SNAPSHOT, which is right below SERIALIZABLE. It depends on how important it is for the data to be accurate in the reports. It really is a task-by-task thing.