Sometimes ... it depends a lot on how you are using map/reduce ... but in my experience it's never been a problem. There isn't much info on this, but the docs do state that it sometimes locks, yet it "allows substantial concurrent operation." There are a couple of questions in the mongodb-user group asking about this ... the best official response I've seen is: "in 1.4 it yields but isn't as nice as it should be, in 1.5 it's much friendlier to other requests."
That does not mean it doesn't block at all, but compared to db.eval(), which blocks the whole mongod process, it's your best bet. That said, in 1.7.2 and up there is now a nolock option for db.eval() ...
Keep in mind, however, that the most important issue is being glossed over: MapReduce locks the entire JavaScript context, of which there is only one per mongod. Hence, you cannot run multiple simultaneous MapReduce jobs on a single MongoDB server; be careful using it as a query mechanism. – Brendan W. McAdams Apr 6 at 14:04
@Brendan ... great point, thanks. – Justin Jenkins Apr 6 at 23:18
No, mapreduce does not lock the database. See the note here, just after "Using db.eval()"; it explains why mapreduce may be more appropriate than eval, because mapreduce does not block. If you are going to run a lot of mapreduce jobs, you should use sharding, because that way the job can run in parallel across all the shards.
Unfortunately mapreduce jobs can't run on secondaries in a replica set, since the results must be written and replicas are read-only.
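A minimal sketch of that setup, assuming a mongos in front of the cluster and made-up names (mydb, events, a type shard key, and an event_counts output collection):

    // Run against a mongos; "mydb", "events", and "type" are hypothetical names.
    db.adminCommand({ enablesharding: "mydb" });
    db.adminCommand({ shardcollection: "mydb.events", key: { type: 1 } });

    // The map and reduce phases now run in parallel on each shard. The output
    // collection has to be written, which is why the job runs on primaries only.
    db.getSiblingDB("mydb").events.mapReduce(
        function () { emit(this.type, 1); },
        function (key, values) { return Array.sum(values); },
        { out: "event_counts" }
    );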