Leader/Follower is about efficiently dealing with multiple workers. When you have no work (jobs), what are your worker or workers doing? A common, simple approach is to have a single consumer thread dispatch jobs to workers, either by spawning a thread or using a thread pool.
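For comparison with the pattern, here is roughly what that dispatch approach looks like in Java (a sketch of my own, not from the article; `Request` and `handle()` are made-up placeholders):

```java
import java.util.concurrent.*;

class DispatcherExample {
    // Hypothetical unit of work; stands in for whatever a "job" is.
    record Request(String payload) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Single consumer thread: reads each job and hands it off to the pool.
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    Request req = requestQueue.take();  // block until a job arrives
                    workers.submit(() -> handle(req));  // hand off to a worker thread
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();     // exit when interrupted
            }
        });
        dispatcher.start();

        requestQueue.put(new Request("hello"));         // enqueue a job
    }

    static void handle(Request req) {
        System.out.println("Handled: " + req.payload()); // placeholder work
    }
}
```

Note that every job crosses a thread boundary here: the dispatcher reads it, then a pool thread runs it.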
The pattern discussed provides an alternative approach that avoids having to synchronize between the dispatcher and the worker, by having the (leader) thread that gets the job execute the work task itself. It promotes a waiting worker to the leader position to keep the system responsive. Be aware that this article is discussing lower-level mechanisms for waiting for work that do not (easily) support multiple threads waiting on the same "queue" of work. Higher-level constructs like message queues that do support multiple worker threads all performing a blocking read on the same source (AKA competing consumers) may not get the same benefit described.
With a higher level of abstraction comes more programming ease, but typically at the cost of the kind of performance that can be gained from a more low-level approach.

EDIT1: Here's a made-up sample (pseudocode only). Please note that I did not write the article or benchmark it, so I cannot truly speak to how much more performant one is vs. the other. But this hopefully shows the difference in style.

```
// in QueueHandler processing loop
while (true) {
    // read, blocking until a request arrives
    Request req = requestQueue.BlockingRead();

    // We have a unit of work now, but the QueueHandler should not process it,
    // because if it is long-running then no new requests can be handled.
    // So we spawn / dispatch to a thread:
    ThreadPool.QueueWorkItem(req);  // or: new Thread(DoWork, req).Start();

    // At this point we know that the request will get picked up in an unknown
    // but hopefully very short amount of time by a waiting (sleeping/blocking)
    // or new thread, and it will get passed the work. But doing so required the
    // use of thread synchronization primitives that can cause all processors
    // to flush their caches and other expensive stuff.

    // now loop back up to read the next request
}
```

VS:

```
// in Leader
while (true) {
    // I'm the leader; blocking read until a request arrives
    Request req = queue.BlockingRead();

    // We have a unit of work and we are going to process it ourselves.
    // But first we notify a follower:
    Followers.PromoteOne();

    // work on the request in this thread!
    DoWorkOn(req);

    // now that I'm done, wait to be the leader again
    Followers.BlockingWaitToBeLeader();
}
```
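To make the leader loop concrete, here is one way (my own sketch, not code from the article) that the `Followers` helper and the worker loop could be implemented in Java with a lock and condition variable. The names `Followers`, `promoteOne`, and `blockingWaitToBeLeader` mirror the pseudocode; `Runnable` stands in for the request type so the example is self-contained:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Exactly one thread holds the "leader" slot at a time; the rest block
// in blockingWaitToBeLeader() until the current leader promotes one of them.
class Followers {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition promoted = lock.newCondition();
    private boolean leaderSlotFree = false;   // false: some thread is leader

    // Called by the current leader just before it starts working:
    // frees the leader slot and wakes one waiting follower.
    void promoteOne() {
        lock.lock();
        try {
            leaderSlotFree = true;
            promoted.signal();
        } finally {
            lock.unlock();
        }
    }

    // Called by a thread that wants to become leader; blocks until the
    // slot is free, then claims it.
    void blockingWaitToBeLeader() throws InterruptedException {
        lock.lock();
        try {
            while (!leaderSlotFree) {
                promoted.await();
            }
            leaderSlotFree = false;           // claim the leader slot
        } finally {
            lock.unlock();
        }
    }
}

// The loop from the pseudocode above, expressed against that helper.
// Start exactly one worker with startAsLeader = true.
class LeaderWorker implements Runnable {
    private final BlockingQueue<Runnable> queue;
    private final Followers followers;
    private final boolean startAsLeader;

    LeaderWorker(BlockingQueue<Runnable> queue, Followers followers, boolean startAsLeader) {
        this.queue = queue;
        this.followers = followers;
        this.startAsLeader = startAsLeader;
    }

    @Override
    public void run() {
        try {
            if (!startAsLeader) {
                followers.blockingWaitToBeLeader();   // wait for promotion first
            }
            while (true) {
                Runnable req = queue.take();          // I'm the leader: blocking read
                followers.promoteOne();               // hand leadership to a follower
                req.run();                            // do the work on this thread
                followers.blockingWaitToBeLeader();   // rejoin and wait to lead again
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();       // exit cleanly on interrupt
        }
    }
}
```

Notice there is no hand-off of the request between threads: the thread that read it also runs it, which is the contention-avoidance point discussed in the comments below.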
– Nicolas Nov 14 at 17:57

It has less contention, less passing of work between threads. Coordinating thread communication/synchronization often requires the use of locks etc. that have a very negative impact on code running on many processors. – tcarvin Nov 14 at 18:41

But could you point out the concrete difference between this and work queues? I still cannot really see why work queues require more locking. – Nicolas Nov 14 at 18:56

There are work queues of some sort either way; the pattern is about managing the workers. I'll edit my post with some pseudo-code. – tcarvin Nov 14 at 19:22

Cool edit, thanks a lot. – Nicolas Nov 14 at 19:38