Here's one optimisation I can see: in consume(), you don't need to be fetching b->readpos continuously, since the thread calling consume() is the only one that can update it anyway. Because it's volatile, the compiler can't optimise all those fetches away, so you'll need to do it explicitly:

    void * consume (buf_t *b)
    {
        int rp = b->readpos;
        while (rp == b->writepos);          // nothing to consume. Wait
        int next = (rp + 1) % BUF_SIZE;
        void * res = b->buffer[rp];
        b->readpos = next;
        return res;
    }

You should also step through your buffer in strides of at least a cacheline each, otherwise you'll get cachelines ping-ponging between the two CPUs (as one CPU wants the cacheline to read b->buffer[n] and 15 times out of 16 the other invalidates it to write b->buffer[n+1]). Eg:

    #define STRIDE 16
    #define STEPS 2048
    #define BUF_SIZE (STRIDE * STEPS)

    #define TO_INDEX(n) (STRIDE * (((n) + 1) % STEPS) + (((n) + 1) / STEPS))

    void produce (buf_t *b, void * e)
    {
        unsigned wp = b->writepos;
        unsigned next = (wp + 1) % BUF_SIZE;
        while (b->readpos == next);         // queue is full. Wait
        b->buffer[TO_INDEX(wp)] = e;
        b->writepos = next;
    }

    void * consume (buf_t *b)
    {
        unsigned rp = b->readpos;
        while (rp == b->writepos);          // nothing to consume. Wait
        unsigned next = (rp + 1) % BUF_SIZE;
        void * res = b->buffer[TO_INDEX(rp)];
        b->readpos = next;
        return res;
    }

Got to be worth a try, anyway. (Note that as long as STRIDE and STEPS are powers of 2, the scary-looking division and modulus in TO_INDEX() can be optimised to a shift and a bitwise-and, but only if the operands are unsigned - hence I suggest changing those types accordingly.)
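For what it's worth, with the values above (STRIDE = 16 = 2^4, STEPS = 2048 = 2^11) the strength-reduced form the compiler can generate looks roughly like this - TO_INDEX_FAST is just a name invented for this sketch; you don't need to write it yourself:

    /* What TO_INDEX() can be reduced to when STRIDE and STEPS are powers
       of 2 and the operands are unsigned: the modulus becomes a mask and
       the division becomes a shift. */
    #define TO_INDEX_FAST(n) (((((n) + 1u) & (STEPS - 1)) << 4) + (((n) + 1u) >> 11))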
I assume that you are using a machine with more than one processor or core. If not, then your busy waits are going to hurt things. They may hurt anyway if you are running under an OS that decides you don't sleep enough and bumps down your dynamic priority while other programs are running.
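If you do end up on a machine like that, a minimal sketch of softening the busy wait (wait_for_change and the spin count of 1000 are invented for this sketch, not from the question's code) is to spin a little first, then hand the timeslice back:

    #include <sched.h>

    /* Spin briefly in case the other thread is making progress on another
       core; if not, yield so the scheduler doesn't punish us for hogging
       the CPU. */
    static void wait_for_change (volatile unsigned int *pos, unsigned int busy)
    {
        for (int i = 0; i < 1000; i++)
            if (*pos != busy)
                return;
        while (*pos == busy)
            sched_yield();
    }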
You need to collect data on how full your buffer gets: past a certain size, making it bigger starts to hurt your cache too. If you use a global array rather than allocating the buffer from the heap, then the pointer to the buffer becomes a pointer literal and both threads won't have to read that pointer value from the same place in cache, because it will just be shoved into the code.
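To make the global-array point concrete, a sketch of the two variants (buf_t as in the question):

    /* Global: the buffer's address is a link-time constant baked into
       every load and store, so neither thread fetches a shared pointer
       first. */
    static buf_t buffer;

    /* Heap: every access goes through this pointer variable, which both
       threads must read before they can touch the buffer itself. */
    /* buf_t *bufp = malloc(sizeof *bufp); */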
If throughput is what's important to you (at the expense of latency) and cache really is playing a big role, then you might consider letting the consumer lag the producer so that they aren't trying to read and write the same place in the buffer. You may want to change the interface of your consumer function so that it can consume chunks of a cache line in size (or a multiple thereof), in addition to individual or partial cache-line chunks; this plays nicely with the consumer-lagging-the-producer suggestion above. Try to keep the consumption cache-aligned (a sketch follows after the next paragraph).
If you think of the available data as a snake, then it's possible for the head and the tail to be unaligned. You should only consume an unaligned tail when there is no other data to consume. If you can consume any other data in a call to consume(), then you should just leave the tail for the next call.
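A minimal sketch of what such a chunked consume could look like, assuming caf's buf_t layout above; consume_chunk, out, and LINE_ENTRIES are names invented here, and the caller is assumed to provide BUF_SIZE slots in out:

    #define LINE_ENTRIES 16                     /* assumed entries per cache line */

    /* Prefer whole cachelines; touch the partial tail (the line the
       producer may still be writing) only when it is all that's left. */
    unsigned int consume_chunk (buf_t *b, void **out)
    {
        unsigned int rp = b->readpos;
        while (rp == b->writepos);              /* nothing to consume. Wait */

        unsigned int wp = b->writepos;
        unsigned int avail = (wp + BUF_SIZE - rp) % BUF_SIZE;
        unsigned int whole = (avail / LINE_ENTRIES) * LINE_ENTRIES;
        unsigned int n = whole ? whole : avail; /* unaligned tail only if forced */

        for (unsigned int i = 0; i < n; i++)
            out[i] = (void *) b->buffer[(rp + i) % BUF_SIZE];
        b->readpos = (rp + n) % BUF_SIZE;
        return n;
    }

When data is plentiful this copies only whole lines, so the read position stays line-aligned; after a forced tail take it may end up unaligned, which is exactly the snake situation described above.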
Other than that, and what has been mentioned by caf, I'd have to suspect that whatever is happening outside of this code is playing a bigger role.
I've implemented the optimizations in the first code block of caf's answer. They actually gave a small speedup (thank you), however it is not yet enough. The second optimization, which results in the cache being filled by column instead of by row, results in worse performance.
The idea of the consumer lagging behind the producer did not give any speedup. Now I'm at 300%. An additional change I made was a hack regarding the volatile writepos and readpos variables:

    void produce (void * e)
    {
        unsigned int oldpos = buffer.writepos;
        unsigned int next = (oldpos + 1) % BUF_SIZE;
        while (next == buffer.rpos) {       // rpos is not volatile
            buffer.rpos = buffer.readpos;   // refresh local copy of the volatile readpos
            usleep(1);
        }
        buffer.buffer[oldpos] = e;
        buffer.writepos = next;
    }

and similar for consume().
Additional changes to the struct led to the following new buffer struct (in the global scope, as suggested in one answer, instead of on the heap):

    #define STRIDE 16
    #define STEPS 524288
    #define BUF_SIZE (STRIDE * STEPS)

    struct buf_t {
        volatile unsigned int writepos;
        int offset[STRIDE - 1];             // padding: keep fields on separate cache lines
        unsigned int wpos;
        int offset2[STRIDE - 1];
        volatile void * buffer[BUF_SIZE];
        int offset4[STRIDE];
        volatile unsigned int readpos;
        int offset3[STRIDE - 1];
        unsigned int rpos;
    };

This gave the 300% speedup which was missing and pushed it below the performance limit I had to achieve. If you have some additional hacks which could be used to increase the performance further, don't hesitate to post them too :-) Thanks for the help.