Thread for interprocess communication in OpenMP?



I have an OpenMP parallelized program that looks like this:

    ...
    #pragma omp parallel
    {
        // initialize threads
        #pragma omp for
        for(...)
        {
            // Work is done here
        }
    }
    ...

Now I'm adding MPI support. What I will need is a thread that handles the communication, in my case calls GatherAll all the time and fills/empties a linked list for receiving/sending data from the other processes. That thread should send/receive until a flag is set.

So right now there is no MPI stuff in the example; my question is about the implementation of that routine in OpenMP. How do I implement such a thread? For example, I tried to introduce a single directive here:

    ...
    int kill = 0;
    #pragma omp parallel shared(kill)
    {
        // initialize threads
        #pragma omp single nowait
        {
            while(!kill)
                send_receive();
        }
        #pragma omp for
        for(...)
        {
            // Work is done here
        }
        kill = 1;
    }

but in this case the program gets stuck, because the implicit barrier after the for loop waits for the thread in the while loop above.
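For reference (this is not part of the example above), the communication routine itself would look roughly like this; the buffer arguments are only illustrative and the linked-list handling is left out:

    #include <mpi.h>

    // Simplified sketch: every process contributes one item and receives one
    // item from every process (inbox must have room for one int per rank).
    // The real routine would refill outbox from the send list and append the
    // contents of inbox to the receive list.
    void send_receive(int *outbox, int *inbox)
    {
        MPI_Allgather(outbox, 1, MPI_INT,
                      inbox,  1, MPI_INT, MPI_COMM_WORLD);
    }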

Thank you, rugermini.


You could try adding a nowait clause to your single construct.

EDIT: responding to the first comment. If you enable nested parallelism for OpenMP, you might be able to achieve what you want by making two levels of parallelism. At the top level you have two concurrent parallel sections, one for the MPI communications, the other for local computation. That last section can itself be parallelized, which gives you a second level of parallelisation.

Only threads executing this level will be affected by barriers in it.

    #include <stdio.h>
    #include <omp.h>

    int main()
    {
        int kill = 0;

        omp_set_nested(1);   /* enable the second, nested level of parallelism */

        #pragma omp parallel sections
        {
            #pragma omp section
            {
                while (kill == 0) {
                    /* manage MPI communications */
                }
            }

            #pragma omp section
            {
                #pragma omp parallel
                #pragma omp for
                for (int i = 0; i < 10000; i++) {   /* placeholder bound */
                    /* local computation is done here */
                }
                kill = 1;   /* stop the communication section */
            }
        }

        return 0;
    }
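One caveat about the sketch above: the kill flag is written by one section and polled by the other without any synchronisation, so the compiler is free to keep it in a register. One possible way to make the handshake explicit is with OpenMP atomic reads and writes (wait_for_kill and signal_kill below are hypothetical helper names, not from the original code):

    #include <omp.h>

    /* Illustrative only: poll and set the shared flag with OpenMP atomics so
       the update is guaranteed to become visible to the spinning thread. */
    void wait_for_kill(int *kill)
    {
        int done = 0;
        while (!done) {
            /* send_receive() would go here */
            #pragma omp atomic read
            done = *kill;
        }
    }

    void signal_kill(int *kill)
    {
        #pragma omp atomic write
        *kill = 1;
    }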

Sorry, forgot to write it. I have that clause in the code. – rugermini Mar 13 at 14:17

Thank you everybody, that seems to work! – rugermini Mar 13 at 15:17

Hmmm. If you are indeed adding MPI 'support' to your program, then you ought to be using mpi_allgather, as mpi_gatherall does not exist. Note that mpi_allgather is a collective operation, that is, all processes in the communicator call it.
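To make the collective nature concrete, here is a minimal (illustrative) example: every rank contributes one value, and every rank ends up with the values from all ranks.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank * 10;                   /* this rank's contribution    */
        int *all = malloc(size * sizeof(int));  /* gathered from every rank    */

        /* collective: every process in MPI_COMM_WORLD must reach this call */
        MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < size; i++)
                printf("from rank %d: %d\n", i, all[i]);

        free(all);
        MPI_Finalize();
        return 0;
    }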

You can't have a process gathering data while the other processes do whatever it is they do. What you could do is use MPI single-sided communications to implement your idea; this will be a little tricky but no more than that if one process only reads the memory of other processes. I'm puzzled by your use of the term 'thread' wrt MPI.
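As for the one-sided suggestion, a rough sketch of the idea follows (illustrative only; fences are used here for simplicity, whereas a real implementation of the idea would more likely use passive-target synchronisation with MPI_Win_lock/MPI_Win_unlock so the target really does nothing):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, exposed, fetched = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        exposed = rank * 100;   /* the value this rank makes readable */

        /* every rank exposes one int through a window */
        MPI_Win_create(&exposed, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        /* read the value exposed by the next rank (wrapping around) */
        MPI_Get(&fetched, 1, MPI_INT, (rank + 1) % size, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);

        printf("rank %d read %d from rank %d\n", rank, fetched, (rank + 1) % size);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }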

I fear that you are confusing OpenMP and MPI; one implementation of the latter is called Open MPI, but despite the name it is as different from OpenMP as chalk from cheese. MPI programs are written in terms of processes, not threads.

The typical OpenMP implementation does indeed use threads, though the details are generally well-hidden from the programmer. I'm seriously impressed that you are trying, or seem to be trying, to use MPI 'inside' your OpenMP code. This is exactly the opposite of work I do, and see others do on some seriously large computers.

The standard mode for such 'hybrid' parallelisation is to write MPI programs which call OpenMP code. Many of today's very large computers comprise collections of what are, in effect, multicore boxes. A typical approach to programming one of these is to have one MPI process running on each box, and for each of those processes to use one OpenMP thread for each core in the box.
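In code, that pattern boils down to something like this minimal sketch (illustrative only; the per-core computation is a stub, and inter-node MPI communication would be funneled through the master thread):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* FUNNELED: only the master thread of each process makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* one OpenMP thread per core does the local computation here */
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        /* MPI communication between the boxes happens outside, or is
           funneled through the master thread of, the parallel region */
        MPI_Finalize();
        return 0;
    }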

This is exactly what I want to do: one MPI process per node, many OpenMP threads per process. And you're right, it's an HPC code. Sorry for writing GatherAll instead of allgather. – rugermini Mar 13 at 14:33

I think I see the problem; okay, that was confusing. In my code above there is no MPI-related stuff. I just tried to mention my motivation for asking the question. – rugermini Mar 13 at 14:38

You have to be careful, because you can't just have your MPI calling thread "skip" the omp for loop; all threads in the thread team have to go through the for loop.

There are a couple of ways you could do this. With nested parallelism and tasks, you could launch one task to do the message passing and another to call a work routine which has an omp parallel for in it:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    void work(int rank) {
        const int n = 14;
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            /* work is done here */
            printf("%d: working on item %d\n", rank, i);
        }
    }

    void sendrecv(int rank, int sneighbour, int rneighbour, int *data) {
        const int tag = 1;
        /* exchange one int with the neighbouring ranks in a ring */
        MPI_Sendrecv(&rank, 1, MPI_INT, sneighbour, tag,
                     data,  1, MPI_INT, rneighbour, tag,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    int main(int argc, char **argv) {
        int rank, size;
        int sneighbour, rneighbour;
        int data;
        int got;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &got);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        omp_set_nested(1);

        /* neighbours in a ring: send to rank+1, receive from rank-1 */
        sneighbour = rank + 1;
        if (sneighbour >= size) sneighbour = 0;
        rneighbour = rank - 1;
        if (rneighbour < 0) rneighbour = size - 1;

        #pragma omp parallel
        {
            #pragma omp single
            {
                /* one task does the message passing ... */
                #pragma omp task
                sendrecv(rank, sneighbour, rneighbour, &data);

                /* ... and another runs the (internally parallel) work routine */
                #pragma omp task
                work(rank);
            }
        }

        printf("rank %d got %d\n", rank, data);

        MPI_Finalize();
        return 0;
    }
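Another approach that is sometimes used (sketched here under my own assumptions, not necessarily the second option the answer had in mind) is to keep a single thread team, let the master thread funnel the MPI calls, and give the worksharing loop a dynamic schedule so that the other threads take iterations while the master is busy communicating; the master then joins the loop once it finishes:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* hypothetical stand-ins for the real communication and work routines */
    static void do_communication(int rank) { printf("rank %d: communicating\n", rank); }
    static void do_work_item(int rank, int i) { printf("rank %d: item %d\n", rank, i); }

    int main(int argc, char **argv)
    {
        int provided, rank;
        const int n = 14;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* only the master thread touches MPI (consistent with FUNNELED) */
            if (omp_get_thread_num() == 0)
                do_communication(rank);

            /* every thread still encounters the for construct; the dynamic
               schedule lets the others absorb iterations while thread 0 is
               busy, and thread 0 helps once it is done */
            #pragma omp for schedule(dynamic)
            for (int i = 0; i < n; i++)
                do_work_item(rank, i);
        }

        MPI_Finalize();
        return 0;
    }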

