Making sure two processes interleave?

As this is for testing purposes, you could place sched_yield(); calls after every line of code in the child processes. Another potential idea is to have a parent process ptrace() the child processes and use PTRACE_SINGLESTEP to interleave the two processes' execution on an instruction-by-instruction basis.
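
For illustration, here is a minimal, self-contained sketch of the sched_yield() idea (my own example, not the poster's code): two forked children increment a shared counter and yield after every statement, so the scheduler gets a chance to switch between them at each step. The counter, iteration count, and anonymous shared mapping are arbitrary choices for the demo.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Counter shared by both children via an anonymous shared mapping. */
        volatile int *counter = mmap(NULL, sizeof *counter,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *counter = 0;

        for (int c = 0; c < 2; c++) {
            if (fork() == 0) {
                for (int i = 0; i < 1000; i++) {
                    int tmp = *counter;  sched_yield();  /* yield after every "line" */
                    tmp = tmp + 1;       sched_yield();
                    *counter = tmp;      sched_yield();
                }
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;
        /* A value below 2000 indicates lost updates, i.e. the children interleaved. */
        printf("final counter: %d (2000 means no lost update)\n", *counter);
        return 0;
    }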

+1 for opening phrase. You might want to elaborate on why not to do this except for testing, though. – R.. May 6 at 11:02

Since you need random interleaving, here's a horrible hack to do it:

1. Immediately after forking, send a SIGSTOP to each child process.
2. Set your parent process to real-time priority with sched_setscheduler(). This will allow you to use more fine-grained timers.
3. Send a SIGCONT to one of the child processes.
4. Loop: wait a random, short time; send a SIGSTOP to the currently running child and a SIGCONT to the other; repeat.

This will help force execution to interleave. It will also make things quite slow. You may also want to try using sched_setaffinity to assign each process to a different CPU (if you have a dual-core or hyperthreaded CPU) - this will cause them to effectively run simultaneously, modulo wait times for I/O.

I/O wait times (which could cause them to wait for the hard disk, at which point they're likely to wake up sequentially and thus not interleave) can be avoided by making sure whatever data they're manipulating is on a ramdisk (on Linux, use tmpfs). If this is too coarse-grained for you, you can use ptrace's PTRACE_SINGLESTEP operation to step one CPU instruction at a time, interleaving as you see fit.
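
Here is a minimal sketch of the stop/continue loop described above (my own code, not the answerer's; child_work() and the 1-5 ms delays are placeholder choices). One deliberate tweak: each child stops itself right after fork(), which avoids the race where the parent's SIGSTOP arrives after the child has already done some work.

    #include <sched.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    static void child_work(int id)
    {
        volatile long x = 0;
        for (long i = 0; i < 100000000L; i++)   /* stand-in for the real code under test */
            x += id;
        _exit(0);
    }

    static pid_t spawn_stopped(int id)
    {
        pid_t pid = fork();
        if (pid == 0) {
            raise(SIGSTOP);                      /* wait until the parent says go */
            child_work(id);
        }
        return pid;
    }

    int main(void)
    {
        /* Optional: real-time priority so the parent's timers fire promptly.
           This needs privileges (e.g. root or CAP_SYS_NICE); failure is ignored here. */
        struct sched_param sp = { .sched_priority = 1 };
        sched_setscheduler(0, SCHED_FIFO, &sp);

        pid_t pid[2];
        pid[0] = spawn_stopped(1);
        pid[1] = spawn_stopped(2);

        int running = 0;
        kill(pid[running], SIGCONT);

        /* Keep flipping which child runs until one of them exits. */
        while (waitpid(-1, NULL, WNOHANG) == 0) {
            struct timespec ts = { 0, (rand() % 5 + 1) * 1000000L };  /* 1-5 ms */
            nanosleep(&ts, NULL);
            kill(pid[running], SIGSTOP);
            running = 1 - running;
            kill(pid[running], SIGCONT);
        }

        /* Let whichever child is left finish, then reap it. */
        kill(pid[0], SIGCONT);
        kill(pid[1], SIGCONT);
        while (wait(NULL) > 0)
            ;
        return 0;
    }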

Thanks a lot. Tried as suggested and I've added my code as an edit to my question. sched_setscheduler() seems to hang the process. Will be very grateful for some help. – Juggler May 7 at 11:57

@Lipika, the order of operations I suggested is important - you're setting a FIFO scheduler before forking, which means the child process will be real-time, and unless it sleeps the parent process may never get a chance to run again. – bdonlan May 7 at 19:15

Thanks again, but I think the problem is that my process is not a privileged process and hence is not able to change its priority. sched_getscheduler() returns 0. – Juggler May 8 at 5:17

Sorry for changing my correct answer, but the problem I found here was the "wait a random, short time" step. Whenever I send SIGCONT, the entire child process runs before it is stopped again by the parent and the next child process is scheduled. Is it because the child process finishes in less time than the parent sleeps? Tried using nanosleep() for a very, very small interval of time but did not succeed in getting the two processes to interleave. Would love to know why, but till then I will go with the sched_yield() answer. – Juggler May 9 at 4:44

@Lipika, ah, I thought you were dealing with a long-running process. I added a bit on ptrace; sched_yield() doesn't guarantee interleaving, as process A might complete before process B finishes starting up... – bdonlan May 11 at 1:34
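
To make the ptrace suggestion concrete, here is a minimal sketch (my own code, not bdonlan's) of a parent that single-steps two traced children in strict alternation with PTRACE_SINGLESTEP. work() is a stand-in for the real code under test; since every instruction costs a round trip through the tracer, this is very slow, so keep the workload small.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void work(int id)
    {
        volatile long x = 0;
        for (int i = 0; i < 1000; i++)
            x += id;                 /* placeholder for the real racy code */
        _exit(0);
    }

    static pid_t spawn_traced(int id)
    {
        pid_t pid = fork();
        if (pid == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            raise(SIGSTOP);          /* stop until the parent starts stepping us */
            work(id);
        }
        return pid;
    }

    int main(void)
    {
        pid_t pid[2];
        int status, alive = 2;

        pid[0] = spawn_traced(1);
        pid[1] = spawn_traced(2);
        waitpid(pid[0], &status, 0); /* wait for the initial SIGSTOPs */
        waitpid(pid[1], &status, 0);

        while (alive > 0) {
            for (int i = 0; i < 2; i++) {
                if (pid[i] < 0)
                    continue;
                /* Run exactly one instruction of child i, then wait for the trap. */
                ptrace(PTRACE_SINGLESTEP, pid[i], NULL, NULL);
                waitpid(pid[i], &status, 0);
                if (WIFEXITED(status)) {
                    printf("child %d finished\n", i + 1);
                    pid[i] = -1;
                    alive--;
                }
            }
        }
        return 0;
    }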

If you need to synchronize them and they are your own processes, use semaphores. If you do not have access to the source, then there is no way to synchronize them.
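
As a minimal illustration (my own sketch; the semaphore name "/interleave_demo" and the loop are made up), here is one way two related processes can serialize access to a critical section with a POSIX named semaphore. With two semaphores you could even force a strict A-B-A-B alternation.

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* A single named semaphore shared by both processes; initial value 1
           makes it act like a cross-process mutex.  Link with -pthread. */
        sem_t *sem = sem_open("/interleave_demo", O_CREAT, 0600, 1);

        pid_t pid = fork();
        for (int i = 0; i < 5; i++) {
            sem_wait(sem);                           /* enter critical section */
            printf("%s: step %d\n", pid ? "parent" : "child", i);
            sem_post(sem);                           /* leave critical section */
        }

        if (pid) {                                   /* parent reaps and cleans up */
            wait(NULL);
            sem_close(sem);
            sem_unlink("/interleave_demo");
        }
        return 0;
    }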

If your aim is to do concurrency testing, I know of only two techniques:

1. Test exact scenarios using synchronization. For example, process 1 opens a connection and executes a query, then process 2 comes in and executes a query, then process 1 gets active again and gets the results, etc. You do this with the synchronization techniques mentioned by others. However, getting good test scenarios is very difficult; I have rarely used this method in the past.

2. In random you trust: fire up a high number of test processes that execute a long-running test suite. I used this method for both multithreading and multiprocess testing (my case was testing device driver access from multiple processes without blue-screening). Usually you want to make the number of processes and the number of iterations of the test suite per process configurable, so that you can either do a quick pass or do a longer test before a release (running this kind of test with 10 processes for 10-12 hours was not uncommon for us). A usual run for this sort of testing is measured in hours: you just fire up the processes, let them run for a few hours, and hope that they will catch all the timing windows.

The interleaving is usually handled by the OS, so you don't really need to worry about it in the test processes.

Be aware, however, that some of the worst race conditions, especially ones that involve being interrupted exactly between two consecutive CPU instructions, are more like once-in-a-year or once-in-a-decade events. Testing is good, but it's no substitute for auditing your concurrency logic and working out scenarios on paper where you can manually invoke each potential race situation. Of course, OP's idea of inserting yields everywhere could allow this kind of once-in-a-decade race to be found by testing... – R.. May 6 at 11:06

I agree. You should do both: analyze your concurrency logic, and do exhaustive testing. If you are doing analysis, you can create test cases out of the analysis, but there is still a lot to be said for random testing; its purpose in life is to exercise the interactions you haven't thought of. – vhallac May 6 at 15:39

Oh, I know another technique: use valgrind's thread tool. Of course, this doesn't substitute for proper thinking about the problem. – ninjalj May 7 at 12:02

Job control is much simpler in Bash than in C. Try this:

    #!/bin/bash

    stop () {
        echo "$1 stopping"
        kill -SIGSTOP $2
    }

    cont () {
        echo "$1 continuing"
        kill -SIGCONT $2
    }

    replay1 () {
        while sleep 1 ; do echo "replay 1 running" ; done
    }

    replay2 () {
        while sleep 1 ; do echo "replay 2 running" ; done
    }

    replay1 &
    P1=$!
    stop "replay 1" $P1

    replay2 &
    P2=$!
    stop "replay 2" $P2

    trap "kill $P1;kill $P2" EXIT

    while sleep 1 ; do
        cont "replay 1" $P1
        cont "replay 2" $P2
        sleep 3
        stop "replay 1" $P1
        stop "replay 2" $P2
    done

The two processes run in parallel:

    $ ./interleave.sh
    replay 1 stopping
    replay 2 stopping
    replay 1 continuing
    replay 2 continuing
    replay 2 running
    replay 1 running
    replay 1 running
    replay 2 running
    replay 1 stopping
    replay 2 stopping
    replay 1 continuing
    replay 2 continuing
    replay 1 running
    replay 2 running
    replay 2 running
    replay 1 running
    replay 2 running
    replay 1 running
    replay 1 stopping
    replay 2 stopping
    replay 1 continuing
    replay 2 continuing
    replay 1 running
    replay 2 running
    replay 1 running
    replay 2 running
    replay 1 running
    replay 2 running
    replay 1 stopping
    replay 2 stopping
    ^C

Thanks a lot, but I need a C program as it's part of a much larger project, and these processes make system calls that are processed by my own system call wrappers. – Juggler May 8 at 14:42
