I will try to answer that question. First, some background. Each level of recursion is stored in an area of memory known as the stack.
Unfortunately, the system has to allocate stack space in advance, without knowing how much your program will actually need. That's why too much recursion causes a "maximum recursion depth exceeded" error: your program has used up all of that stack space. Each thread needs its own stack to store the chain of functions currently executing in that thread.
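You can see both halves of this in a few lines. This is just an illustrative sketch (the `countdown` function is made up for the demo): the limit is queryable, and blowing past it raises `RecursionError` rather than crashing the process.

```python
import sys

def countdown(n):
    """Recurse once per level; each call pushes another frame onto the stack."""
    if n == 0:
        return 0
    return countdown(n - 1)

print(sys.getrecursionlimit())  # usually 1000 by default

try:
    countdown(10**6)            # far deeper than the limit allows
except RecursionError as exc:
    print(exc)                  # "maximum recursion depth exceeded"
```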
In a single threaded program, the system can afford to give a big chunk of memory to the stack for that one thread. In a multi-threaded program, the system has to be a bit more conservative and it gives only a small stack to each thread. Otherwise, a program with many threads could quickly use up all system memory just with stack space (most of which won't be used).
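CPython does let you ask for a bigger per-thread stack via `threading.stack_size`, which applies to threads created after the call. A minimal sketch, assuming a 16 MiB request is acceptable on your platform (sizes and minimums vary):

```python
import threading

def depth(n=0):
    """Recurse until RecursionError, returning the depth that was reached."""
    try:
        return depth(n + 1)
    except RecursionError:
        return n

# Request a larger stack (here 16 MiB) for threads created after this call.
# This is a sketch, not a tuned value; note the reachable depth is still
# capped by Python's recursion limit, not only by the raw stack size.
threading.stack_size(16 * 1024 * 1024)

result = []
t = threading.Thread(target=lambda: result.append(depth()))
t.start()
t.join()
print(result[0])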
All of this is done by the operating system and/or by the C library that Python (more precisely, CPython) runs on top of. Python tries hard to prevent you from using the entire C stack, because that would cause a hard crash rather than a catchable exception. You can tell Python how to behave with sys.setrecursionlimit, but that doesn't change the actual amount of stack space available.
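In other words, `sys.setrecursionlimit` only moves Python's safety fence, it allocates nothing. A short sketch of the API:

```python
import sys

# Raising the limit lets Python attempt deeper recursion, but it does not
# allocate any extra stack: set it too high and recursion can exhaust the
# real C stack and segfault the process instead of raising RecursionError.
old = sys.getrecursionlimit()
sys.setrecursionlimit(old * 2)
print(sys.getrecursionlimit())
sys.setrecursionlimit(old)  # restore the original limit
```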
On a unix-ish system with a bash shell, you may be able to change the stack size with the ulimit -s command. Type help ulimit at your bash shell prompt for more information.
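For example, in a bash session (the specific size here is just an illustration; limits and defaults vary by system):

```shell
ulimit -s          # print the current stack limit in KiB, e.g. 8192
ulimit -s 16384    # raise it to 16 MiB for this shell and its children
ulimit -s          # verify the new limit
```

Any Python process started from that same shell inherits the larger stack.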
If not, I'd suggest using the built-in sorting mechanisms; they're quite good for the vast majority of cases and don't suffer from a recursion depth problem. If you're looking at extremely large datasets, look at the various containers and algorithms available from scipy and numpy. If it's purely out of curiosity about implementing the routine then, as Marcelo suggests in the comments, we'll need to see your code.
It is my own homework project. The quicksort function works well when not in a multithreaded environment. My question is why multiple threads affect the recursion depth.
The following is Quicksort code. – chnet Apr 26 '10 at 4:03.
The problem you are having is that a recursive function uses memory for every level of recursion, so with a large number of elements, and thus a large number of recursive calls, you run out of memory. This explains why raising the recursion limit crashes your program: you're asking for more memory than you have. If you really want to implement quicksort for a large number of elements, read the Wikipedia article on quicksort, specifically the section on memory usage.
Otherwise, as Nathan suggested, Python already has a built-in sorted() function. Unless this is homework or curiosity, I would highly suggest using that.
It is a homework project; I want to implement quicksort in a multithreaded environment. The quicksort works well in a single thread. It seems that multiple threads affect the recursion depth.
– chnet Apr 26 '10 at 4:08.
You are using a recursive implementation of quicksort. You want to implement quicksort using iteration instead. Recursion isn't scalable in Python (in CPython at least), so for larger input it will fail.
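Since the question doesn't include the original code, here is one possible sketch of what "quicksort using iteration" can look like: the recursion is replaced by an explicit list of (lo, hi) index ranges still waiting to be partitioned, so Python's recursion limit never comes into play.

```python
def quicksort(items):
    """Sort items in place using an explicit stack of index ranges
    instead of recursive calls (Hoare-style partition, middle pivot)."""
    stack = [(0, len(items) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = items[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:
            while items[i] < pivot:
                i += 1
            while items[j] > pivot:
                j -= 1
            if i <= j:
                items[i], items[j] = items[j], items[i]
                i += 1
                j -= 1
        # Instead of recursing, push the two sub-ranges for later processing.
        stack.append((lo, j))
        stack.append((i, hi))

data = [5, 3, 8, 1, 9, 2, 2]
quicksort(data)
print(data)  # → [1, 2, 2, 3, 5, 8, 9]
```

The same list-of-ranges idea also makes the multithreaded version easier to reason about: worker threads can pull ranges from a shared queue instead of recursing.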
You can increase the recursion limit, but this only lets you scale over a larger range; it doesn't make your implementation truly scale, and it comes at the cost of a possible segfault if you recurse too deeply. The same approach works (or rather, doesn't really work) for multithreaded code; you just hit the problem sooner, because each thread gets a smaller stack and so supports less recursion.
All in all, it's a losing proposition: use iteration instead. Also, you are using threads (or planning to), which is usually a bad sign: threads are confusing, dangerous, and hard to get right.
What's more, threads in CPython do not give you parallel execution of Python code, if that's what you were expecting, because of the Global Interpreter Lock. Using threads for a quicksort implementation, especially in Python, will probably prove less than ideal. (If you are required to do it, you should at least step back and understand that it might not be the best approach.)