Which sorting algorithms give a near/approximate sort sooner?

You may want to check out the Shell sort algorithm. AFAIK it is the only algorithm you can use with a subjective comparison (meaning that you won't have any hint about median values or the like) that gets nearer to the correct sort at every pass. Here is some more information: en.wikipedia.org/wiki/Shell_sort.
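
For reference, here is a minimal sketch of Shell sort written against an arbitrary pairwise comparison callback (is_less is a name introduced here; it could just as well be a human answering "does a come before b?"). The halving gap sequence below is only one common choice, assumed for simplicity.

    def shell_sort(items, is_less):
        """Shell sort with a halving gap sequence.

        is_less(a, b) is any pairwise comparison (e.g. a human judgment).
        After each full pass with a given gap the list is closer to sorted,
        so stopping early still leaves a usefully ordered result.
        """
        n = len(items)
        gap = n // 2
        while gap > 0:
            # Gapped insertion sort: every slice items[i::gap] becomes sorted.
            for i in range(gap, n):
                current = items[i]
                j = i
                while j >= gap and is_less(current, items[j - gap]):
                    items[j] = items[j - gap]
                    j -= gap
                items[j] = current
            gap //= 2
        return items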

I would suggest some version of quicksort. If you know the range of the data you want to sort, you can select pivot elements cleverly and possibly divide the problem into more than two parts at a time.
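
As a sketch of how this yields an approximate order early, here is a depth-limited quicksort; max_depth and is_less are names introduced here, and the random pivot merely stands in for whatever cleverer pivot guess is available. Each additional level of partitioning moves every element closer to its final position, so even a shallow cutoff gives a roughly ordered list.

    import random

    def approximate_quicksort(items, is_less, max_depth):
        """Quicksort cut off after max_depth partitioning levels.

        Stopping early leaves each element inside the right region of the
        list even if the regions themselves are not yet internally sorted.
        """
        if len(items) <= 1 or max_depth == 0:
            return items
        pivot = random.choice(items)  # a guessed pivot; could come from a small user-ranked sample
        less, equal, greater = [], [], []
        for x in items:
            if is_less(x, pivot):
                less.append(x)
            elif is_less(pivot, x):
                greater.append(x)
            else:
                equal.append(x)
        return (approximate_quicksort(less, is_less, max_depth - 1)
                + equal
                + approximate_quicksort(greater, is_less, max_depth - 1))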

Quicksort, maybe even with "guessed" pivots, should give pretty well-ordered lists after a few recursions and can easily be carried out by human beings using buckets. – stephan May 27 '09 at 6:13
I did consider having more than a binary partition (this related question stackoverflow.com/questions/913624/… is relevant). – James Tauber May 27 '09 at 6:17
I thought that the comparison was subjective, but if you can guess a good pivot element (maybe because it's subjective but "predictable"?) then you should use quicksort. – MartinodF May 27 '09 at 6:24
It's subjective and typically non-numerical, so there is no a priori good pivot. However, I could ask the user to rank a small random subset and then use the middle item as the initial pivot. – James Tauber May 27 '09 at 6:27
Didn't think of that. Then you are good to go for quicksort ;) – MartinodF May 27 '09 at 6:31

No. Try not. Sort, or sort not. There is no try. :-)

My completely unscientific and visual survey of the sorts on this page indicates that "Comb Sort" looks good. It seems to reach a better approximation with every pass.
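
For concreteness, a small comb sort sketch (the 1.3 shrink factor is the commonly quoted choice, assumed here); every pass with a smaller gap tightens the approximation until the final gap-1 passes finish the job.

    def comb_sort(items, is_less):
        """Comb sort: bubble sort with a shrinking comparison gap.

        Comparing elements gap apart moves badly misplaced elements a long
        way in one pass, so the list looks more sorted after every pass.
        """
        n = len(items)
        gap = n
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))  # commonly used shrink factor
            swapped = False
            for i in range(n - gap):
                if is_less(items[i + gap], items[i]):
                    items[i], items[i + gap] = items[i + gap], items[i]
                    swapped = True
        return items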

This will be N*b runtime, where b is the number of bits you decide to examine; the more bits you examine, the more sorted the result will be.

    unsorted:
     5 0101
     8 1000
     4 0100
    13 1101
     1 0001

    after 1 bit (N):
     5 0101
     1 0001
     4 0100
    13 1101
     8 1000

    after 2 bits (2N):
     1 0001
     5 0101
     4 0100
     8 1000
    13 1101

and so on...
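
A sketch of that idea, under this answer's assumption that the items have integer keys: one MSD-style bucketing pass over only the top b bits. The function and parameter names here are introduced for illustration.

    def partial_msd_radix_sort(values, total_bits, bits_to_examine):
        """Group values by their top bits_to_examine bits (MSD radix).

        Examining more bits costs more work but leaves the list closer to
        fully sorted; examining all bits sorts it completely.
        """
        shift = total_bits - bits_to_examine
        buckets = {}
        for v in values:
            buckets.setdefault(v >> shift, []).append(v)
        result = []
        for key in sorted(buckets):  # few buckets, so ordering them is cheap
            result.extend(buckets[key])
        return result

    # Reproduces the listing above: after 2 of 4 bits, [5, 8, 4, 13, 1] becomes [1, 5, 4, 8, 13].
    print(partial_msd_radix_sort([5, 8, 4, 13, 1], total_bits=4, bits_to_examine=2))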

The OP is using humans to do pairwise comparison. His objects probably don't have "bits" in the radix-sort sense. – Antti Rasinen May 27 '09 at 6:19
My objects are typically non-numerical. – James Tauber May 27 '09 at 6:28
Chances are you at least have some key that you are sorting on that could be handled in a bitwise way? If not an int, go off the first b bits of the ASCII representation? Unicode and other encodings would degrade pretty quickly. – nategood May 27 '09 at 6:47
No, the items are being sorted by subjective pairwise comparison by humans; there's no key. – James Tauber May 27 '09 at 13:49
Ah, okay. Missed that. But for future reference, when you're willing to trade off "sortedness" for performance, it's an idea. – nategood May 27 '09 at 16:19

I devised an N lg N sorting algorithm I called "tournament sort" some time ago, which finds output items in order (i.e. it starts by finding the first item, then the second item, etc.). I'm not sure it's entirely practical, since the bookkeeping overhead exceeds that of quicksort, merge sort, etc., but in benchmarking with 1,000,000 random items the comparison count actually came in below a standard library quicksort implementation (though I'm not sure how it would do against newer ones).

For purposes of my algorithm, the "score" of each item is the number of items that it's known to be better than. Initially each item has a score of 0. When two items are compared, the better item adds to its score the score of the other item, plus one.

To run the algorithm, declare all items to be "eligible", and as long as more than one eligible item remains, compare the two eligible items with the lowest score, and make the loser "ineligible". Once all but one item has been declared ineligible, output the one remaining item and then declare eligible all the items which that item had "beaten". The priority queue required to compare the two lowest-scoring items poses a somewhat annoying level of bookkeeping overhead, but if one were sorting things that were expensive to compare, the algorithm may be interesting.
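
Here is one possible reading of the above in Python; the names (score, beaten, is_better) are introduced here, and the two lowest-scoring eligible items are found by a simple scan rather than the priority queue a real implementation would keep.

    import heapq

    def tournament_sort(items, is_better):
        """Sketch of the "tournament sort" described above.

        score[i]  = number of items that item i is known to be better than.
        beaten[i] = items that lost directly to item i.
        Repeatedly compare the two lowest-scoring eligible items and make the
        loser ineligible; when one eligible item remains it is the best
        remaining item, so output it and re-activate everything it had beaten.
        """
        score = {i: 0 for i in range(len(items))}
        beaten = {i: [] for i in range(len(items))}
        eligible = set(range(len(items)))
        output = []

        while eligible:
            while len(eligible) > 1:
                # A real priority queue would avoid rescanning; nsmallest keeps the sketch short.
                a, b = heapq.nsmallest(2, eligible, key=lambda i: score[i])
                if is_better(items[a], items[b]):
                    winner, loser = a, b
                else:
                    winner, loser = b, a
                score[winner] += score[loser] + 1
                beaten[winner].append(loser)
                eligible.remove(loser)
            champ = eligible.pop()
            output.append(items[champ])
            eligible.update(beaten[champ])  # items beaten only by the champion compete again
        return output

Assuming the comparison is transitive, this emits items best-first, so cutting it off early yields an exact prefix of the final order rather than an approximately sorted whole.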

Bubble sort, I reckon. The advantage is that you can gradually improve the ordering with additional sweeps of the data.
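
A sketch of that incremental use: run only as many passes as the budget allows. Each pass puts the largest remaining element in its final place and nudges the rest, which is also the caveat raised in the comments below. The names here are introduced for illustration.

    def bubble_sweeps(items, is_less, sweeps):
        """Run a limited number of bubble-sort passes over items, in place.

        Each pass bubbles the largest remaining element to the end, so the
        tail becomes exactly sorted while the rest improves only gradually.
        """
        n = len(items)
        for s in range(sweeps):
            swapped = False
            for i in range(n - 1 - s):
                if is_less(items[i + 1], items[i]):
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            if not swapped:  # already fully sorted; stop early
                break
        return items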

Using bubble sort he would end up with one side of the list perfectly ordered and the other side not. – MartinodF May 27 '09 at 5:58
Bubble sort starts by sorting the top n values, but the rest is almost completely unordered. Plus: you need more comparisons in total. – Georg Schölly May 27 '09 at 5:59
Shell sort is a generalization of bubble sort that runs much faster. In its final step, Shell sort is bubble sort. – Neil G May 27 '09 at 6:04
Actually, the bidirectional bubble sort which I use for my accounting spreadsheet would not have an "unbalanced" state, since it alternates between trickling higher values down and lower values up. Hence it correctly places i_n, i_1, i_(n-1), i_2 and so on. This is one of the few places I'll ever use bubble sort: on data that's already heavily sorted where I'm just adding stuff to the end that needs to be placed. – paxdiablo May 27 '09 at 6:04
