Time complexity of Euclid's Algorithm?

The trick to analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations.


I am having difficulty deciding what the time complexity of Euclid's greatest common divisor algorithm is. Sample pseudo-code is:

    function gcd(a, b)
        while b ≠ 0
            t := b
            b := a mod b
            a := t
        return a

The only decision I can think of is, "It depends." It seems to depend on a and b. But what would the big-O notation for this be? My guess is that the time complexity is O(a % b).

Is that correct, or is there a better way to write that?

Tags: algorithm, time-complexity. – DT, asked Oct 20 '10 at 16:59.

See Knuth TAOCP, Volume 2 -- he gives it extensive coverage. Just FWIW, a couple of tidbits: it's not proportional to a % b. The worst case is when a and b are consecutive Fibonacci numbers.

– Jerry Coffin Oct 20 '10 at 17:10.

The trick to analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations:

    a', b' := a % b, b % (a % b)

Now a and b will both decrease, instead of only one, which makes the analysis easier. You can divide it into cases:

Tiny A: 2a <= b. Then b % (a % b) < a <= b/2, so b is cut by at least half, decreasing a+b by at least 25%.
Tiny B: 2b <= a. Then a % b < b <= a/2, so a is cut by at least half, decreasing a+b by at least 25%.
Small A: 2a > b but a < b. Then b becomes b - a, which is less than b/2, decreasing a+b by at least 25%.
Small B: 2b > a but b < a. Then a becomes a - b, which is less than a/2, decreasing a+b by at least 25%.
Equal: a == b. Then a+b drops to 0, which is obviously decreasing a+b by at least 25%.

Therefore, by case analysis, every double-step decreases a+b by at least 25%.

So the total number of double-steps (S) until we hit 0 must satisfy (4/3)^S <= A+B. Now just work it out: S <= log(A+B) / log(4/3), so the number of steps is O(log(A+B)), i.e. O(log max(A, B)).
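As a quick numerical check of the argument above (a sketch of my own, not part of the original answer; `double_steps` is an illustrative name), one can run Euclid's algorithm two iterations at a time and assert the 25% shrinkage:

```python
# Sketch: run Euclid's algorithm two iterations at a time and verify
# that a + b shrinks by at least 25% per double-step.
def double_steps(a, b):
    steps = 0
    while b != 0:
        total = a + b
        a, b = b, a % b          # first iteration
        if b != 0:
            a, b = b, a % b      # second iteration
        steps += 1
        # integer form of: new (a + b) <= 0.75 * old (a + b)
        assert 4 * (a + b) <= 3 * total
    return steps

print(double_steps(1071, 462))   # 2 double-steps; ends with gcd 21
```

If the 25% claim were wrong for any case, the assertion would fire on some input; it never does.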

There's a great look at this on the Wikipedia article. It even has a nice plot of complexity for value pairs. It is not O(a % b).

It is known (see the article) that it will never take more steps than five times the number of digits in the smaller number. So the maximum number of steps grows with the number of digits (ln b). The cost of each step also grows with the number of digits, so the complexity is bounded by O(ln^2 b), where b is the smaller number.

That's an upper limit, and the actual time is usually less.

– IVlad Oct 20 '10 at 17:11

@IVlad: Number of digits. I've clarified the answer, thank you. – JoshD Oct 20 '10 at 17:16

For the OP's algorithm, using (big integer) divisions (and not subtractions) it is in fact something more like O(n^2 log^2 n).

– Alexandre C. Oct 20 '10 at 17:27

@Alexandre C.: Keep in mind n = ln b. What is the regular complexity of modulus for big int? Is it O(log n log^2 log n)? – JoshD Oct 20 '10 at 17:34

@JoshD: it is something like that; I think I missed a log n term, the final complexity (for the algorithm with divisions) is O(n^2 log^2 n log n) in this case. – Alexandre C. Oct 20 '10 at 18:49

See here. In particular this part: Lamé showed that the number of steps needed to arrive at the greatest common divisor for two numbers less than n is at most about five times the number of decimal digits of n, i.e. O(log n). So O(log min(a, b)) is a good upper bound.

That is true for the number of steps, but it doesn't account for the complexity of each step itself, which scales with the number of digits (ln n). – JoshD Oct 20 '10 at 17:18.
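Lamé's step bound can be checked empirically. This is a sketch of my own (`gcd_steps` is an illustrative name) that counts division steps and compares against five times the digit count of the smaller argument:

```python
# Count Euclid's division steps and compare with Lamé's bound:
# at most 5 * (number of decimal digits of the smaller argument).
def gcd_steps(a, b):
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

for a, b in [(1071, 462), (1000000, 999999), (987, 610)]:
    bound = 5 * len(str(min(a, b)))
    assert gcd_steps(a, b) <= bound

print(gcd_steps(987, 610))  # 14 steps, close to the bound of 15
```

The pair (987, 610) -- consecutive Fibonacci numbers -- comes close to saturating the bound, which matches the worst-case remark above.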

The worst case of Euclid's algorithm is when the remainders are the biggest possible at each step, i.e. for two consecutive terms of the Fibonacci sequence. When n and m are the numbers of digits of a and b, assuming n >= m, the algorithm uses O(m) divisions.

Note that complexities are always given in terms of the sizes of inputs, in this case the number of digits.
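To illustrate the worst case (a sketch of my own; the names are illustrative), consecutive Fibonacci numbers make every quotient equal to 1, so the step count grows linearly with the index, and hence with the number of digits:

```python
# Worst-case sketch: consecutive Fibonacci numbers force every quotient
# to 1, so Euclid's algorithm shrinks the pair as slowly as possible.
def division_count(a, b):
    count = 0
    while b != 0:
        a, b = b, a % b
        count += 1
    return count

fibs = [1, 1]
while len(fibs) < 31:
    fibs.append(fibs[-1] + fibs[-2])

# adjacent Fibonacci numbers: fibs[k] with fibs[k-1] takes k - 1 steps
print(division_count(fibs[15], fibs[14]))  # fibs[15] = 987, fibs[14] = 610
```

Each division only swaps and subtracts once, so the remainder sequence walks back down the whole Fibonacci list -- one step per term.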

