If you look at the code for BigInteger (shipped with the JDK), it appears to me that multiply(..) is O(n^2) (the actual work happens in multiplyToLen(..)). The code for the other methods is a bit more complex, but you can check for yourself. Note: this is for Java 6.
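For illustration, here is a minimal sketch (not the JDK's actual code; the method name and the little-endian word order are my own simplifications) of the schoolbook algorithm that multiplyToLen implements: two nested loops over 32-bit words, hence O(n^2) word operations.

    // Simplified sketch of schoolbook multiplication over 32-bit words,
    // assuming little-endian magnitude arrays (least significant word first).
    static int[] schoolbookMultiply(int[] x, int[] y) {
        final long MASK = 0xFFFFFFFFL;      // treat each int as an unsigned word
        int[] z = new int[x.length + y.length];
        for (int i = 0; i < x.length; i++) {
            long carry = 0;
            for (int j = 0; j < y.length; j++) {
                long product = (x[i] & MASK) * (y[j] & MASK)
                             + (z[i + j] & MASK) + carry;
                z[i + j] = (int) product;   // keep the low 32 bits
                carry = product >>> 32;     // carry the high 32 bits
            }
            z[i + y.length] = (int) carry;
        }
        return z;
    }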
I assume it won't differ in Java 7. – Joey Jan 28 '10 at 11:52
I believe there have been changes in Java 7. I can't remember the details I found while searching, but they were scarce. – PythonPower Jan 28 '10 at 11:54
@Rössel: There exist other algorithms for multiplication, but Java 6 doesn't use them. When multiplying large numbers you'd certainly notice the difference between the schoolbook algorithm and Karatsuba multiplication; the others are less of a jump unless you're filling primary memory with the numbers. – Charles May 28 '10 at 20:24
Measure it. Do operations with linearly increasing operands and plot the times on a diagram. Don't forget to warm up the JVM (several runs) to get valid benchmark results.
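For example, a minimal timing sketch could look like this (the class name, bit sizes, and repetition counts are arbitrary choices of mine, not tuned values):

    import java.math.BigInteger;
    import java.util.Random;

    // Warm up first, then time multiply() for doubling operand sizes
    // and print one data point per size for plotting.
    public class MultiplyTiming {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            for (int bits = 1 << 10; bits <= 1 << 20; bits <<= 1) {
                BigInteger a = new BigInteger(bits, rnd);
                BigInteger b = new BigInteger(bits, rnd);
                for (int i = 0; i < 5; i++) a.multiply(b);   // warm-up runs
                int reps = 10;
                long start = System.nanoTime();
                for (int i = 0; i < reps; i++) a.multiply(b);
                System.out.printf("%8d bits: %12d ns%n",
                                  bits, (System.nanoTime() - start) / reps);
            }
        }
    }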
Whether the operations are linear O(n), quadratic O(n^2), polynomial, or exponential should then be obvious. EDIT: While you can give algorithms theoretical bounds, they may not be that useful in practice. First of all, the complexity does not give you the constant factor.
Some asymptotically faster algorithms are simply not useful because their overhead eats so much time and so many resources that they are inadequate for the problem at hand (e.g. Coppersmith-Winograd matrix multiplication). Then your computation may have quirks you can only detect by experiment. There are preparatory algorithms that do nothing to solve the problem itself but speed up the real solver (matrix preconditioning).
There are suboptimal implementations. With longer inputs, your speed may drop dramatically (cache misses, memory traffic, etc.). So for practical purposes, I advise experimenting.
The best approach is to double the input length each time and compare the times. And yes, you can tell whether an algorithm has n^1.5 or n^1.8 complexity this way. Quadruple the input length and an n^1.5 algorithm needs only half the time of an n^2 one (4^1.5 = 8 versus 4^2 = 16). An n^1.8 algorithm needs about a third of the time of an n^2 one once you multiply the length by 256 (256^0.2 ≈ 3).
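Here is a sketch of that doubling experiment, under the assumption that a single algorithm dominates at the chosen sizes (the base size and repetition counts are arbitrary); it estimates the exponent k in O(n^k) as log2(t_1/t_0), the same formula as in the comment below.

    import java.math.BigInteger;
    import java.util.Random;

    // Time n-bit and 2n-bit multiplications and estimate the
    // exponent k in O(n^k) as log2(t1 / t0).
    public class ExponentEstimate {
        static long timeMultiply(int bits, Random rnd) {
            BigInteger a = new BigInteger(bits, rnd);
            BigInteger b = new BigInteger(bits, rnd);
            for (int i = 0; i < 5; i++) a.multiply(b);   // JVM warm-up
            int reps = 10;
            long start = System.nanoTime();
            for (int i = 0; i < reps; i++) a.multiply(b);
            return (System.nanoTime() - start) / reps;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int bits = 1 << 18;                          // arbitrary base size
            long t0 = timeMultiply(bits, rnd);
            long t1 = timeMultiply(2 * bits, rnd);
            double k = Math.log((double) t1 / t0) / Math.log(2);
            System.out.printf("estimated exponent k ~ %.2f%n", k);
        }
    }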
That might work, though I would need to test large values of n: if I measured the time to multiply two n-bit BigIntegers (t_0) and then two 2n-bit BigIntegers (t_1), I would expect the complexity to be O(n^(log2(t_1/t_0))). In general I am a little skeptical of empirical methods, though (possibly unfairly). – PythonPower Jan 28 '10 at 12:02
This is a difficult approach to take, though. A priori, there's no reason to think that a single algorithm is used rather than a combination of algorithms, so the scaling from 10 digits to 1,000 digits might differ from the scaling from 1,000 digits to 3,000 digits. – Charles May 28 '10 at 20:25
There is a new, "better" BigInteger class that is not being used by the Sun JDK, out of conservatism and for lack of useful regression tests (huge data sets). The guy who did the better algorithms might have discussed the old BigInteger in the comments. Here you go: futureboy.us/temp/BigInteger.java