"YOU AND THE ART OF ONLINE DATING" is the only product on the market that will take you step-by-step through the process of online dating, provide you with the resources to help ensure success. Get it now!
Well, you can have another array of the same size where you store the partial sums. Then, whenever you are given the bounds, you can just subtract the partial sums and you get the sum of elements in that interval. For example:

    Elements:    1 2 3  4  5  6
    Partial_Sum: 1 3 6 10 15 21

Let's say that the array starts at index = 0, and you want the sum of the elements in the interval 1, 3 inclusive:

    // subtract 1 from the index of the second sum, because we
    // want the starting element of the interval to be included.
    Partial_Sum[3] - Partial_Sum[1-1] = 10 - 1 = 9
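As a concrete illustration of the partial-sum lookup above, here is a minimal C++ sketch (the variable names are illustrative):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> elements = {1, 2, 3, 4, 5, 6};
        std::vector<int> partial_sum(elements.size());
        int running = 0;
        for (std::size_t i = 0; i < elements.size(); ++i) {
            running += elements[i];
            partial_sum[i] = running;                 // 1 3 6 10 15 21
        }
        int a = 1, b = 3;                             // interval [1, 3] inclusive
        int sum = partial_sum[b] - (a > 0 ? partial_sum[a - 1] : 0);
        std::printf("%d\n", sum);                     // prints 9
        return 0;
    }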
Good answer. If the items are sorted (i.e. the range is specified as values rather than indexes), the algorithm is O(log n).
Actually 2 * log n, because it'll take two lookups with a binary search. – Jim Mischel Feb 24 at 17:03
+1 for not overcomplicating it like I did. :-) – Aasmund Eldhuset Feb 24 at 17:03
@Jim Mischel Surely computing Partial_Sum is O(N) (you have to visit every element at least once - there's just no way around that) and computing the sum between a and b is O(1) (constant time), provided indexing into your array is also constant time. If you expect to compute sums for lots of different values of a and b on the same array then this will be significantly faster, otherwise not so much. Also, with the implementation above, watch out for a = 0 since it will try to index Partial_Sum[-1]. – Adam Bowen Feb 24 at 18:04
@Adam Bowen: As I said in my comment, if the range is specified as values rather than indexes into the array, then you'll have to do a binary search to find the indexes of the values. And, yes, there is pre-processing time--that was assumed. There's no point in computing the partial sums if you're only going to do this once for a particular array. – Jim Mischel Feb 24 at 18:18
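For the "range specified as values" case discussed in these comments, a small sketch (my own illustration): two binary searches find the index range covered by the value interval (hence the 2 * log n lookups), and the partial sums finish the job in constant time.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> sorted  = {1, 2, 3, 4, 5, 6};     // must be sorted
        std::vector<int> partial = {1, 3, 6, 10, 15, 21};  // partial sums as above
        int lo = 2, hi = 5;                                // sum of the values in [2, 5]
        std::size_t first = std::lower_bound(sorted.begin(), sorted.end(), lo) - sorted.begin();
        std::size_t last  = std::upper_bound(sorted.begin(), sorted.end(), hi) - sorted.begin();
        int sum = 0;
        if (first < last)
            sum = partial[last - 1] - (first > 0 ? partial[first - 1] : 0);
        std::printf("%d\n", sum);                          // 2 + 3 + 4 + 5 = 14
        return 0;
    }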
I seem to recall that prefix sums can be used to answer such queries in O(lg n) time. EDIT: I was a little too quick there - it can be done even faster. If you spend O(n) time (and O(n) extra memory) precomputing the prefix sum array (on a single-core computer), the answer to each query can be found in O(1) time by subtracting the appropriate elements of that array.
If you happen to have n processors available, the precomputation can be done in O(lg n) time.
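The O(lg n) bound assumes a PRAM-style machine with n processors; as a rough, practical stand-in, here is a sketch of a chunked two-pass prefix-sum precomputation with p threads, which runs in O(n/p + p) (all names here are illustrative):

    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Pass 1: each thread scans its own chunk. A serial O(p) step turns the
    // chunk totals into offsets. Pass 2: each thread adds its chunk's offset.
    std::vector<long long> parallelPrefixSums(const std::vector<int>& x, unsigned p) {
        std::size_t n = x.size();
        std::vector<long long> prefix(n + 1, 0);       // prefix[i] = sum of x[0..i-1]
        std::vector<long long> offsets(p + 1, 0);
        std::size_t chunk = (n + p - 1) / p;

        std::vector<std::thread> workers;
        for (unsigned t = 0; t < p; ++t) {
            std::size_t b = std::min(n, t * chunk), e = std::min(n, b + chunk);
            workers.emplace_back([&, t, b, e] {
                long long s = 0;
                for (std::size_t i = b; i < e; ++i) { s += x[i]; prefix[i + 1] = s; }
                offsets[t + 1] = s;
            });
        }
        for (std::size_t i = 0; i < workers.size(); ++i) workers[i].join();

        for (unsigned t = 1; t <= p; ++t) offsets[t] += offsets[t - 1];

        workers.clear();
        for (unsigned t = 1; t < p; ++t) {
            std::size_t b = std::min(n, t * chunk), e = std::min(n, b + chunk);
            workers.emplace_back([&, t, b, e] {
                for (std::size_t i = b; i < e; ++i) prefix[i + 1] += offsets[t];
            });
        }
        for (std::size_t i = 0; i < workers.size(); ++i) workers[i].join();
        return prefix;
    }

    int main() {
        std::vector<int> x;
        for (int i = 1; i <= 1000; ++i) x.push_back(i);
        std::vector<long long> prefix = parallelPrefixSums(x, 4);
        std::printf("%lld\n", prefix[1000] - prefix[500]);   // 501 + ... + 1000 = 375250
        return 0;
    }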
+1: No, I don't think you are overcomplicating! This works with dynamic arrays, and instead of SUM it works with any associative operation (we can probably drop the requirement of commutativity). For instance MIN, MAX, GreatestCommonDivisor, LeastCommonMultiple, etc. – Aryabhatta Feb 24 at 17:44
OK, maybe I found a solution with log(n) on both changing a value and computing the sum, and with a linear space overhead. I'll try to explain: we build a binary tree, where the leaves are the array values, in the order they are in the array (not sorted, not a sorted tree). Then we create the tree bottom-up, merging 2 leaves at a time and putting their sum in the parent.
For example, if the array has length 4 and values 1,5,3,2, we'll have a tree with 3 levels: the root will be the total sum (11) and the other internal nodes will be 1+5 -> 6 and 3+2 -> 5. Now, to change a value we have to update this tree (log n), and to compute the sum I worked out this algorithm (log n):

    acc = 0   // accumulator

Starting from the lower bound, we go up the tree. If we go up left (the current node is the right child), then acc += current_node - parent_node. If we go up right (the current node is the left child), we don't do anything.
We then do the same from the upper bound, of course in this case it's the opposite (we do the sum if we go up right). We do this alternating, once on the lower bound, once on the upper bound. When the 2 nodes we reach are actually the same node, we sum the value of that node to the accumulator and return the accumulator. I know I didn't explain it well... I'm having some difficulty in explaining... Anyone understood?
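For reference, a standard bottom-up segment tree gets the same O(log n) update / O(log n) range-sum bounds; a minimal C++ sketch of that structure (the same idea, although not literally the leaf-to-root difference traversal described above):

    #include <cstdio>
    #include <vector>

    struct SegTree {
        int n;
        std::vector<long long> t;   // t[n..2n-1] are the leaves, t[1..n-1] the pairwise sums

        explicit SegTree(const std::vector<long long>& a) : n(a.size()), t(2 * a.size(), 0) {
            for (int i = 0; i < n; ++i) t[n + i] = a[i];
            for (int i = n - 1; i > 0; --i) t[i] = t[2 * i] + t[2 * i + 1];
        }

        // Change the value at position pos, updating its O(log n) ancestors.
        void update(int pos, long long val) {
            for (t[pos += n] = val; pos > 1; pos >>= 1)
                t[pos >> 1] = t[pos] + t[pos ^ 1];
        }

        // Sum of the elements in [l, r] inclusive.
        long long query(int l, int r) const {
            long long res = 0;
            for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
                if (l & 1) res += t[l++];
                if (r & 1) res += t[--r];
            }
            return res;
        }
    };

    int main() {
        SegTree st(std::vector<long long>{1, 5, 3, 2});   // the example array above
        std::printf("%lld\n", st.query(0, 3));            // 11, the root / total sum
        st.update(2, 10);                                  // change the 3 to 10
        std::printf("%lld\n", st.query(1, 2));             // 5 + 10 = 15
        return 0;
    }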
+1; I was just sketching out this algorithm myself, but you beat me to it... – Aasmund Eldhuset Feb 24 at 18:25
(In other words: I am very certain that this algorithm works, and that your time and space bounds are correct.) – Aasmund Eldhuset Feb 24 at 18:34
@marco The total complexity for this algorithm is O(N + log N): O(N) for the sum + O(log N) for the update. This is still higher than O(N). Please see my answer below (or above). (: – dave Feb 24 at 18:37
@RPR: This is a data structure problem. You are to create an 'array' structure which supports querying for range sums. So calling it Omega(N) is not really correct. – Aryabhatta Feb 24 at 18:57
@Moron You are assuming the content of the array is static (in which case it would be appropriate to precompute some or all sum ranges), as opposed to what the author of the question has stated as the problem: "the array is dynamic, values can change, and I don't want to have linear complexity to change a value!", therefore the optimal solution to this problem is O(N). – dave Feb 24 at 19:05
I must be missing something about the question. Given an array of partial sums, you should be able to get constant complexity -- the sum of elements from a to b is partial_sums[b] - partial_sums[a] (or, if you can't assume a <= b, the larger of the two partial sums minus the smaller). Perhaps the range is specified as values rather than as indexes? If so, then assuming your array is sorted, you can get O(log N) complexity by using a binary search for the locations of a and b, then subtracting as above. If the array isn't (and can't be) sorted, you can accomplish the same by creating an array of references to the original objects, sorting the references, and generating partial sums for those references.
That adds work to the preprocessing, but keeps O(log N) for the queries. Edit: Making the array(s) dynamic should have no effect, at least in terms of computational complexity. If you only ever insert/delete at the end of the main array, you can insert/delete in constant time in the partial sums array as well.
For an insertion at the end, you do something like:

    N = N + 1
    main_array[N] = new_value
    partial_sums[N] = partial_sums[N-1] + new_value

To delete from the end, you just use N = N - 1, and ignore the values previously at the ends of both arrays. If you need to support insertion/deletion in the middle of the main array, that takes linear time. Updating the partial sums array can be done in linear time as well. For example, to insert new_value at index i, you'd do something like:

    N = N + 1
    for location = N downto i + 1
        main_array[location] = main_array[location-1]
        partial_sums[location] = partial_sums[location-1] + new_value
    main_array[i] = new_value
    partial_sums[i] = partial_sums[i-1] + new_value

Deleting is similar, except that you work your way up from the deletion point to the end, and subtract the value being deleted. I did say "should" for a reason though -- there is a possible caveat. If your array is extremely dynamic and the contents are floating point, you can/will run into a problem: repeatedly adding and subtracting values as you insert/delete elements may (and eventually will) lead to rounding errors.
Under these circumstances, you have a couple of choices: one is to abandon the idea altogether. Another uses even more storage -- as you add/delete items, keep a running sum of the absolute values of the elements that have been added/subtracted. When/if this exceeds a chosen percentage of the partial sum for that point, you re-compute your partial sums (and zero the running sum).
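A sketch of that "error budget" idea for a floating-point array that only grows and shrinks at the end (the 1% threshold and all names here are illustrative, not prescriptive):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct RunningSums {
        std::vector<double> values;
        std::vector<double> partial;   // partial[i] = values[0] + ... + values[i]
        double churn;                  // running sum of |values added or removed|

        RunningSums() : churn(0.0) {}

        // Recompute the partial sums from scratch and reset the error budget.
        void rebuild() {
            double s = 0.0;
            partial.resize(values.size());
            for (std::size_t i = 0; i < values.size(); ++i) {
                s += values[i];
                partial[i] = s;
            }
            churn = 0.0;
        }

        void maybeRebuild() {
            if (!partial.empty() && churn > 0.01 * std::fabs(partial.back()))
                rebuild();
        }

        void push_back(double v) {             // insert at the end
            values.push_back(v);
            partial.push_back((partial.empty() ? 0.0 : partial.back()) + v);
            churn += std::fabs(v);
            maybeRebuild();
        }

        void pop_back() {                      // delete from the end
            churn += std::fabs(values.back());
            values.pop_back();
            partial.pop_back();
            maybeRebuild();
        }

        double sumRange(std::size_t a, std::size_t b) const {   // inclusive [a, b]
            return partial[b] - (a > 0 ? partial[a - 1] : 0.0);
        }
    };

    int main() {
        RunningSums rs;
        for (int i = 1; i <= 6; ++i) rs.push_back(i);
        std::printf("%g\n", rs.sumRange(1, 3));   // 2 + 3 + 4 = 9
        return 0;
    }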
+1 for not overcomplicating it like I did. :-) – Aasmund Eldhuset Feb 24 at 17:02
Perhaps he wants a dynamic array, where he can update values at will. In which case a binary tree gives O(log n) insert/access/query times. +1 for addressing the other interpretation, though. – Aryabhatta Feb 24 at 17:37
@Moron: Yup -- good point. – Jerry Coffin Feb 24 at 17:41
Yes that's right, as I said in the comments above, the array is dynamic. With your solution I get constant complexity on sum() but linear on change_value()... – marco signati Feb 24 at 18:03
@marco signati: see edited answer. A dynamic array doesn't really change anything. – Jerry Coffin Feb 24 at 19:21
According to the problem statement, you are given an array of numbers and a pair of indices representing the bounds of an interval whose contents are to be summed. Since there is no search involved in this problem, representing the data as a binary tree structure offers no advantage in terms of time or space complexity. Since you are not allowed to execute your solution in a multi-processor environment, you are "stuck" with O(N).
If your solution were allowed to execute in a multi-processor environment, the optimal complexity would be O(N/p + p + 1), where p is the number of processors available. This is due to the fact that, in this case, you would have been able to divide the interval into p sub-intervals (+1), sum the intervals in parallel (N/p), and then sum the result of each individual sub-interval (+p), to complete the calculation.
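A sketch of that strategy with p worker threads (the names and thread count are illustrative): each thread sums one sub-interval, and the p per-thread results are then added together.

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Split [lo, hi) into p chunks, sum each chunk in its own thread (N/p work
    // per thread), then add up the p partial results.
    long long parallelRangeSum(const std::vector<int>& a, std::size_t lo, std::size_t hi, unsigned p) {
        std::vector<long long> part(p, 0);
        std::vector<std::thread> workers;
        std::size_t n = hi - lo, chunk = (n + p - 1) / p;
        for (unsigned t = 0; t < p; ++t) {
            std::size_t b = lo + t * chunk, e = std::min(hi, b + chunk);
            if (b >= e) break;
            workers.emplace_back([&a, &part, t, b, e] {
                part[t] = std::accumulate(a.begin() + b, a.begin() + e, 0LL);
            });
        }
        for (std::size_t i = 0; i < workers.size(); ++i) workers[i].join();
        return std::accumulate(part.begin(), part.end(), 0LL);
    }

    int main() {
        std::vector<int> a;
        for (int i = 1; i <= 1000; ++i) a.push_back(i);
        std::printf("%lld\n", parallelRangeSum(a, 0, a.size(), 4));   // 500500
        return 0;
    }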
There are two cases: static data or dynamic (varying) data.

1. Static data

For static data this is a well known problem. First compute the "sum table" (an array of n+1 elements):

    st[0] = 0;
    for (int i=0, n=x.size(); i<n; i++)
        st[i+1] = st[i] + x[i];

and then the sum of the elements from index a (inclusive) to b (exclusive) is just st[b] - st[a].

2. Dynamic data

For dynamic data you can keep a multi-level structure, where each level holds the pairwise sums of the level below:

    void addDelta(std::vector< std::vector<int> >& data, int index, int delta) {
        for (int level=0, n=data.size(); level<n; level++) {
            data[level][index] += delta;
            index >>= 1;
        }
    }

    int sumToIndex(std::vector< std::vector<int> >& data, int index) {
        int result = 0;
        for (int level=0, n=data.size(); level<n; level++) {
            if (index & 1)
                result += data[level][index - 1];
            index >>= 1;
        }
        return result;
    }

    int sumRange(std::vector< std::vector<int> >& data, int a, int b) {
        return sumToIndex(data, b) - sumToIndex(data, a);
    }

Basically at each "level" a cell holds the sum of two cells of the next finer level. When you add a delta to the lowest (highest-resolution) level you also have to add it to the higher levels (this is what addDelta does). To compute the sum of all values from 0 to x you can use the higher levels to save computation. Finally, to get the sum from a to b you simply compute the difference between those two sums starting from 0.
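The answer doesn't show how the level table gets built; one possible construction (my own helper, assuming the addDelta/sumToIndex/sumRange functions above are in scope), together with a usage example:

    #include <cstdio>
    #include <vector>

    // Level 0 is the raw array; each coarser level sums pairs of cells below it.
    std::vector< std::vector<int> > makeLevels(const std::vector<int>& x) {
        std::vector< std::vector<int> > data;
        data.push_back(x);
        while (data.back().size() > 1) {
            const std::vector<int>& prev = data.back();
            std::vector<int> next((prev.size() + 1) / 2, 0);
            for (std::size_t i = 0; i < prev.size(); ++i)
                next[i / 2] += prev[i];
            data.push_back(next);
        }
        return data;
    }

    int main() {
        std::vector<int> x = {1, 2, 3, 4, 5, 6};
        std::vector< std::vector<int> > data = makeLevels(x);
        std::printf("%d\n", sumRange(data, 1, 4));   // 2 + 3 + 4 = 9
        addDelta(data, 2, 7);                        // change x[2] from 3 to 10
        std::printf("%d\n", sumRange(data, 1, 4));   // 2 + 10 + 4 = 16
        return 0;
    }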
– Jerry Coffin Feb 24 at 17:31
@Jerry Coffin: It's O(n) where n is the size of the input (that for a 2d matrix is rows*columns). I added an explanation of how to build the 2d sum table with a single pass and a link to an article explaining how to use massively parallel hw (CUDA) to speed up this computation. – 6502 Feb 24 at 18:32
For a simpler way of stating this, please see my answer below (or above). (: – dave Feb 24 at 19:28
Actually the optimum complexity of the parallel computation for case 1 (the case stated by the question's author) is O(n/m + m + 1). – dave Feb 24 at 19:30
Yes, good answer, it's the same idea I used in my algorithm with the binary tree! Maybe this version is easier to implement/explain... – marco signati Feb 24 at 22:18