Accumulator filling for Hough transform?

No, it's not. Replace as many of the index-based accesses as you can with simple pointer arithmetic to iterate the arrays in question, and abstract invariant expressions out into local variables. However, the first question is: does your profiler show that this code is a bottleneck in the context of your entire app? If not, why bother micro-optimizing it?

EDIT: loop micro-optimization: prefer the second loop, as no array indexing is required (a multiply versus an add per element):

    int ints[100];
    int i;
    int *pi;

    /* indexed: each access computes the address as ints + i * sizeof(int) */
    for (i = 0; i < 100; ++i)
        ints[i] = 0;

    /* pointer: each step is a single increment */
    for (pi = ints; pi < ints + 100; ++pi)
        *pi = 0;
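One way the invariant-hoisting advice might apply to a Hough accumulator fill is to precompute the per-theta trigonometry once, outside the inner loop, instead of calling sin/cos per vote. This is a generic sketch, not code from the question; the function name, table layout, and binning are all assumptions:

    #include <cstddef>
    #include <vector>

    // Vote for one edge point (x, y) across all theta bins.
    // cosTab/sinTab hold precomputed cos(theta)/sin(theta) per bin, so
    // the only per-iteration work is two multiplies, adds, and a store.
    void voteForPoint(int x, int y,
                      const std::vector<double>& cosTab,
                      const std::vector<double>& sinTab,
                      std::vector<unsigned>& acc,   // numRho * numTheta cells
                      int numRho) {
        const int numTheta = int(cosTab.size());
        const double rhoOffset = numRho / 2.0;      // invariant, hoisted once
        for (int t = 0; t < numTheta; ++t) {
            int r = int(x * cosTab[t] + y * sinTab[t] + rhoOffset);
            if (r >= 0 && r < numRho)
                ++acc[std::size_t(r) * numTheta + t];  // flat, row-major by rho
        }
    }

The tables would be filled once per image (or once per program run), so the per-point cost stays at a couple of arithmetic operations per theta bin.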


This part definitely is the bottleneck in my processing; the rest of the program is very simple. – Denis Nov 19 '10 at 19:46
@Denis - if that's all you do then yes, but see the edit for an example. – Steve Townsend Nov 19 '10 at 19:48
I'm sorry, I'm kind of new to this forum; which edit are you referring to? – Denis Nov 19 '10 at 19:52
@Denis - see now. – Steve Townsend Nov 19 '10 at 19:53

There is a large and repetitive Hough transform in a piece of code I'm vaguely attached to. The maintainer of that part of the code has been experimenting with sparse arrays (actually a C++ std::map keyed on the cell index, if I understood his presentation right) for the accumulator, with some success. I presume the speed-up is related to cache locality issues, and it certainly depends on the data being sparse.
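A minimal sketch of that sparse-accumulator idea, assuming a 2D (rho, theta) space flattened to a single cell index; the encoding and all names here are illustrative, not the maintainer's actual code:

    #include <cstddef>
    #include <map>

    // Only cells that actually receive votes are stored, which can beat
    // a dense array on memory traffic when the vote pattern is sparse.
    using CellIndex = std::size_t;
    using SparseAccumulator = std::map<CellIndex, unsigned>;

    const std::size_t NUM_THETA = 180;  // assumed discretization

    // Flatten (rho bin, theta bin) into one map key.
    inline CellIndex cellIndex(std::size_t rhoBin, std::size_t thetaBin) {
        return rhoBin * NUM_THETA + thetaBin;
    }

    inline void vote(SparseAccumulator& acc,
                     std::size_t rhoBin, std::size_t thetaBin) {
        ++acc[cellIndex(rhoBin, thetaBin)];  // operator[] value-initializes to 0
    }

A std::unordered_map could be tried as well; std::map keeps cells ordered by index, which makes the later scan for peaks proceed in index order.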

My matrix is quite sparse, so that'd fit perfectly. – Denis Nov 19 '10 at 19:48
@Denis: No. This work is still going on, and as it is analysis code for a particle physics experiment there is unlikely to be a paper on the code itself, though I suspect it will go into the student's dissertation. – dmckee Nov 19 '10 at 21:14

Depending on your application, there are numerous ways to optimise the Hough transform, and fiddling with low-level code is possibly the last of them. I would start with the Randomised HT or the Multiresolution HT, followed by a hybrid approach that merges the two. I believe it is better to optimise the algorithm first.

The last step would be to use hardware optimisation, such as CAM (content-addressable memory).
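To make the randomised idea concrete: instead of every edge point voting in every theta bin, repeatedly sample pairs of edge points and vote only for the one line they define, so most accumulator cells are never touched. This is a generic line-detection sketch, not the poster's code; the binning, iteration count, and RNG choice are assumptions:

    #include <cmath>
    #include <cstdlib>
    #include <map>
    #include <utility>
    #include <vector>

    struct Point { int x, y; };

    // Randomised Hough transform for lines: each iteration votes for the
    // single (rho, theta) cell of the line through two random edge points.
    std::map<std::pair<int, int>, unsigned> randomizedHT(
            const std::vector<Point>& edges, int iterations,
            double rhoStep = 1.0, double thetaStep = 3.14159265358979 / 180.0) {
        std::map<std::pair<int, int>, unsigned> votes;
        if (edges.size() < 2) return votes;
        for (int i = 0; i < iterations; ++i) {
            const Point& a = edges[std::rand() % edges.size()];
            const Point& b = edges[std::rand() % edges.size()];
            if (a.x == b.x && a.y == b.y) continue;  // need two distinct points
            // Normal form: theta is the angle of the line's normal,
            // rho its signed distance from the origin.
            double theta = std::atan2(double(b.x - a.x), double(a.y - b.y));
            double rho = a.x * std::cos(theta) + a.y * std::sin(theta);
            ++votes[{int(rho / rhoStep), int(theta / thetaStep)}];
        }
        return votes;
    }

Cells whose vote count exceeds a threshold correspond to detected lines; since only sampled lines receive votes, the accumulator stays sparse, which pairs naturally with the map-based storage suggested above.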
