How to optimize this image processing: replace all pixels in an image with the closest available RGB?



I'm trying to replace every pixel of an input image with the closest available RGB from a palette. I have an array containing the palette colors and an input image. Here is my code. It gives me the output image I expect, BUT it takes a very LONG time (about a minute) to process one image.

Can anybody help me improve the code? Or if you have any other suggestions, please help.

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(CGImageGetWidth(sourceImage),
                                                      CGImageGetHeight(sourceImage)), NO, 0.0f);
    // Context size is kept the same as the original input image size;
    // otherwise the output will be only a partial image.
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context so the image is not upside down
    CGContextTranslateCTM(context, 0, self.imageViewArea.image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // init vars
    float d = 0;          // squared error
    int idx = 0;          // index of palette color
    int min = 1000000;    // min difference
    UIColor *oneRGB;      // color at a pixel
    UIColor *paletteRGB;  // palette color
    // visit each output pixel and determine closest color from palette
    // NOTE: the loop below is reconstructed from the surrounding discussion;
    // the original snippet was truncated at this point in the page capture.
    for (int y = 0; y < CGImageGetHeight(sourceImage); y++) {
        for (int x = 0; x < CGImageGetWidth(sourceImage); x++) {
            min = 1000000;
            oneRGB = ...;  // read the source pixel at (x, y) -- original code not shown
            for (int i = 0; i < paletteCount; i++) {
                paletteRGB = palette[i];
                d = [self ColorDiffWithPalette:paletteRGB forRGB:oneRGB];
                if (d < min) { min = d; idx = i; }
            }
            // draw a 1x1 rect of the winning palette color
            CGContextSetFillColorWithColor(context, [palette[idx] CGColor]);
            CGContextFillRect(context, CGRectMake(x, y, 1, 1));
        }
    }

Profile your code. – Jacob Feb 13 at 15:23

What is the code for ColorDiff? Also, the convention is to name methods with initial lower-case letters, e.g. [self colorDiffWithPalette:paletteRGB forRGB:oneRGB] – Zaph Feb 13 at 15:27

@Jacob: I think the loops are taking most of the time. – user1139699 Feb 13 at 15:47

@CocoaFu: ColorDiff finds the difference between two input UIColors. The method was named myColorDiff, but when I edited the name for the question I forgot to change it back to lower case. Thanks. Do you have any other suggestions for reducing the time the code takes? – user1139699 Feb 13 at 15:49

The comment says myColorDiff is returning the sum of the squares, which would be correct, but I don't see any squaring in the actual code. – Mark Ransom Feb 13 at 18:46

This is a similar question (with no definitive answer), but the answer there has the code for directly accessing pixels from an image: Quantize Image, Save List of Remaining Colors. You should do that rather than use CG functions for each pixel get and set. Drawing 1 pixel of an image onto another image is a lot slower than changing 3 bytes in an array.
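The difference is easy to see in plain C. Below is a minimal sketch of the buffer-based approach, assuming the image has already been rendered into an RGBA8888 byte buffer (e.g. the backing store of a CGBitmapContext) and the palette converted to plain byte triples; the RGB type, remapToPalette, and its parameters are illustrative names, not part of any existing API.

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative palette entry -- in the real app this would be built once
   from the UIColor array. */
typedef struct { uint8_t r, g, b; } RGB;

/* Replace every pixel of an RGBA8888 buffer with the nearest palette color. */
static void remapToPalette(uint8_t *pixels, size_t width, size_t height,
                           const RGB *palette, size_t nPalette) {
    for (size_t p = 0; p < width * height; p++) {
        uint8_t *px = pixels + 4 * p;             /* R, G, B, A */
        size_t best = 0;
        long bestD = LONG_MAX;
        for (size_t i = 0; i < nPalette; i++) {
            long dr = (long)px[0] - palette[i].r;
            long dg = (long)px[1] - palette[i].g;
            long db = (long)px[2] - palette[i].b;
            long d = dr * dr + dg * dg + db * db; /* squared error, no sqrt */
            if (d < bestD) { bestD = d; best = i; }
        }
        px[0] = palette[best].r;
        px[1] = palette[best].g;
        px[2] = palette[best].b;                  /* alpha left untouched */
    }
}
```

Each pixel costs three byte writes instead of a fill call; the finished buffer is then drawn back into an image in a single call.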

Also, what's in ColorDiff? You don't need a perfect diff as long as the closest palette entry always has the smallest diff. There is also room for pre-processing the palette so that for each entry you store the smallest diff to the nearest other palette entry. Then, while looping through pixels, you can quickly check whether the next pixel is within half that distance of the color just found (because photos tend to have runs of similar colors near each other).

If that's not a match, then while looping through the palette, once you are within half this distance of any entry there is no need to check further. Basically, this puts a zone around each palette entry where you know for sure that entry is the closest.
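Assuming a squared-error diff, the safe-zone idea can be sketched like this in plain C; precomputeNearest, nearestIndex, and the palette layout are hypothetical names for illustration. Note the quarter factor: half a distance becomes a quarter of its square.

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* Squared-error diff between two colors. */
static long sqDist(RGB a, RGB b) {
    long dr = (long)a.r - b.r, dg = (long)a.g - b.g, db = (long)a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

/* One-time pre-processing: for each palette entry, the squared distance
   to its nearest *other* entry. */
static void precomputeNearest(const RGB *pal, size_t n, long *nearestSq) {
    for (size_t i = 0; i < n; i++) {
        long best = LONG_MAX;
        for (size_t j = 0; j < n; j++) {
            if (j == i) continue;
            long d = sqDist(pal[i], pal[j]);
            if (d < best) best = d;
        }
        nearestSq[i] = best;
    }
}

/* Nearest palette entry with the safe-zone early exit: if a color lies
   within half the distance from entry i to i's nearest neighbour, no other
   entry can be closer.  With squared distances "half" becomes "a quarter".
   `lastIdx` is the entry matched for the previous pixel -- photos tend to
   have runs of similar colors, so it is checked first. */
static size_t nearestIndex(RGB c, const RGB *pal, size_t n,
                           const long *nearestSq, size_t lastIdx) {
    long d = sqDist(c, pal[lastIdx]);
    if (d <= nearestSq[lastIdx] / 4) return lastIdx;  /* run of similar colors */
    size_t best = lastIdx;
    long bestD = d;
    for (size_t i = 0; i < n; i++) {
        long di = sqDist(c, pal[i]);
        if (di < bestD) { bestD = di; best = i; }
        if (di <= nearestSq[i] / 4) return i;  /* inside i's safe zone: done */
    }
    return best;
}
```

The correctness argument is the triangle inequality: if dist(c, i) ≤ nearest(i)/2, every other entry j satisfies dist(c, j) ≥ dist(i, j) − dist(c, i) ≥ nearest(i)/2 ≥ dist(c, i).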

Nice optimization for early exit, especially when used against the previously found palette entry. For the linear search it's only going to cut the search time in half on average, probably not enough to be a viable solution by itself. Note that when comparing the square of the distance you want to use 1/4 of the value, not half.

– Mark Ransom Feb 13 at 18:44

I think you get more than half the benefit, because runs of nearly the same color happen fairly often in real images. I think the real solution is direct memory access to the pixels -- the other idea probably won't come close to that in effectiveness. – Lou Franco Feb 13 at 19:44

I just started testing whether it works with an array of RGBs or not. My goal is to read through the pixels of the input image and find the closest tile image to replace each one, so I can't really just change the RGB values. – user1139699 Feb 13 at 19:53

I don't think I was very clear -- I was splitting your solution into two parts.

If you manage to skip the linear search, or in your words "looping through the palette", you're going to get fantastic savings. That's the case when you have a run of similar colors. However once you start searching the entire palette it's totally random where the match will be so the savings will be half or even less.

– Mark Ransom Feb 13 at 19:58

@Mark ah, I see that now. @user1139699 the code you have ends up coloring just one pixel at a time, so it's equivalent to just changing the RGB of a pixel -- what do you mean by tiles? – Lou Franco Feb 15 at 14:28

The usual answer is to use a k-d tree, an octree, or some other spatial structure to reduce the number of computations and comparisons that have to be done at each pixel. I've also had success with partitioning the color space into a regular grid and keeping a list of possible closest matches for each part of the grid. For example, you can divide the 0-255 values of R, G, and B by 16 and end up with a grid of (16,16,16), or 4096 elements altogether.

Best case is that there's only one member of the list for a particular grid element and no need to traverse the list at all.
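As a rough sketch of the grid idea in plain C (hypothetical names throughout): a palette entry is kept as a candidate for a cell when its minimum distance to the cell does not exceed the smallest maximum distance of any entry; that guarantees the true nearest match for every color in the cell is always on the list.

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct { uint8_t r, g, b; } RGB;

/* Squared distance from value v to the nearest point of [lo, hi]. */
static long axisMinSq(int v, int lo, int hi) {
    int d = v < lo ? lo - v : (v > hi ? v - hi : 0);
    return (long)d * d;
}

/* Squared distance from value v to the farthest point of [lo, hi]. */
static long axisMaxSq(int v, int lo, int hi) {
    int d1 = abs(v - lo), d2 = abs(v - hi);
    int d = d1 > d2 ? d1 : d2;
    return (long)d * d;
}

/* Build the candidate list for one grid cell (cr, cg, cb), each in 0..15,
   covering a 16x16x16 box of the color cube.  Entry i is a candidate iff
   its minimum distance to the cell is no larger than the smallest maximum
   distance of any entry -- otherwise some other entry beats it for every
   color in the cell.  Returns the list length (out needs n slots). */
static size_t cellCandidates(int cr, int cg, int cb,
                             const RGB *pal, size_t n, size_t *out) {
    int rlo = cr * 16, rhi = rlo + 15;
    int glo = cg * 16, ghi = glo + 15;
    int blo = cb * 16, bhi = blo + 15;
    long bestMax = LONG_MAX;
    for (size_t i = 0; i < n; i++) {
        long m = axisMaxSq(pal[i].r, rlo, rhi) + axisMaxSq(pal[i].g, glo, ghi)
               + axisMaxSq(pal[i].b, blo, bhi);
        if (m < bestMax) bestMax = m;
    }
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        long m = axisMinSq(pal[i].r, rlo, rhi) + axisMinSq(pal[i].g, glo, ghi)
               + axisMinSq(pal[i].b, blo, bhi);
        if (m <= bestMax) out[k++] = i;
    }
    return k;
}
```

At lookup time a pixel indexes its cell with (r/16, g/16, b/16) and scans only that cell's list; when the list has one member, there is no search at all.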

I understand what you're saying. That's why I scale the input image down to a lower resolution before processing. – user1139699 Feb 13 at 19:41

