How to erase part of an image as the user touches it?

I tried to do the same thing a while ago, using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects. So I turned to OpenCV (the Open Computer Vision Library), which I already knew how to work with; since it is written in C, I knew I could use it on the iPhone. Doing what you want to do with OpenCV is extremely easy.

First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and the other way around:

```objc
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    // This is the function you use to convert a UIImage -> IplImage
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height),
                                       IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData,
                                                    iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef,
                       CGRectMake(0, 0, image.size.width, image.size.height),
                       imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return iplimage;
}

+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    // Convert an IplImage -> UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels,
                                        image->widthStep, colorSpace,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
```

Now that you have both the basic functions you need, you can do whatever you want with your IplImage. This is what you want:

```objc
+ (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    // r is the radius of the erasing
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;
    // Clamp the circle's bounding box to the image bounds
    minX = (a - r > 0) ? a - r : 0;
    minY = (b - r > 0) ? b - r : 0;
    maxX = (a + r < image->width)  ? a + r : image->width;
    maxY = (b + r < image->height) ? b + r : image->height;
    for (int i = minX; i < maxX; i++) {
        for (int j = minY; j < maxY; j++) {
            position = (i - a) * (i - a) + (j - b) * (j - b); // squared distance to the touch point
            if (position <= r * r) {
                // Inside the circle: zero all four channels (RGBA) to erase the pixel
                uchar *ptr = (uchar *)(image->imageData) + (j * image->widthStep + i * image->nChannels);
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }
    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
```

Sorry for the formatting. If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's writeup. If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
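
For context, here is a minimal sketch of how the method above might be wired to touch events. It is not from the answer itself: the ImageProcessor class name, the workingImage ivar, and the imageView outlet are assumptions for illustration, and the touch point would need scaling if the view and image sizes differ.

```objc
// Hypothetical call site (not part of the original answer): feed each touch
// location into erasePointinUIImage:. ImageProcessor, workingImage, and
// imageView are illustrative names.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.imageView];
    // workingImage is the IplImage created once via CreateIplImageFromUIImage:
    self.imageView.image = [ImageProcessor erasePointinUIImage:workingImage :p :20]; // 20 px radius
}
```

Note that the erase mutates the IplImage in place, so keeping one IplImage alive across touches avoids re-converting the UIImage on every move.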

You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.

What I would do is have two views: one with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw it every time you modify the gray fill.

The gray one would be a UIView subclass backed by a CGBitmapContext that you draw into to clear the pixels the user touches. There are probably several ways to do this; I'm just suggesting one, sketched below.
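
A minimal sketch of that idea, under stated assumptions: the class name ScratchOverlayView and the brush radius are made up for illustration, and it matches the MRC-era Objective-C of the answers above.

```objc
// Hypothetical UIView subclass for the suggested approach: a gray layer
// backed by a CGBitmapContext whose alpha is cleared where the user touches.
@interface ScratchOverlayView : UIView {
    CGContextRef bitmapContext;
}
@end

@implementation ScratchOverlayView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        size_t w = (size_t)frame.size.width, h = (size_t)frame.size.height;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapContext = CGBitmapContextCreate(NULL, w, h, 8, w * 4, colorSpace,
                                              kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        // Start fully gray and opaque
        CGContextSetRGBFillColor(bitmapContext, 0.5, 0.5, 0.5, 1.0);
        CGContextFillRect(bitmapContext, CGRectMake(0, 0, w, h));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // Punch a transparent hole: clear blending writes zero alpha
    CGContextSetBlendMode(bitmapContext, kCGBlendModeClear);
    CGFloat r = 20.0; // brush radius, arbitrary choice
    // The bitmap's bottom-left origin and the vertical flip that
    // CGContextDrawImage applies in drawRect: cancel out, so the
    // UIKit touch point can be used directly.
    CGContextFillEllipseInRect(bitmapContext, CGRectMake(p.x - r, p.y - r, 2 * r, 2 * r));
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Blit the bitmap into the current context each time we are redrawn
    CGImageRef img = CGBitmapContextCreateImage(bitmapContext);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, img);
    CGImageRelease(img);
}

- (void)dealloc {
    CGContextRelease(bitmapContext);
    [super dealloc]; // MRC-era, matching the style of the answers above
}

@end
```

Layer this view on top of the image view, and only the overlay is redrawn per touch; the image underneath stays cached.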

Thanks for pointing me in the right direction. I was looking into that before, but couldn't find the right way to 1) create an ARGB bitmap (it seems like it is always RGB) and 2) manipulate a pixel's alpha value once I have the 2D array of pixel data. I'll keep digging and post what I find out. Thanks.

– Joel Jun 5 '10 at 14:52
It would be super handy if you posted your solution, Joel. (Or at least the pertinent drawRect methods.) – livingtech Dec 9 '10 at 23:50
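
For anyone hitting the same two snags the comments mention, here is a hedged sketch (not Joel's actual solution, which was never posted). Core Graphics has no plain "ARGB" format constant; an alpha-first premultiplied context over your own backing buffer covers both points. All names and sizes are illustrative.

```objc
// Sketch: (1) create a bitmap context with an alpha channel, and
// (2) zero one pixel's alpha directly in the backing buffer.
#import <CoreGraphics/CoreGraphics.h>
#import <stdlib.h>

static CGContextRef CreateAlphaFirstContext(size_t width, size_t height,
                                            unsigned char **outPixels) {
    size_t bytesPerRow = width * 4;
    unsigned char *pixels = calloc(height, bytesPerRow); // our own backing store
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Premultiplied alpha-first is the closest supported layout to "ARGB";
    // alpha is byte 0 of each 4-byte pixel.
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    *outPixels = pixels;
    return ctx;
}

static void ClearPixel(unsigned char *pixels, size_t bytesPerRow, size_t x, size_t y) {
    unsigned char *px = pixels + y * bytesPerRow + x * 4;
    // With premultiplied alpha, the color bytes must be zeroed along with
    // alpha, or the pixel holds an invalid premultiplied value.
    px[0] = px[1] = px[2] = px[3] = 0; // A, R, G, B
}
```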
