I am working on a paint app, taking reference from the GLPaint app, for iPhone and iPad. In this app I am filling colors into paint images by drawing lines onscreen based on where the user touches. The app works properly on iPhone.
On iPad, without zooming, the lines on the paint view are fine with no pixel distortion, but after zooming the lines on the paintView have distorted pixels, i.e. the OpenGL ES content is not high resolution. I am using the following code to initialize the paint view:

    - (id)initWithCoder:(NSCoder *)coder {
        CGImageRef brushImage;
        CGContextRef brushContext;
        GLubyte *brushData;
        size_t width, height;
        CGFloat components[3];

        if ((self = [super initWithCoder:coder])) {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
            eaglLayer.opaque = NO;
            eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

            context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
            if (!context || ![EAGLContext setCurrentContext:context]) {
                return nil;
            }

            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                brushImage = [UIImage imageNamed:@"circle 64.png"].CGImage;
            } else {
                brushImage = [UIImage imageNamed:@"flower 128.png"].CGImage;
            }

            // Get the width and height of the image
            width = CGImageGetWidth(brushImage);
            height = CGImageGetHeight(brushImage);

            if (brushImage) {
                // Allocate memory needed for the bitmap context
                brushData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
                // Use the bitmap creation function provided by the Core Graphics framework.
                brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4,
                    CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
                // After you create the context, you can draw the image to the context.
                CGContextDrawImage(brushContext,
                    CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                // You don't need the context at this point, so release it to avoid memory leaks.
                CGContextRelease(brushContext);
                // Use OpenGL ES to generate a name for the texture.
                glGenTextures(1, &brushTexture);
                // Bind the texture name.
                glBindTexture(GL_TEXTURE_2D, brushTexture);
                // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                // Specify a 2D texture image, providing a pointer to the image data in memory
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                // Release the image data; it's no longer needed
                free(brushData);
            }

            CGFloat scale;
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                NSLog(@"iPad");
                self.contentScaleFactor = 1.0;
                scale = self.contentScaleFactor;
            } else {
                // NSLog(@"iPhone");
                self.contentScaleFactor = 2.0;
            }
            //scale = 2.000000;

            // Setup OpenGL states
            glMatrixMode(GL_PROJECTION);
            CGRect frame = self.bounds;
            NSLog(@"Scale %f", scale);
            glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
            glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
            glMatrixMode(GL_MODELVIEW);

            glDisable(GL_DITHER);
            glEnable(GL_BLEND);
            glEnable(GL_TEXTURE_2D);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnable(GL_BLEND);
            // Set a blending function appropriate for premultiplied alpha pixel data
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
            glEnable(GL_POINT_SPRITE_OES);
            glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
            glPointSize(width / kBrushScale);

            // Make sure to start with a cleared buffer
            needsErase = YES;

            // Define a starting color
            HSL2RGB((CGFloat)0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity,
                    &components[0], &components[1], &components[2]);
            [self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
            boolEraser = NO;
        }
        return self;
    }

To create the framebuffer:

    - (BOOL)createFramebuffer {
        // Generate IDs for a framebuffer object and a color renderbuffer
        glGenFramebuffersOES(1, &viewFramebuffer);
        glGenRenderbuffersOES(1, &viewRenderbuffer);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

        // This call associates the storage for the current render buffer with the
        // EAGLDrawable (our CAEAGLLayer), allowing us to draw into a buffer that will
        // later be rendered to the screen wherever the layer is (which corresponds with our view).
        [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
            GL_RENDERBUFFER_OES, viewRenderbuffer);

        // Get the size of the backing CAEAGLLayer
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
        glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

        // For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
        glGenRenderbuffersOES(1, &depthRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
            GL_RENDERBUFFER_OES, depthRenderbuffer);

        if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete framebuffer object %x",
                glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
            return NO;
        }
        return YES;
    }

Lines are drawn using the following code:

    - (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
        static GLfloat *vertexBuffer = NULL;
        static NSUInteger vertexMax = 64;
        NSUInteger vertexCount = 0, count, i;

        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Convert locations from points to pixels
        //CGFloat scale = self.contentScaleFactor;
        CGFloat scale;
        scale = self.contentScaleFactor;
        NSLog(@"Scale %f", scale);
        start.x *= scale;
        start.y *= scale;
        end.x *= scale;
        end.y *= scale;

        float dx = end.x - start.x;
        float dy = end.y - start.y;
        float dist = (sqrtf(dx * dx + dy * dy) / kBrushPixelStep);

        // Allocate vertex array buffer
        if (vertexBuffer == NULL)
            vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

        count = MAX(ceilf(dist), 1);
        //NSLog(@"count %d", count);
        for (i = 0; i < count; ++i) {
            if (vertexCount == vertexMax) {
                vertexMax = 2 * vertexMax;
                vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
            }
            vertexBuffer[2 * vertexCount + 0] = start.x + dx * ((GLfloat)i / (GLfloat)count);
            vertexBuffer[2 * vertexCount + 1] = start.y + dy * ((GLfloat)i / (GLfloat)count);
            vertexCount += 1;
        }

        // Render the vertex array
        glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
        glDrawArrays(GL_POINTS, 0, vertexCount);

        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    }

On the iPad the content of the paint view is proper (high resolution) in the normal view, but after zooming I am not getting high-resolution content; the pixels of the lines look distorted. I have tried changing contentScaleFactor as well as the scale parameter in the code above to see the difference, but nothing worked as expected.
The iPad supports a contentScaleFactor of 1.0 and 1.5; when I set contentScaleFactor = 2 the paint view cannot display lines, it shows weird dotted lines. Is there any way to make the contents of OpenGL ES high resolution?

objective-c ios xcode ipad opengl-es

asked Mar 13 at 12:55 – user392406, edited Mar 13 at 17:07 – Mark
Your code is utterly ugly - get rid of those excessive blank lines! – Till Mar 13 at 14:41.
The short answer is YES, you can have "high resolution" content. But you will have to clearly understand the issue before solving it. This is the long answer: the brushes you use have a specific size (64 or 128 pixels).
As soon as your virtual paper (the area in which you draw) displays its pixels larger than one screen pixel, you will start to see the "distortion". For example, open one of your brushes in your favorite picture viewer and zoom in; the picture will be distorted in the same way. You cannot avoid that unless you use vector brushes (which are not in the scope of this answer and are far more complicated).
The quickest way would be to use more detailed brushes, but it is a fudge: if you zoom enough, the texture will look distorted as well. You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You only used MIN in your sample; adding this one will smooth the textures when they are magnified.
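For reference, the complete filter setup could look like this (a minimal sketch against the question's init code; `brushTexture` comes from there):

```objc
// Sketch: set both filters right after glBindTexture(GL_TEXTURE_2D, brushTexture).
// MIN_FILTER applies when the texture is drawn smaller than its native pixels,
// MAG_FILTER when it is drawn larger (i.e. when the user zooms in).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

Linear magnification only blurs the blockiness; it cannot add detail that the 64-pixel brush never had.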
Thanks for replying. In this code I've added the paintView to a UIScrollView for zooming. I am using a circular brush image; after adding GL_TEXTURE_MAG_FILTER it shows a rectangular shape. I think it is drawing the 64x64 rectangle, which is the size of the brush image. I've tried increasing the brush image to 128x128, but that only increases the size of the brush on the paint view, not its pixel quality. I have no clue how to improve the brush image's pixel quality so that it won't show distorted content on zoom. – user392406 Mar 14 at 5:50
It is a rather complex question.
Briefly, you will have to change your code to differentiate the brush size in your working space (which has its own size and resolution) from the brush resolution, e.g. use a 1024-pixel brush picture and generate mipmaps. In your painter, the user might select a size for the brush tool (related to the working space). If the working-space resolution is greater than the brush resolution, it will look fine. Play around with resolution and brushes in some popular image editor and you will find out how to achieve what you want.
– rockeye Mar 14 at 9:21.
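A sketch of what the mipmap suggestion could look like in the question's OpenGL ES 1.1 texture setup (this is an assumption about how you'd wire it in, not code from the question; GL_GENERATE_MIPMAP asks the driver to build the mipmap chain when the high-resolution brush is uploaded):

```objc
glBindTexture(GL_TEXTURE_2D, brushTexture);
// OpenGL ES 1.1: have the driver generate the mipmap chain automatically
// from the base level uploaded below.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
// Trilinear filtering between mipmap levels when minified, linear when magnified.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Upload the high-resolution brush (e.g. 1024x1024); mipmaps derive from it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, brushData);
```

With a 1024-pixel source, the brush stays sharp well past the zoom level at which a 64-pixel source falls apart.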
I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you use to create the renderbuffer in:

    - (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable

Once it is created there is no way to change the resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
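To illustrate the point about the backing store: the usual way to get a retina-sized renderbuffer is to set the view's contentScaleFactor to the screen scale before renderbufferStorage:fromDrawable: runs. A sketch, assuming the variable names from the question:

```objc
// In the view's init, before the framebuffer is created:
// backing-store pixels = bounds (in points) * contentScaleFactor.
self.contentScaleFactor = [UIScreen mainScreen].scale;  // 2.0 on retina devices

// createFramebuffer then allocates storage at that pixel size:
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
```

Note this only matches screen pixels one-to-one; it does not help once a scroll view magnifies the layer beyond that.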
It is hard to know exactly what problem you are trying to solve without knowing what zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView, and you are seeing pixel artifacts. This is inevitable, how else could it work?
If you want your lines to be smooth, you need to implement them using triangle-strip meshes with alpha blending at the edges, which provides antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices while keeping the CAEAGLLayer the same size. This eliminates pixelation and gives pretty alpha-blended edges.
OpenGL is a vector library – rockeye Mar 13 at 13:39
You are right, duly edited, although the basic line and point primitives are basically unusable on iOS. – Tark Mar 13 at 17:36.