I would like to know if it is possible to erase parts of any drawing in OpenGL. Let's say I have drawn two lines with my mouse and those lines are overlapping at some points.
Is it possible to erase just one line? Is there a more or less simple approach?
OpenGL does not store what you draw. If you draw a line in OpenGL, then OpenGL will take that line, perform various math operations on it, and write pixels into a framebuffer that makes the shape of a line. OpenGL does not remember that you drew a line; all OpenGL can do is write pixels to the framebuffer.
The general idea is that it is up to the user of OpenGL to remember what they drew. So if you draw two lines, you should remember the coordinates you gave for those two lines. Therefore, if you want to "erase" a line, what you do is clear the screen and redraw everything except that line.
This isn't as silly as it may sound. Many OpenGL applications are constantly redrawing the screen. They show a frame, draw a new frame, then show that frame, etc. This provides the possibility for animation: changing what gets drawn and where it gets drawn from frame to frame. This creates the illusion of movement.
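The bookkeeping for "remember what you drew" can be as simple as an array of line segments: "erasing" a line just means removing it from the array before the next redraw. A minimal sketch in C (the `Line` struct and function names are made up for illustration; the actual GL drawing is left out):

```c
#include <assert.h>
#include <stddef.h>

typedef struct { float x0, y0, x1, y1; } Line;

#define MAX_LINES 256
static Line lines[MAX_LINES];
static size_t line_count = 0;

/* Remember a line so it can be redrawn (or erased) later. */
static void add_line(float x0, float y0, float x1, float y1) {
    if (line_count < MAX_LINES)
        lines[line_count++] = (Line){x0, y0, x1, y1};
}

/* "Erase" line i: forget it; the next redraw simply won't include it. */
static void erase_line(size_t i) {
    if (i < line_count)
        lines[i] = lines[--line_count];  /* swap-with-last; draw order doesn't matter here */
}
```

Each frame you would then clear the framebuffer and issue draw calls for every line still in the array.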
You can use glLogicOp with GL_XOR, then repaint the line to erase it. It's not a general solution, but it is a good fit for marquee selection or mouse tool overlays, where it was traditionally used. Note that you'll need to either use single-buffering, or copy between the back and the front buffer rather than swapping them.
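The XOR trick works because XOR-ing the same value twice restores the original bits, so drawing the line a second time erases it. In OpenGL you would enable it with `glEnable(GL_COLOR_LOGIC_OP)` and `glLogicOp(GL_XOR)`; the round-trip property itself can be shown without a GL context:

```c
#include <assert.h>
#include <stdint.h>

/* XOR-"draw" a pen color into a pixel, as glLogicOp(GL_XOR) would.
   Applying the same pen twice restores the original pixel. */
static uint32_t xor_draw(uint32_t pixel, uint32_t pen) {
    return pixel ^ pen;
}
```

This is why the erase is exact: no background needs to be remembered, only the line's own coordinates.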
There are two approaches, if I understood you correctly.
Repaint only the elements you need; each element must have a boolean indicating whether it will be painted or not.
In case you need to erase exactly one part of the window, use glScissor.
Info:
Now for something new: a wonderful GL command called glScissor(x, y, w, h). What this command does is create almost what you would call a window. When GL_SCISSOR_TEST is enabled, the only portion of the screen that you can alter is the portion inside the scissor window.
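In other words, the scissor test rejects any fragment outside the box before it can touch the framebuffer; a typical use is `glEnable(GL_SCISSOR_TEST); glScissor(x, y, w, h); glClear(GL_COLOR_BUFFER_BIT);` to wipe only that region. The test itself is just a rectangle containment check, sketched here for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* The scissor test: a pixel may only be altered if it lies inside
   the box (x, y, w, h), with (x, y) the lower-left corner. */
static bool scissor_allows(int px, int py, int x, int y, int w, int h) {
    return px >= x && px < x + w && py >= y && py < y + h;
}
```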
You need to clear the buffer and redraw the lines you want. For that you probably need to store the line data in some structure.
You can try stencil buffer techniques. See John Carmack's stencil shadow volume technique (the depth-fail variant, sometimes called "Carmack's reverse").
Related
Using OpenGL, I am making a simple animation where a small triangle will move along the path that I have created with the mouse (glutMotionFunc).
So the problem is: how can I animate the small triangle without redrawing the whole path with glutSwapBuffers()?
And also, how can I rotate only that triangle?
I don't want to use an overlay, as switching between these two layers takes too much time.
If redrawing the whole path is really too expensive, you can do your rendering to an off-screen framebuffer. The mechanism to do this with OpenGL is called a Framebuffer Object (FBO).
Explaining how to use FBOs in detail is beyond the scope of an answer here, but you should be able to find tutorials. You will be using functions like:
glGenFramebuffers()
glBindFramebuffer()
glFramebufferRenderbuffer() or glFramebufferTexture()
This way, you can draw just the additional triangle to your FBO whenever a new triangle is added. To show your rendering on screen, you can copy the current content of the FBO to the primary framebuffer using glBlitFramebuffer().
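A rough outline of the calls involved, with error checking and texture parameters omitted (this is a sketch, not a complete program; it assumes a current GL 3+ context and `width`/`height` variables from your own code):

```c
GLuint fbo, colorTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Color attachment the FBO renders into. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* ... draw only the newly added triangle into the FBO ... */

/* Copy the accumulated image to the default framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```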
You can't! It simply doesn't make sense!
The way computer screens work is the same as in film: FPS, frames per second. There is no such thing as "animation" on a screen; it is just a fast series of static images, but because our eyes cannot follow things that change so quickly, it looks like motion.
This means that every time something changes in the scene you want to draw, you need to create a new "static image" of that state, and that is done with all the glVertex and related calls. Once you finish drawing and want to put it on the screen, you swap your buffers.
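With GLUT, that "new static image every frame" pattern looks roughly like the following sketch (here `draw_path`, `draw_triangle_at`, and the animation parameter `t` stand in for your own code and state):

```c
float t = 0.0f;  /* position of the triangle along the path */

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    draw_path();            /* redraw the stored mouse path (your code) */
    draw_triangle_at(t);    /* triangle at its current position (your code) */
    glutSwapBuffers();      /* show the finished frame */
}

void idle(void) {
    t += 0.01f;             /* advance the animation */
    glutPostRedisplay();    /* ask GLUT to call display() again */
}
```

Rotating only the triangle is then a matter of wrapping its draw call in `glPushMatrix()` / `glRotatef(...)` / `glPopMatrix()` so the transform does not affect the path.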
I want to create two viewports.
The first one shows the scene on screen as normal.
The second one is only for saving to an image, e.g. saving a new image once per minute.
How can I do that?
Thank you~
Two methods you can use:
1) A framebuffer object (http://www.opengl.org/wiki/Framebuffer), which is essentially a second framebuffer that you can direct drawing calls to. Any rendering done to the framebuffer is saved to an attached texture object, which you can then grab and do whatever you want with. As a side note, you can also use framebuffers for full-screen effects (bloom, anti-aliasing, etc.)
2) More likely, however, you're looking for glReadPixels (https://www.opengl.org/sdk/docs/man/html/glReadPixels.xhtml). This is a quick function call you can make after you've finished your drawing calls but before you swap your buffers (assuming you're drawing with a double-buffered context). This function copies a rectangle of pixels from the framebuffer you just drew to and gives them to you in an array, which you can again do whatever you want with. The nice thing about this is that you don't have to put up with the hassle of creating a second "viewport" (I think you meant framebuffer?); you can just copy the pixels off the same framebuffer that you'll eventually show the user.
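The glReadPixels path is only a few lines; a sketch (assumes a current GL context and `width`/`height` from your own code, and leaves the actual image-file writing to you):

```c
/* After drawing, before swapping buffers: grab the pixels. */
unsigned char *pixels = malloc((size_t)width * height * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* ... write `pixels` to an image file, e.g. once per minute ... */
free(pixels);
```

Note that the rows come back bottom-to-top, so most image writers will need them flipped.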
Let me know if you have any questions!
I'm using OpenGL to optimize GUI rendering. When rendering the whole scene it works fine, but that's not optimal, since often only a small part is changed. So I tried this:
glReadBuffer (GL_FRONT);
glDrawBuffer (GL_BACK);
glCopyPixels (0, 0, sz.X, sz.Y, GL_COLOR);
glFlush();
This should copy the front buffer to the back buffer, so that afterwards I can redraw just a portion limited using glViewport. Unfortunately, when the scene changes, it looks like the glCopyPixels command is performed after the actual rendering, so that the original content sort of alpha-blends with the new graphics.
What is wrong? Or is there a better way to do this?
(for the record, when I do nothing, the front buffer starts blinking with back buffer and stuff like that...)
but that's not optimal
What makes you think that? OpenGL and modern GPUs are designed on the grounds that in the worst case you have to redraw the whole thing anyway, and they should perform well in that situation, too.
Or is there a better way to do this?
Yes: Redraw the whole scene. (Or what I suggest below)
To a modern low-end GPU, which is easily capable of throwing tens of millions of triangles at the screen per second, the few hundred to a thousand triangles of a 2D GUI are negligible.
In fact, your copying-stuff-around will probably be a worse performance hit than redrawing everything, because copying from the front to the back buffer is not a very fast operation and causes serious synchronization issues.
If you want to cache things, you might split your GUI into separate widgets, each drawn individually into a texture using an FBO – design it so that widgets may overlap. You redraw a widget only when its contents change. To draw the whole window, you just compose the full window contents from the textures into the main framebuffer.
Ok so it seems this solves the problem:
glDisable( GL_BLEND );
I don't know why, but apparently it was blending the data before, even though I didn't find anything about that in the docs, and "glCopyPixels" doesn't seem like it should do that.
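For reference, the working copy would then look like this sketch (same calls as above, with blending switched off first so the copy is a straight overwrite):

```c
glDisable(GL_BLEND);   /* make the copy a plain overwrite, not a blend */
glReadBuffer(GL_FRONT);
glDrawBuffer(GL_BACK);
glCopyPixels(0, 0, sz.X, sz.Y, GL_COLOR);
glFlush();
```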
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using a lot could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL does know multiple kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. OpenGL by default has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer, which is swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (the Z-sorting implementation) and a stencil buffer used to limit rendering to selected, cut-out (= stencil-like) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well; however, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data, which can be sourced in drawing operations.
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture to a single quad and overdraw it with your selection rubber-band rectangle.
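That copy-then-overdraw idea can be sketched as follows (legacy-GL style to match the question; assumes an existing `sceneTex` texture of the window's `width` x `height`, and `draw_fullscreen_quad` / `draw_selection_rect` stand in for your own drawing code):

```c
/* Once, after the polygons are drawn: snapshot the scene into a texture. */
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

/* Each frame while the rubber band is active: */
glClear(GL_COLOR_BUFFER_BIT);
draw_fullscreen_quad(sceneTex);   /* restore the saved polygons */
draw_selection_rect();            /* overdraw the rubber band */
```

Pressing escape then just means drawing the quad without the rectangle: the "original polygons" come back for free.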
I'm working on an OpenGL 3 renderer for a GUI toolkit called Gwen. I nearly have everything working, but I'm having some issues getting everything to blend correctly. I've sorted the triangles by which texture they use and packed them into a VBO, so with the Unit Test, it basically boils down into 3 layers: Filled Rects with no texture, Text, and the windows, buttons, etc that use a skin texture.
The Filled Rects are usually drawn on top of everything else and blended in, but the background behind everything is also a Filled Rect, so I can't count on that. There is a Z-value conflict if you draw them last (ex: the windows have a textured shadow around the edges that turns black because the background fails the depth test) and a blending/z-value conflict if you draw them first (ex: some of the selection highlights get drawn on top of instead of blending like they're supposed to).
I can't count on being able to identify any specific layer except the Filled Rects. The different layers have a mix of z-values, so I can't just draw them in a certain order to make things work. While writing this, I thought of a simple method of drawing the triangles sorted back to front, but it could mean lots of little draw calls, which I'm hoping to avoid. Is there some method that involves some voodoo magic blending that would let me keep my big batches of triangles?
You're drawing a GUI; batching shouldn't be your first priority for the simple fact that a GUI just doesn't do much. A GUI will almost never be your performance bottleneck. This smells of premature optimization; first, get it to work. Then, if it's too slow, make it work faster.
There is no simple mechanism for order-independent transparency. Your best bet is to just render things in the proper Z order.
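"Proper Z order" here means sorting the transparent primitives back to front (farthest first) before issuing the draw calls, so each blend operates over an already-finished background. The sort itself is a one-liner over your batched primitives; a minimal sketch with an illustrative `Quad` struct:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { float z; /* plus vertex data, texture id, ... */ } Quad;

/* Back-to-front: larger depth (farther away) sorts first. */
static int cmp_back_to_front(const void *a, const void *b) {
    float za = ((const Quad *)a)->z, zb = ((const Quad *)b)->z;
    return (za < zb) - (za > zb);
}

static void sort_for_blending(Quad *quads, size_t n) {
    qsort(quads, n, sizeof *quads, cmp_back_to_front);
}
```

Sorted runs that share a texture can still go into one draw call, so this doesn't necessarily degenerate into one call per quad.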