I am developing a paint-like application using C++ and OpenGL, but every time I draw objects like circles, lines etc. they don't **stay** on the page. By this I mean that every new object I draw is placed on a blank page. How do I get my drawn objects to persist?
OpenGL has no geometry persistence. Basically it's pencils, brushes and paint, with which you draw on a canvas called the "framebuffer". So after you've drawn something and cleared the framebuffer, it will not reappear in some magic way.
There are two solutions:
You keep a list of all drawing operations, and at each redraw you repaint everything from that list.
After drawing something, copy the image in the framebuffer to a texture; then instead of glClear you fill the background with that texture.
Both techniques can be combined.
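A minimal sketch of the first approach, assuming a GLUT window with a pixel-space orthographic projection already set up; the Circle record, the circles list and onClick are made-up names used only for illustration:

#include <GL/glut.h>
#include <cmath>
#include <vector>

// Hypothetical record of one drawing operation; a real app would also store
// lines, colors, brush sizes, etc.
struct Circle { float x, y, r; };
static std::vector<Circle> circles;            // everything drawn so far

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    for (const Circle& c : circles)            // replay the whole history
    {
        glBegin(GL_LINE_LOOP);
        for (int i = 0; i < 64; ++i)
        {
            float a = 2.0f * 3.14159265f * i / 64.0f;
            glVertex2f(c.x + c.r * std::cos(a), c.y + c.r * std::sin(a));
        }
        glEnd();
    }
    glutSwapBuffers();
}

void onClick(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    {
        // Record the operation instead of drawing it immediately ...
        circles.push_back({ (float)x, (float)y, 20.0f });
        glutPostRedisplay();                   // ... and trigger a full repaint
    }
}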
Just don't clear the framebuffer, and anything you draw will stay on the screen. This is the same method I use to allow users to draw on my OpenGL models. It is only good for marking up an image, since with this method you can't erase what you've drawn, unless your way of erasing is to draw over it in your background color.
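If you go that route, a tiny sketch could look like the following, assuming a single-buffered context (e.g. glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)); with double buffering the buffer contents after a swap are not guaranteed to survive:

// Draw incrementally and never call glClear, so earlier strokes stay visible.
void addStroke(float x, float y)
{
    glBegin(GL_POINTS);
    glVertex2f(x, y);          // adds to whatever is already in the framebuffer
    glEnd();
    glFlush();                 // no clear, no swap: previous drawing persists
}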
I'm learning OpenGL here: https://learnopengl.com/#!Advanced-OpenGL/Cubemaps
I got the skybox working. If I draw it first, everything is fine. However, to reduce the number of pixels written for it, I try to draw it last. But when you look at the skybox through transparent objects, it is not displayed. If I draw the skybox before the transparent objects, then they are not displayed. How can I fix this?
Transparency is not order independent. You cannot draw something "behind" an already drawn surface. You will have to draw the skybox (at least) before you draw your transparent objects.
Note that you also have to order your transparent objects back to front if it should be possible to see correctly through several of them.
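As a sketch of that ordering (drawSkybox, drawOpaque, drawTransparent and the Object type are placeholders, not functions from the tutorial):

#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct Object { float distanceToCamera; /* ... */ };
void drawSkybox();
void drawOpaque();
void drawTransparent(const Object&);

void renderFrame(std::vector<Object>& transparentObjects)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawSkybox();                        // before anything you should see through
    drawOpaque();

    // Sort back to front so blending through several surfaces looks right.
    std::sort(transparentObjects.begin(), transparentObjects.end(),
              [](const Object& a, const Object& b)
              { return a.distanceToCamera > b.distanceToCamera; });

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);               // transparent objects don't write depth
    for (const Object& o : transparentObjects)
        drawTransparent(o);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}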
I'm using Qt5 and its OpenGL integration, and am running into a problem when I try to draw translucent objects. When an object is translucent, whatever is visible behind my OpenGL window is shown within the screen area of that object, instead of the object being blended with whatever is already in the colour buffer. I have started watching YouTube videos through my translucent objects, as whatever shows through is live.
Interestingly, the most see-through an object gets seems to occur at half opacity - full opacity renders it solid, while zero opacity renders nothing at all (and whatever was previously in the background of the 3D scene remains there). Rendering translucent objects last does not fix the issue.
I have noticed that this also happens when I enable mipmaps on my textures - as the distance to a point on an object increases, the pixel concerned becomes more translucent and displays whatever is behind the OpenGL window. The issue occurs both on my Windows and OSX machines.
Is this a known issue? Is there a workaround? Google hasn't proven too helpful.
Hah, that's a funny one. I can't tell you what is going on, because it normally takes some extra effort to make windows actually transparent; on Windows you have to select a framebuffer format with an alpha channel and call DwmEnableBlurBehindWindow to actually achieve this effect. And as far as I know Qt doesn't do this.
But if it does, here are a few hints:
Make sure you clear your framebuffer to alpha=1
When rendering translucent objects, keep the destination alpha value at 1, i.e. don't use blending modes and functions that modify the destination alpha value, or force it to 1.
There's actually little use for an alpha channel on the main window framebuffer, except for implementing window translucency effects. Unless you need those, you should either choose a pixel format without an alpha channel for your window framebuffer, or keep all its pixels' alpha values at full opacity.
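A short sketch of those hints, assuming ordinary fixed-function blending is available (glBlendFuncSeparate requires OpenGL 1.4 or newer):

// Clear the destination alpha to fully opaque ...
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// ... and blend only the RGB channels, leaving the destination alpha untouched,
// so the window never becomes see-through even if its format has an alpha channel.
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ZERO, GL_ONE);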
Using OpenGL, I am making a simple animation where a small triangle moves along a path that I have created with the mouse (glutMotionFunc).
So the problem is how I can animate the small triangle without redrawing the whole path every time I call glutSwapBuffers().
And also, how can I rotate only that triangle?
I don't want to use an overlay, as switching between those two layers takes too much time.
If redrawing the whole path is really too expensive, you can do your rendering to an off-screen framebuffer. The mechanism for doing this in OpenGL is called a Frame Buffer Object (FBO).
Explaining how to use FBOs in detail is beyond the scope of an answer here, but you should be able to find tutorials. You will be using functions like:
glGenFramebuffers()
glBindFramebuffer()
glFramebufferRenderbuffer() or glFramebufferTexture()
This way, you can draw just the additional triangle to your FBO whenever a new triangle is added. To show your rendering on screen, you can copy the current content of the FBO to the primary framebuffer using glBlitFramebuffer().
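A rough sketch of that setup, using core (OpenGL 3.0+) entry points; width, height, drawNewPathSegment and drawTriangle are placeholders for your own values and drawing code:

GLuint fbo = 0, colorTex = 0;

// One-time setup: a texture-backed FBO the size of the window.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Whenever the mouse adds a new piece of the path, draw only that piece into the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
drawNewPathSegment();

// Each frame: copy the accumulated path to the screen, then draw the triangle on top.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawTriangle();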
You can't! Because it just does not make sense!
Computer screens work the same way as film: fps! Frames per second. There is no such thing as "animation" on a screen, it is just a fast series of static images, but because our eyes cannot follow things moving that fast, it looks like motion.
This means that every time something changes in the thing you want to draw, you need to create a new "static image" of that state, and that is done with all the glVertex and similar pieces of code. Once you finish drawing, you want to put it on the screen, so you swap your buffers.
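In practice each frame of such an animation usually looks like the sketch below (fixed-function GL plus GLUT; drawPath, drawTriangle, triX, triY and triAngle are placeholder names):

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);

    drawPath();                              // everything that was there before

    glPushMatrix();                          // rotate/translate only the triangle
    glTranslatef(triX, triY, 0.0f);
    glRotatef(triAngle, 0.0f, 0.0f, 1.0f);
    drawTriangle();
    glPopMatrix();

    glutSwapBuffers();                       // present the finished "static image"
}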
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, and that using several of them could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing the pixel values in the buffer. By default OpenGL has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (the Z-sorting implementation) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There also used to be auxiliary and accumulation buffers, but those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
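For reference, the user-created kinds from this list are made with calls like the following (a sketch only; width, height and vertices stand in for your own data):

GLuint rbo = 0, vbo = 0, tex = 0;

glGenRenderbuffers(1, &rbo);                     // renderbuffer (FBO attachment)
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenBuffers(1, &vbo);                           // vertex buffer object
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glGenTextures(1, &tex);                          // texture
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);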
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture to a single quad and overdraw it with your selection rubber-band rectangle.
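A sketch of that snapshot trick, assuming a window-sized texture snapshotTex already exists and drawSelectionRectangle is your own code:

// Take a snapshot of what is currently in the framebuffer ...
glBindTexture(GL_TEXTURE_2D, snapshotTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winWidth, winHeight);

// ... and on later frames redraw that snapshot instead of every polygon,
// then overdraw the rubber-band rectangle. Pressing Escape then just means
// redrawing the snapshot without the rectangle.
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
drawSelectionRectangle();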
I am rendering an OpenGL scene that includes some bitmap text. It is my understanding that the order I draw things in determines which items appear on top.
However, my bitmap text, even though I draw it last, is not on top!
For instance, I am drawing:
1) Background
2) Buttons
3) Text
All at the same z depth. The buttons appear above the background, but the text is invisible. If I change the z depth of the text, I can see it, but then I have other problems.
I am using the bitmap text method from NeHe's tutorials.
How can I make the text visible without changing the z depth?
You can simply disable the z-test via
glDisable(GL_DEPTH_TEST);
If you do so, the Z of your text primitives will be ignored. Primitives are drawn in the same order as you call the GL functions.
Another way would be to set a constant z-offset via glPolygonOffset (not recommended), or to set the depth-compare mode to GL_LEQUAL (the EQUAL part is the important one). That makes sure that primitives drawn at the same depth are rendered on top of each other.
Hope that helps.
You can also use glDepthFunc(GL_ALWAYS).
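Either way, the draw order for the question above would look roughly like this (drawBackground, drawButtons and drawBitmapText stand in for the existing drawing code):

drawBackground();
drawButtons();

glDisable(GL_DEPTH_TEST);        // or: glDepthFunc(GL_ALWAYS);
drawBitmapText();                // text wins regardless of its z depth
glEnable(GL_DEPTH_TEST);         // restore normal depth testing afterwards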