How to make a step-by-step display animation in OpenGL? - c++

How do I make a step-by-step display animation in OpenGL?
I'm working on a RepRap printer project that reads a G-code file and interprets it into graphics.
Now I'm having difficulty making a step-by-step animation of drawing the whole object.
I need to draw many short lines to make up the whole object.
For example:
|-----|
|     |
|     |
|-----|
The square is made up of many short lines, and each line is generated by code like:
glPushMatrix();
.....
for (int i = 0; i < instruction.size(); i++)
{
    ....
    glBegin(GL_LINES);
    glVertex3f(oldx, oldy, oldz);  // start point of this segment
    glVertex3f(x, y, z);           // end point of this segment
    glEnd();
}
glPopMatrix();
Now I want to make a step-by-step animation to display how this square is built up. I tried to refresh the screen each time a new line is drawn, but it doesn't work; the whole square just comes out at once. Does anyone know how to do this?

Typical OpenGL implementations will queue up a large number of calls to batch them together into bursts of activity, to make optimal use of the available communication bandwidth and GPU time.
What you want to do is basically the opposite of double-buffered rendering, i.e. rendering where each drawing step is immediately visible. One way to do this is to render to a single-buffered window and call glFinish() after each step. Major drawback: it's likely not to work well on modern systems, which use compositing window managers and similar.
Another approach, which I recommend, is using a separate buffer for incremental drawing and constantly refreshing the main framebuffer from it. The key subjects are Frame Buffer Objects (FBOs) and render-to-texture.
First you create an FBO (there are tons of tutorials out there, and answers on Stack Overflow). An FBO is basically an abstraction to which you can connect target buffers, like textures or renderbuffers, and which can be bound as the destination of drawing calls.
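For reference, creating one might look roughly like this (a minimal sketch assuming an OpenGL 3.0+ context or the framebuffer extension; animFBO, animFBOTexture and fbo.width/fbo.height match the snippets further down, everything else is illustrative and error handling is omitted):
GLuint animFBO, animFBOTexture;

/* color texture that will receive the accumulated drawing */
glGenTextures(1, &animFBOTexture);
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fbo.width, fbo.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* allocate storage only, no data yet */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* FBO with that texture as the color attachment */
glGenFramebuffers(1, &animFBO);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, animFBOTexture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    { /* handle the error */ }

/* clear it once, right after creation (see below why only once) */
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glBindFramebuffer(GL_FRAMEBUFFER, 0);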
So how do you solve your problem with them? First, you should not do the animation by delaying a drawing loop. There are several reasons for this, but the main issue is that you lose program interactivity. Instead you maintain a (global) counter for the step of the animation you are at. Let's call it step:
int step = 0;
Then in your drawing function you have two phases: 1) texture update, 2) screen refresh.
Phase one consists of binding your framebuffer object as the render target. For this to work, the target texture must be unbound:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glViewport(0, 0, fbo.width, fbo.height);
set_animFBO_projection();
The trick now is that you clear the animFBO only once, namely right after creation, and then never again. Now you draw your lines according to the animation step
draw_lines_for_step(step);
and increment the step counter (could do this as a compound statement, but this is more explicit)
step++;
After updating the animation FBO it's time to update the screen. First unbind the animFBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
We're now on the main, on-screen framebuffer
glViewport(0, 0, window.width, window.height);
set_window_projection(); //most likely a glMatrixMode(GL_PROJECTION); glOrtho(0, 1, 0, 1, -1, 1);
Now bind the FBO-attached texture and draw it to a full-viewport quad
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
draw_full_viewport_textured_quad();
Finally do the buffer swap to show the animation step iteration
SwapBuffers();
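Put together, the display function might look roughly like this (a sketch only; set_animFBO_projection, draw_lines_for_step, set_window_projection and draw_full_viewport_textured_quad are the placeholder helpers used above, not real API):
void display(void)
{
    /* Phase 1: render the next step into the accumulation texture */
    glBindTexture(GL_TEXTURE_2D, 0);        /* target texture must not be bound */
    glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
    glViewport(0, 0, fbo.width, fbo.height);
    set_animFBO_projection();
    draw_lines_for_step(step);              /* no glClear here, earlier lines stay */
    step++;

    /* Phase 2: show the accumulated texture on screen */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, window.width, window.height);
    set_window_projection();
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, animFBOTexture);
    draw_full_viewport_textured_quad();
    SwapBuffers();
}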

You should call SwapBuffers after each draw call.
Be sure you don't mess up the matrix stack, and you'll probably need something to "pause" the rendering, like a breakpoint.

If you only want the lines to appear one after another and you don't need to be nit-picky about efficiency or good programming style, try something like this:
(in your drawing routine)
if (timer > 100)
{
    // reveal the next line (remember it, e.g. in a boolean array "lineDrawn[10]")
    timer = 0;
}
else
    timer++;
// then draw all the lines that have already appeared,
// i.e. every line whose lineDrawn[] entry is true
The timer is an integer that counts how often you have drawn the scene. If you make the threshold larger, things happen more slowly on the screen when you run your program.
Of course this only works if you have a draw routine. If you don't, I strongly suggest using one; there are plenty of tutorials pretty much everywhere, e.g.
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_%28win32%29/13001/
Good luck to you!
PS: I think you have done nearly the same thing, but without a timer; that's why everything was drawn so fast that you thought it all appeared at the same time.

Related

Libgdx shader that affects whole screen

I'm making a game in Libgdx.
The only way I have ever known how to use shaders is to have the batch affect the given textures one after another. This is what I normally do in my code:
shader = new ShaderProgram(Gdx.files.internal("shaders/shader.vert"), Gdx.files.internal("shaders/shader.frag"));
batch.setShader(shader);
And that's about all of the needed code.
Anyway, I don't want this separation between textures. However, I can't find any way to affect the whole screen at once with a shader, as if the whole screen were just one big texture. To me, that seems like the most logical way to use a shader.
So, does anyone know how to do something like this?
Draw all textures (players, actors, landscape, ...) with the same batch, and if you want the shader to also affect the background, draw a still texture the size of the screen in the background with the same batch.
This is quite easy with FBO objects; you can get "the whole screen as just one big texture", as you said in your question:
First of all, before any rendering, create your FBO object and begin it:
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, Width, Height, false);
fbo.begin();
Then do all of your normal rendering:
Gdx.gl.glClearColor(0.2f, 0.2f, 0.2f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
...
Batch b = new SpriteBatch(...
//Whatever rendering code you have
Finally save that FBO into a texture or sprite, do any transformation needed on it, and prepare and use your shader on it.
fbo.end();
SpriteBatch b = new SpriteBatch();
Sprite s = new Sprite(fbo.getColorBufferTexture());
s.flip(false,true); //Coord systems in the buffer and on screen differ
b.setShader(your_shader);
b.begin();
your_shader.setUniformMatrix("u_projTrans",camera.combined); //if you have camera
viewport.apply(); //if you have viewport
b.draw(s,0,0,viewportWidth,viewportHeight);
b.end();
b.setShader(null);
And this is all!
Essentially, what you are doing is "rendering" all your assets, game scene and stages into a buffer, then saving that buffer image into a texture, and finally rendering that texture with the shader effect you want.
As you may notice, this is highly inefficient, since you are copying your whole screen into a buffer. Also note that some older drivers only support power-of-two sizes for the FBO, so you may have to keep that in mind.

Zoom window in OpenGL

I've implemented Game of Life using OpenGL buffers (as specified here: http://www.glprogramming.com/red/chapter14.html#name20). In this implementation each pixel is a cell in the game.
My program receives the initial state of the game (a 2D array). The array size, in my implementation, is the size of the window. This of course makes it "unplayable" if the array is 5x5 or some other small value.
At each iteration I'm reading the content of the framebuffer into a 2D array (its size is the window size):
glReadPixels(0, 0, win_x, win_y, GL_RGB, GL_UNSIGNED_BYTE, image);
Then, I'm doing the necessary steps to calculate the living and dead cells, and then draw a rectangle which covers the whole window, using:
glRectf(0, 0, win_x, win_y);
I want to zoom (or enlarge) the window without affecting the correctness of my code. If I resize the window, then the framebuffer content won't fit inside image (the array). Is there a way of zooming the window (so that each pixel is drawn as several pixels) without affecting the framebuffer?
First, you seem to be learning OpenGL 2; I would suggest learning a newer version instead, as it is more powerful and efficient. A good tutorial can be found here: http://www.opengl-tutorial.org/
If I understand this correctly, you read in an initial state and draw it, then continuously read back the pixels on the screen, update the array based on the Game of Life logic, and then draw it again? This seems overly complicated.
Reading the pixels back from the screen is unnecessary, and it will cause complications if you try to enlarge the rects to more than a pixel.
I would say a good solution is to keep a bit array (1 is an organism, 0 is not), possibly as a 2D array in memory, update the logic every, say, 30 iterations (for 30 fps), and then draw all the rects to the screen, black for 1 and white for 0, using glColor(r,g,b,a) tied to an if statement in a nested for loop.
Then, if you give your rects a negative z coordinate, you can "zoom in" using glTranslatef(x, y, z) triggered by a keyboard button.
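A rough sketch of what that could look like in old-style OpenGL (the names cells, GRID_W, GRID_H, CELL_SIZE and zoomZ are made up for the example; note the z translation only produces a visible zoom with a perspective projection):
#define GRID_W 64
#define GRID_H 64
#define CELL_SIZE 8.0f

int   cells[GRID_H][GRID_W];   /* 1 = organism, 0 = dead cell */
float zoomZ = 0.0f;            /* changed by keyboard input to zoom */

void drawGrid(void)
{
    glPushMatrix();
    glTranslatef(0.0f, 0.0f, zoomZ);
    for (int y = 0; y < GRID_H; ++y)
        for (int x = 0; x < GRID_W; ++x)
        {
            if (cells[y][x])
                glColor3f(0.0f, 0.0f, 0.0f);   /* black for a living cell */
            else
                glColor3f(1.0f, 1.0f, 1.0f);   /* white for a dead cell */
            glRectf(x * CELL_SIZE,       y * CELL_SIZE,
                    (x + 1) * CELL_SIZE, (y + 1) * CELL_SIZE);
        }
    glPopMatrix();
}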
Of course, in a newer version of OpenGL, vertex buffers would make the code much cleaner and more efficient.
You can't store your game state directly in the window framebuffer and then resize it for rendering, since what is stored in the framebuffer is by definition what is about to be rendered. (You could overwrite it, but then you lose your game state...) The simplest solution would be to store the game state in an array (on the client side) and then update a texture based on that. Thus for each cell that is set, you set a pixel in the texture to the appropriate color. Each frame, you then render a full-screen quad with that texture (with GL_NEAREST filtering).
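In code, the per-frame texture update could look something like this (a sketch; gridTexture is assumed to be a texture created once with glTexImage2D at the grid size, GRID_W/GRID_H are the grid dimensions, and pixels is filled from the game-state array):
unsigned char pixels[GRID_H][GRID_W][3];   /* 0 or 255 per channel, from the game state */

glBindTexture(GL_TEXTURE_2D, gridTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); /* no blur between cells */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, GRID_W, GRID_H,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);   /* upload the new state */
/* ...then draw one full-screen quad with this texture bound */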
However, if you want to take advantage of your GPU there are some tricks that could massively speed up the simulation by using a fragment shader to generate the texture. In this case you would actually have two textures that you ping-pong between: one containing the current game state, and the other containing the next game state. Each frame you would use your fragment shader (along with a FBO) to generate the next state texture from the current state texture. Afterwards, the two textures are swapped, making the next state become the current state. The current state texture would then be rendered to the screen the same way as above.
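Schematically, one ping-pong step might look like this (a sketch only: stateTex[2]/stateFBO[2] are assumed to be two pre-created textures with attached FBOs at the grid resolution, lifeShader a fragment-shader program implementing the Game of Life rule, and drawFullscreenQuad a helper that draws one screen-filling quad):
static int cur = 0, next = 1;

/* compute the next state from the current one */
glBindFramebuffer(GL_FRAMEBUFFER, stateFBO[next]);
glViewport(0, 0, GRID_W, GRID_H);
glUseProgram(lifeShader);
glBindTexture(GL_TEXTURE_2D, stateTex[cur]);
drawFullscreenQuad();

/* show the result and swap the roles of the two textures */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, win_x, win_y);
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, stateTex[next]);
drawFullscreenQuad();
int tmp = cur; cur = next; next = tmp;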
I tried to give an overview of how you might be able to offload the computation onto the GPU, but if I was unclear anywhere just ask! For a more detailed explanation feel free to ask another question.

How to zoom-in/out to an OpenGL screen without rendering the entire screen again

I am writing a program to display 5 million rectangles, rendered with OpenGL.
It takes approximately 3 seconds to display these rectangles on the screen.
However, it also takes the same amount of time when I try to zoom in/out or pan the screen left/right.
I am wondering if there is a way to save everything into memory/a buffer, so that the screen doesn't have to be redrawn over and over again.
I am also open to other solutions.
The following is my reshape function:
static void reshape_cb() {
    glViewport(0, 0, (GLint) screen_width, (GLint) screen_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, DESIGN_SIZE, 0.0, DESIGN_SIZE);
}
I am writing a program to display 5 million rectangles, rendered with OpenGL. It takes approximately 3 seconds to display these rectangles on the screen.
This sounds like you're sending drawing commands in a very inefficient manner. Modern GPUs are capable of rendering hundreds of millions of triangles per second. My guess would be that you're using immediate mode.
I am wondering if there is a way to save everything into memory/a buffer, so that the screen doesn't have to be redrawn over and over again.
Zooming usually means a change of point of view or rendering resolution, hence it will require a full redraw.
I am also open to other solutions. Thank you.
You should optimize your drawing code (see the sketch after this list). The keywords are:
Vertex Arrays
Vertex Buffer Objects
large drawing batches
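For example, putting all rectangles into one vertex buffer object and drawing them with a single call could look roughly like this (a sketch; the Rect struct and the rects container stand in for however you actually store your rectangles):
#include <vector>

struct Rect { GLfloat x0, y0, x1, y1; };
std::vector<Rect>    rects;   /* your 5 million rectangles */
std::vector<GLfloat> verts;   /* 4 corners * 2 floats per rectangle */

for (const Rect &r : rects)
    verts.insert(verts.end(), { r.x0, r.y0,  r.x1, r.y0,
                                r.x1, r.y1,  r.x0, r.y1 });

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat),
             verts.data(), GL_STATIC_DRAW);          /* upload once, reuse every frame */

/* every frame: one draw call instead of millions of glRectf calls */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, (const void*)0);     /* fetch from the bound VBO */
glDrawArrays(GL_QUADS, 0, (GLsizei)(verts.size() / 2));
glDisableClientState(GL_VERTEX_ARRAY);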
I agree that drawing this scene shouldn't take 3 seconds.
However, to answer the question: Yes, you can do that.
You'd render to an offscreen framebuffer (FBO), which you could even do on another thread with a separate shared context so it doesn't block the GUI. Then the GUI would draw using the most recently rendered FBO (you'd double-buffer these so one can be drawn into while you use the other for display). You could then pan and zoom around the rendered FBO at a full interactive framerate. Of course you couldn't pan further up/down/left/right than you rendered, and if you zoom in too much (more than 1.5x or 2x) things will get blurry. But it can be done. Also, as noted in the other answer, your viewpoint, geometry and shading can't change; it will be just like moving around in a fixed photo.
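The display side of that idea could look roughly like this (a sketch; cachedTex is assumed to be the texture the offscreen pass rendered into, panX/panY/zoom are whatever your input handling produces, and DESIGN_SIZE is the constant from your reshape function):
float panX = 0.0f, panY = 0.0f, zoom = 1.0f;   /* updated from mouse/keyboard */

void drawCachedView(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(panX, panY, 0.0f);
    glScalef(zoom, zoom, 1.0f);                /* zooming too far just magnifies pixels */

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cachedTex);   /* texture produced by the offscreen pass */
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(DESIGN_SIZE, 0);
        glTexCoord2f(1, 1); glVertex2f(DESIGN_SIZE, DESIGN_SIZE);
        glTexCoord2f(0, 1); glVertex2f(0, DESIGN_SIZE);
    glEnd();
}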

opengl don't call glClear() in render

I draw GLUT primitives each render, more and more of them. To make things faster, I decided not to clear every time and just add new primitives. Is that just wrong?
When I did this, I got blinking. Adding a sleep() showed that one frame is OK and the next is empty, and so on.
EDIT:
Brief code in render (display) that is executed once (I use Java's JOGL):
gl.glPushMatrix();
gl.glColor3f(1, 1, 0);
gl.glTranslatef(0, 0, 0);
glut.glutSolidCube(10);
gl.glPopMatrix();
drawable.swapBuffers();
Sure it is empty. When you clear, you clear the front buffer frame. Then, when swapBuffers() is called, the back buffer frame becomes the front one, and in the meanwhile your stuff is being drawn to the front buffer frame, which has just become the back buffer frame. Then, when the back buffer frame is finished, the buffer swap is done (triggered by the call to swapBuffers()). That is how double buffering works. If you don't clear the frame color, your drawings will accumulate in the front buffer over time, which I am not sure is the desired result.
Clearing the buffer once at the beginning of every render loop is not a big performance hit. The problem appears when you call glClear() frequently, for example before drawing each object, which also doesn't make sense, since in that case you would see only the last drawn object.
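In other words, a typical frame looks like this (plain C-style OpenGL here; drawScene and swapBuffers stand in for your own drawing code and your toolkit's swap call):
void render(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* once, at the top of the frame */
    drawScene();                                         /* draw every object of this frame */
    swapBuffers();                                       /* present the finished back buffer */
}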
As for the flickering, you should describe in more detail how you do it all; from your example it is unclear why it happens.
gl.glDisable(GL_DEPTH_TEST);
?
It's hard to say without seeing more of your code.
Whenever I get unexpected results in OpenGL code, I mentally go through the list of state possibilities and set each of them either enabled or disabled:
Depth Test
Texturing
Lighting
Blending
Culling
Framebuffers
Shaders
SwapBuffers(HDC) doesn't actually copy the contents of the buffer but merely swaps the front and back buffers; that is why you see every odd frame but not the even ones.

OpenGL what do I have to do before drawing a triangle?

Most of the tutorials, guides and books that I've found out there related to OpenGL explain how to draw a triangle and initialize OpenGL. That's fine. But when they try to explain it, they just list a bunch of functions and parameters like:
glClear()
glClearColor()
glBegin()
glEnd()
...
Since I'm not very good at learning things by memory, I always need an answer to "why are we doing this?", so that I write that bunch of functions because I remember that I have to set certain things up before doing something else, and not just because the tutorial told me so.
Could someone please explain to me what I have to define in OpenGL (only pure OpenGL; I'm using SFML as the windowing library, but that really doesn't matter) before starting to draw something with glBegin() and glEnd()?
Sample answer:
You first have to tell OpenGL what color it needs to clear the
screen with, because each frame needs to be cleared of the previous
one before we start to draw the current one...
First you should know that OpenGL is a state machine. That means that, apart from creating the OpenGL context (which is done by SFML), there's no such thing as initialization!
Since I'm not very good at learning things by memory,
This is good…
I always need an answer to "why are we doing this?"
This is excellent!
Could someone please explain to me what I have to define in OpenGL (only pure OpenGL; I'm using SFML as the windowing library, but that really doesn't matter) before starting to draw something with glBegin() and glEnd()?
As I already said: OpenGL is a state machine. That basically means there are two kinds of calls you can make: setting state and executing operations.
For example, glClearColor sets a state variable, the clear color, whose value is used to clear the active framebuffer's color when glClear is called with the GL_COLOR_BUFFER_BIT flag set. There is a similar function, glClearDepth, for the depth value (GL_DEPTH_BUFFER_BIT flag to glClear).
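In practice that looks like this: the first two calls only store values in the context, and only the last one actually does something with them:
glClearColor(0.2f, 0.3f, 0.4f, 1.0f);               /* set state: the clear color */
glClearDepth(1.0);                                    /* set state: the clear depth */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   /* execute: clear using that state */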
glBegin and glEnd belong to OpenGL's immediate mode, which has been deprecated, so there's little reason to learn them. You should use vertex arrays instead, preferably through Vertex Buffer Objects.
But here it goes: glBegin puts OpenGL into a state in which it expects geometry, of the kind of primitive selected as the parameter to glBegin. GL_TRIANGLES, for example, means that OpenGL will now interpret every 3 calls to glVertex as forming a triangle. glEnd tells OpenGL that you've finished that batch of triangles. Within a glBegin…glEnd block certain state changes are disallowed, among them everything that has to do with transforming the geometry and generating the picture, such as matrices, shaders, textures, and some others.
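To make that concrete (deprecated immediate mode, shown only to illustrate the begin/end state):
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);     /* per-vertex attributes may be set inside the block */
    glVertex3f(-1.0f, 0.0f, 0.0f);
    glVertex3f( 1.0f, 0.0f, 0.0f);
    glVertex3f( 0.0f, 1.0f, 0.0f);   /* every 3 glVertex calls form one triangle */
glEnd();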
One common misconception is that OpenGL is initialized. This is due to badly written tutorials that have an initGL function or similar. It's good practice to set all state from scratch when beginning to render a scene. But since a single frame may contain several scenes (think of a HUD or split-screen gaming), this happens several times per frame.
Update:
So how do you draw a triangle? Well, it's simple enough. First you need the geometry data. For example this:
GLfloat triangle[] = {
    -1, 0, 0,
    +1, 0, 0,
     0, 1, 0
};
In the render function we tell OpenGL that the next calls to glDrawArrays or glDrawElements shall fetch the data from there (for the sake of simplicity I'll use OpenGL-2 functions here):
glVertexPointer(3,        /* there are three scalars per vertex element */
                GL_FLOAT, /* element scalars are float */
                0,        /* elements are tightly packed (could as well be sizeof(GLfloat)*3) */
                triangle  /* and there you find the data */ );
/* Note that glVertexPointer does not make a copy of the data!
   If using a VBO, the data is copied when calling glBufferData. */
/* this switches OpenGL into a state where it will
   actually access data at the place we pointed it
   to with glVertexPointer */
glEnableClientState(GL_VERTEX_ARRAY);
/* glDrawArrays takes data from the supplied arrays and draws it
   as if it were submitted sequentially in a for loop to immediate
   mode functions. Has some valid applications. Better use index
   based drawing for models with a lot of shared vertices. */
glDrawArrays(GL_TRIANGLES, /* draw triangles */
             0,            /* start at index 0 */
             3             /* process 3 elements (of 3 scalars each) */ );
What I didn't include yet is setting up the transformation and viewport mapping.
The viewport defines how the readily projected and normalized geometry is placed in the window. This state is set using glViewport(pos_left, pos_bottom, width, height).
Transformation today happens in a vertex shader. Essentially, a vertex shader is a small program written in a special language (GLSL) that takes the vertex attributes and calculates the clip-space position of the resulting vertex. The usual approach for this is emulating the fixed-function pipeline, which is a two-stage process: first transform the geometry into view space (some calculations, like illumination, are easier in this space), then project it into clip space, which is kind of the lens of the renderer. In the fixed-function pipeline there are two transformation matrices for this: modelview and projection. You set them to whatever is required for the desired outcome. In the case of just a triangle, we leave the modelview at identity and use an ortho projection from -1 to 1 in either dimension.
glMatrixMode(GL_PROJECTION);
/* the following function multiplies onto what's already on the stack,
so reset it to identity */
glLoadIdentity();
/* our clip volume is defined by 6 orthogonal planes with normals X,Y,Z
   and distance 1 from the origin in each direction */
glOrtho(-1, 1, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
/* now an identity matrix is loaded onto the modelview */
glLoadIdentity();
Having set up the transformation we can now draw the triangle as outlined above:
draw_triangle();
Finally we need to tell OpenGL we're done with sending commands and it should finish its rendering.
if(singlebuffered)
glFinish();
However, most of the time your window is double-buffered, so you need to swap the buffers to make things visible. Since swapping makes no sense without finishing, the swap implies a finish:
else
SwapBuffers();
You're using the API to set and change the OpenGL state machine.
You're not actually programming the GPU directly; you're using an intermediary between your application and your GPU to do whatever you're trying to do.
The reason it is like this, and doesn't work the same way as a CPU and memory, is that OpenGL was intended to be OS- and hardware-independent, so that your code can run on any OS and any hardware, not just the one you're programming on.
Because of this, you need to learn to use their API, which makes sure that whatever you're trying to do will run on all systems/OSes/hardware within a reasonable range.
For example, if you were to create your application on Windows 8.1 with a certain graphics card (say AMD's), you would still want your application to be able to run on Android/iOS/Linux/other Windows systems/other hardware (GPUs) such as Nvidia's.
That's why Khronos, when they created the API, made it as system/hardware-independent as possible, so that it can run on everything and be a standard for everyone.
This is the price we have to pay for it: we have to learn their API instead of learning how to write directly to GPU memory and directly utilize the GPU to process information/data.
Although with the introduction of Vulkan (also from Khronos), things might be different when it is released, and we will find out how it works.