I'm just starting OpenGL programming in Win32 C++, so don't be too hard on me :) I've been working through the NeHe tutorials and 'the red book' a bit now, but I'm confused. So far I've been able to set up an OpenGL window, draw some triangles etc., no problem. But now I want to build a model and view it from different angles. So do we:
1. Load a model into memory (saving the triangle/quad coordinates in structs on the heap), and on each scene render draw everything we have to the screen using glVertex3f and so on.
2. Load/draw the model once using glVertex3f etc., and just change the viewing position for each scene.
3. Other...?
It seems to me option 1 is the most plausible from all I've read so far, but it also seems a bit, ehh... dumb! Do we have to decide ourselves which objects are visible and draw only those? Isn't that very slow? Option 2 might seem more attractive :)
EDIT: Thanks for all the help. I've decided to: read my model from file, load it into GPU memory using glBufferData, and then feed that data to the render function using glVertexPointer and glDrawArrays.
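In code, that plan looks roughly like the sketch below: glBufferData once at load time, then glVertexPointer/glDrawArrays every frame. This assumes an extension loader such as GLEW is already initialized; vertices and vertexCount are hypothetical names for the data read from the model file.

    // Upload once, after reading the model from file.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 vertexCount * 3 * sizeof(GLfloat), // size in bytes
                 vertices,                          // flat array of x,y,z floats
                 GL_STATIC_DRAW);                   // hint: data won't change

    // Then, in the render function, every frame:
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);       // read from the bound VBO, offset 0
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);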
First you need to understand that OpenGL doesn't actually have a notion of a "model"; all OpenGL sees is a stream of vertices coming in, and depending on the current mode it uses that stream of vertices to draw triangles to the screen.
Every frame's drawing iteration follows an outline something like this:
    clear all buffers
    for each window element (main scene, HUD, minimap, etc.):
        set scissor and viewport
        conditionally clear depth and/or stencil
        set projection matrix
        set modelview matrix for initial view
        for each model:
            apply model transformation onto matrix stack
            bind model data (textures, vertices, etc.)
            issue model drawing commands
    swap buffers
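In fixed-function C++ terms, one iteration of that outline might look like the following sketch; drawModel(), models and the camera values are placeholders, and SwapBuffers is the Win32 call.

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    glViewport(0, 0, width, height);              // one "window element"
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, width / (double)height, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);             // initial view

    for (const Model& m : models) {               // for each model
        glPushMatrix();
        glTranslatef(m.x, m.y, m.z);              // model transformation
        drawModel(m);                             // bind data, issue draw commands
        glPopMatrix();
    }

    SwapBuffers(hDC);                             // Win32 buffer swap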
OpenGL does not remember any of what happens up there. There was (and, in the compatibility profile, still is) a facility called display lists, but they cannot store every kind of command, and they have been deprecated and removed from recent versions of OpenGL. The immediate-mode commands glBegin, glEnd, glVertex, glNormal and glTexCoord have been removed as well.
So the idea is to upload some data (textures, vertex arrays, etc.) into OpenGL buffer objects. However, only textures are directly understood by OpenGL for what they are (images). All other kinds of buffers require you to tell OpenGL how to deal with them. This is done by calls to gl{Vertex,Color,TexCoord,Normal,Attrib}Pointer, which set the data access parameters, and glDraw{Arrays,Elements}, which triggers OpenGL to fetch a stream of vertices to be fed to the rasterizer.
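As an illustration of that pointer/draw pairing, here is a minimal sketch of the indexed variant, glDrawElements; vbo, ibo and indexCount are assumed to have been created and filled earlier.

    glBindBuffer(GL_ARRAY_BUFFER, vbo);           // where the vertices live
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);     // access parameters: 3 floats, tightly packed

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);   // where the indices live
    glDrawElements(GL_TRIANGLES, indexCount,      // trigger the vertex stream
                   GL_UNSIGNED_INT, nullptr);

    glDisableClientState(GL_VERTEX_ARRAY);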
You should upload the data to the GPU memory once, and then draw each frame using as few commands as possible.
Previously, this was done using display lists. Nowadays, it's all about vertex buffer objects (a.k.a. VBOs), so look into those.
Here's a tutorial about VBOs, written back when they were only an extension and not yet a core part of OpenGL.
I've been learning a bit of OpenGL lately, and I just got to the Framebuffers.
So by my current understanding, if you have a framebuffer of your own and you want to draw its color buffer onto the window, you first need to draw a quad and then wrap the texture over it? Is that right? Or is there something like a glDrawArrays()/glDrawElements() equivalent for framebuffers?
It seems a bit... odd (clunky? hackish?) to me that you have to wrap a texture over a quad in order to draw the framebuffer. You don't have to do this with the default framebuffer. Or is that done behind your back?
Well, the main point of framebuffer objects is to render scenes to buffers that will not get displayed but rather reused somewhere else, as a source of data for some other operation (shadow maps, high-dynamic-range processing, reflections, portals...).
If you want to display it, why do you use a custom framebuffer in the first place?
Now, as @CoffeeandCode comments, there is indeed a glBlitFramebuffer call that allows transferring pixels from one framebuffer to another. But before you go ahead and use that call, ask yourself why you need that extra step. It's not a free operation...
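For completeness, the call looks something like this; fbo and the dimensions are assumptions:

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);       // source: your FBO
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);         // destination: default framebuffer
    glBlitFramebuffer(0, 0, fboWidth, fboHeight,       // source rectangle
                      0, 0, windowWidth, windowHeight, // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);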
I want to rotate an object in OpenGL without drawing again, to save time.
In the init method I want to draw the picture, and then only rotate it according to mouse events.
Here is the full method:
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT); // clear colour and depth
    gl.LoadIdentity();                            // reset the modelview matrix
    gl.Rotate(camera_angle_v, 1.0f, 0.0f, 0.0f);  // rotate about the x axis
    gl.Begin(OpenGL.GL_POINTS);
    // Draw
    gl.End();
OpenGL doesn't work this way fundamentally. The frame you're rendering in is essentially a 2d array of pixels. When you draw an image, it changes the values of some of those pixels to create the image for you. Once something's been drawn, it will stay there until you clear it. OpenGL doesn't keep track of what you rendered in the past (except for the pixels it fills in the frame), so it can't do any transformations on anything but the triangle/line it's currently rendering.
At the beginning of your draw method, you clear the frame (reset all the pixels to the clear color). You have to redraw the object after that. It's how OpenGL works and it's very fast at it. On a modern GPU, you can draw millions of triangles each frame and still maintain 60fps. If you don't clear the frame at the beginning, the image will be drawn on top of the old frame and you'll get a hall-of-mirrors sort of effect.
If performance is an issue, consider learning more modern OpenGL. What you're using right now is immediate-mode OpenGL, which was part of the OpenGL 1.0 specification back in 1992. In 1997, OpenGL 1.1 introduced vertex arrays, which provide a significant speed boost for large numbers of vertices, since there's only one method call for all the vertices instead of one method call per vertex. And with each new version of OpenGL come more optimized ways of drawing.
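For example, the glBegin/glEnd point loop above could be replaced by a single vertex-array draw call. A sketch, shown in plain C-style GL rather than the SharpGL wrapper; points and pointCount are hypothetical names for data kept on your side:

    // "points" is an array of x,y,z triplets kept in CPU memory.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, points);   // one call describes all vertices
    glDrawArrays(GL_POINTS, 0, pointCount);    // one call draws them all
    glDisableClientState(GL_VERTEX_ARRAY);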
You have to draw the object again - that's how OpenGL works.
Each frame gets rendered from scratch based on the current scene geometry.
Well... if you're using immediate mode, I guess you can use display lists and call glRotate accordingly. Though, as Robert Rouhani said, the technique you're using is ancient (and what 99% of online tutorials teach you). Use a VAO (or even better, a VBO) to draw the object.
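If you do stay in immediate mode, the display-list variant would look roughly like this sketch (deprecated GL, shown for completeness; camera_angle_v is the value from the question):

    // Record once, at init time.
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_POINTS);
    // ... glVertex3f calls for every point ...
    glEnd();
    glEndList();

    // Per frame: clear, rotate, replay the recorded commands.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(camera_angle_v, 1.0f, 0.0f, 0.0f);
    glCallList(list);   // no per-vertex calls from your code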
Is it a good or bad idea to use display lists for drawing textured rectangles?
The display list would be re-compiled only if the texture the sprite is using changes.
For drawing a single sprite, there's no real problem in doing it that way, but there's no advantage to it either, assuming you're using glRotate/glTranslate to position the sprite. Plenty of games have been written that way.
In my games, I use a vertex buffer object with GL_DYNAMIC_DRAW to store all the sprites which share each texture atlas. I update the vert positions on the CPU and send the whole batch in one draw call. I can draw many more sprites using this approach. I could do the positions in a vertex shader if I needed to draw even more.
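A sketch of that per-frame batch update, under the assumption of one VBO per atlas and interleaved position/UV vertices computed on the CPU; spriteVbo, cpuVerts and friends are placeholder names:

    glBindBuffer(GL_ARRAY_BUFFER, spriteVbo);
    // Orphan the old storage so the driver doesn't have to stall, then
    // upload this frame's CPU-computed vertices.
    glBufferData(GL_ARRAY_BUFFER, batchBytes, nullptr, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, batchBytes, cpuVerts);

    glBindTexture(GL_TEXTURE_2D, atlasTexture);     // one atlas, one batch
    glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6); // 6 vertices per sprite quad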
Also, keep in mind that OpenGL ES2 doesn't support display lists, so if you're thinking of porting to an ES2 device you'd have to redo it. (iPhone/iPad support ES1, but you can't mix and match with ES2; you can use display lists or shaders, but not both.)
I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books, and all come to the same conclusion (as you may know): that using VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO for that?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client vertex arrays, mostly save GPU bandwidth. A tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice feature for saving on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, have a Google to see which older cards support texture arrays as an extension; I'm not familiar with the details.
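For reference, allocating such an array and filling it layer by layer is straightforward; the sizes and tilePixels below are assumptions:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 tileWidth, tileHeight, layerCount,   // depth = number of tile images
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    for (int layer = 0; layer < layerCount; ++layer)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layer,                  // write into one layer
                        tileWidth, tileHeight, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[layer]);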
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile has the following information:
grid position (x,y)
material (let's call it "material" rather than "texture", because, as you said, the image might be animated and change over time; the "material" would then be interpreted as "one texture, or a set of textures changing in time", or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
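A sketch of the first alternative, expanding each tile into four corner vertices on the CPU; TileVertex and appendTile are hypothetical names:

    #include <vector>

    struct TileVertex { float x, y; float material; };

    // Corner order matches the quad listed above:
    // (x,y), (x+w,y), (x+w,y+h), (x,y+h)
    void appendTile(std::vector<TileVertex>& out,
                    float x, float y, float w, float h, float material)
    {
        out.push_back({x,     y,     material});
        out.push_back({x + w, y,     material});
        out.push_back({x + w, y + h, material});
        out.push_back({x,     y + h, material});
    }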
For the material, you can pass it as a symbolic integer, and in your fragment shader you can decide, based on the current time (passed as a uniform variable) and the material ID of the given tile, which texture ID from the texture array to use. In this way you can make a simple texture animation.
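A sketch of what such a fragment shader could look like, stored as a GLSL string in the C++ source; the layout (four animation frames per material, cycling at 4 fps) is purely an assumption:

    const char* tileFragmentSrc = R"GLSL(
    #version 330 core
    uniform sampler2DArray tileTextures; // texture array, one layer per image
    uniform float currentTime;           // seconds, set once per frame
    flat in int materialId;              // per-tile material, passed through
    in vec2 texCoord;
    out vec4 fragColor;

    void main()
    {
        // Assumption: each material owns 4 consecutive layers, cycling at 4 fps.
        int frame = int(currentTime * 4.0) % 4;
        int layer = materialId * 4 + frame;
        fragColor = texture(tileTextures, vec3(texCoord, float(layer)));
    }
    )GLSL";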
Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones) then you can attach a texture as a render target within a framebuffer. So the rendering code is exactly what it would normally be, but it ends up writing the results to a texture that you can then use as a source for drawing.
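A minimal sketch of that linkage; width/height and the error handling are left as assumptions:

    GLuint fbo, tex;

    // The texture that will receive the rendering.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // The framebuffer object, with the texture as its colour attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }
    // Render as normal; results land in "tex", ready to be sampled.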
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software framebuffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can get shaders to point to an FBO. The link above gives an overview of the procedure.