How can I implement a moving scene? (C++)

I would like to implement a moving scene in OpenGL.
Scene description: the terrain is static, but all other objects move towards the -x axis.
The terrain is a plane in the xz plane.
I have a mesh that will appear many times on the terrain, in several places.
All of these instances move towards the -x axis at a specific speed.
I've thought of these possible implementations:
1. Create one mesh only and display it several times (I prefer this one).
2. Create several meshes, save them in a vector, and move them. After they leave the viewport, maybe destroy them?
The problem with the 1st way is that meshes are created with some probability x%, so I don't know in advance how many will be needed. So how can I display them?
For example, if I knew I would create 3 meshes, I would do this:
glPushMatrix();
glTranslatef(mesh1Pos.x, mesh1Pos.y, mesh1Pos.z); // mesh1Pos.x -= speed * dt each frame
mesh.draw();
glPopMatrix();

glPushMatrix();
glTranslatef(mesh2Pos.x, mesh2Pos.y, mesh2Pos.z);
mesh.draw();
glPopMatrix();

glPushMatrix();
glTranslatef(mesh3Pos.x, mesh3Pos.y, mesh3Pos.z);
mesh.draw();
glPopMatrix();
Now, in case meshes need to be created for as long as the animation continues, how would I implement that? And secondly, what about the meshes that have left the viewport? Do they continue to exist?

This answer is useless if you intend to code this in pure OpenGL.
However, if you are willing to try a third-party library, try www.ogre3d.org; I would find this really easy to do in Ogre.
In fact, the challenges in 'Intermediate Tutorial One', if I remember correctly, should address the Ogre equivalent of the problem you are having with OpenGL.
http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Intermediate+Tutorial+1&structure=Tutorials
(Would have made this a comment, but I only recently became active!)

Use option 2, but don't delete them; just move them back and use them again. For example, if I wanted to count sheep, I wouldn't create 1,000,000 sheep meshes. I would create maybe 1 or 2 and just rotate between using those.
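A minimal sketch of that recycling idea, assuming a Mesh class with a draw() method as in the question (the names and thresholds here are illustrative, not from the original posts):

#include <GL/gl.h>
#include <vector>

struct MeshInstance {
    float x, y, z;   // world position; only x changes in this scene
};

// Move every instance toward -x; once an instance passes the left
// edge, reuse it by respawning it on the right instead of deleting it.
void updateInstances(std::vector<MeshInstance>& instances,
                     float speed, float dt,
                     float leftEdge, float spawnX)
{
    for (MeshInstance& inst : instances) {
        inst.x -= speed * dt;
        if (inst.x < leftEdge)
            inst.x = spawnX;
    }
}

// One shared mesh drawn once per instance (option 1 from the question).
template <typename Mesh>
void drawInstances(const std::vector<MeshInstance>& instances,
                   const Mesh& mesh)
{
    for (const MeshInstance& inst : instances) {
        glPushMatrix();
        glTranslatef(inst.x, inst.y, inst.z);
        mesh.draw();
        glPopMatrix();
    }
}

This also answers the second question: nothing off screen needs to be destroyed, because every instance is eventually recycled.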

Related

Change the position of a sphere without recreating it

I am writing code in OpenGL, in C++ on Linux, where I need to draw several spheres in 3D space. To draw them I use glutSolidSphere(GLdouble radius, GLint slices, GLint stacks); in the draw function glutSolidSphere is called many times, and afterwards each sphere is translated to the right position.
But I have noticed that when the program draws several spheres there is a framerate problem, so I was wondering whether there is a method that would allow me to "store" the model of the sphere without recreating it every time, and just change its position.
I am not an OpenGL expert; sorry for any English mistakes.
The answer by @datenwolf in this topic is perfect. He said:
In OpenGL you don't create objects, you just draw them. Once they are drawn, OpenGL no longer cares about what geometry you sent it.
glutSolidSphere is just sending drawing commands to OpenGL.
So if you have performance issues, you will need to look for other ways to improve performance, like multithreading, or maybe implementing your own sphere-drawing function (search Google or Stack Overflow).
I have heard that glutSolidSphere calls glBegin(…)/glEnd(), and these can be slow, so maybe draw all your spheres in your own loop so that you only have one glBegin(…)/glEnd() pair.
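If you're on the legacy fixed-function pipeline anyway, a display list is the classic way to "store" the sphere: the tessellation work happens once at compile time and is replayed cheaply afterwards. A minimal sketch (the SpherePos struct and the radius/slices/stacks values are illustrative):

#include <GL/glut.h>
#include <vector>

struct SpherePos { float x, y, z; };

GLuint sphereList = 0;

// Compile the sphere geometry once; glutSolidSphere's glBegin/glEnd
// work happens here, at init time, not every frame.
void initSphereList()
{
    sphereList = glGenLists(1);
    glNewList(sphereList, GL_COMPILE);
    glutSolidSphere(1.0, 24, 24);   // radius, slices, stacks
    glEndList();
}

// Replay the precompiled sphere at each position.
void drawSpheres(const std::vector<SpherePos>& positions)
{
    for (const SpherePos& p : positions) {
        glPushMatrix();
        glTranslatef(p.x, p.y, p.z);
        glCallList(sphereList);
        glPopMatrix();
    }
}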

OpenGL, when should I draw and when should I not

Imagine I have my camera and two squares in a 3D OpenGL context (using perspective), as follows:
(ASCII diagram, seen from the top: the camera's view frustum opens toward the squares, with Square 1 in front and Square 2 behind it.)
So what I will do is draw both using glBegin() and glEnd() and let the Z-buffer do its job. So far so good.
Now imagine I want to draw 1 million of those squares; some will be behind others, of course. What will be faster: running the process above for all of them, or doing some math and discarding the ones I DON'T need to draw? Example:
if (should_I_Draw_It)
{
    glBegin(GL_QUADS);
    /* draw the square's vertices */
    glEnd();
}
EDIT:
It's a dynamic scene, objects may be created, destroyed, moved and/or modified.
What you want to do is called occlusion culling. Simple algorithms are very inefficient on the CPU and should be used only when there are big objects in the foreground and small objects in the background.
Nvidia describes an efficient way of doing occlusion culling in GPU Gems, Chapter 29. You can try that to improve the efficiency of your rendering.
Occlusion culling for this many dynamic objects would almost always be slower than just drawing everything. Your best bet in a dynamic scene may be very simple view-frustum culling. The issue is that since you are only drawing boxes of 6 quads each, it is probably still faster to draw them all than to spend time deciding whether you should.
Regardless, the simplest test you can do is check whether the box is directly behind or perpendicular to the camera relative to the direction you are looking, and far enough away (bounding radius) from the camera not to intersect it.
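A minimal sketch of that behind-the-camera test (the Vec3 struct and function names are illustrative; viewDir is assumed to be normalized):

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the object's bounding sphere lies entirely behind the
// camera plane, i.e. it cannot appear on screen. viewDir must be
// normalized and point in the direction the camera is looking.
bool isBehindCamera(const Vec3& camPos, const Vec3& viewDir,
                    const Vec3& center, float radius)
{
    Vec3 toObj = { center.x - camPos.x,
                   center.y - camPos.y,
                   center.z - camPos.z };
    float along = dot(toObj, viewDir);   // signed distance along view axis
    return along < -radius;
}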
Modern graphics drivers will automatically optimize what they do and do not draw to some degree, so you can rely on that to help a bit.

Stuck building a game engine

I'm trying to build a (simple) game engine using C++, SDL and OpenGL, but I can't seem to figure out the next step. This is what I have so far:
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure whether my problem relates to how an actor's position should be stored or to how the rendering queue should be created.
I have read that it is efficient to build a rendering queue that draws opaque polygons front to back and then draws transparent polygons back to front. Because of this, my actors call the scene renderer's "queueTriangle" method. The scene renderer stores a pointer to each of the actors' triangles, sorts them by position, and then renders them.
The problem I am facing is that for this to happen each triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know those coordinates!
Could someone please, please, please offer me a solution, or perhaps link me to a (simple) guide on how to solve this?
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix you get from the internal quaternion to transform the vertices, giving you the position in camera space so you can sort triangles from back to front.
A 'queueTriangle' call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you'd normally hardly ever work at the level of a single triangle. And if you are also changing textures a lot to achieve this ordering, that is even worse.
I'd recommend a simpler approach: order your opaque polygons much less rigorously by sorting actor positions in world space rather than individual triangles, and render the actors front to back, one actor at a time. Your transparent/translucent polygons still require the back-to-front approach (provided you're not using premultiplied alpha), but everything else should be simpler and faster.
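A minimal sketch of that per-actor sorting, assuming each actor exposes a world-space position (the Actor and Camera structs here are illustrative):

#include <algorithm>
#include <vector>

struct Camera { float x, y, z; };
struct Actor  { float x, y, z; /* mesh, material, ... */ };

static float sqDist(const Actor& a, const Camera& c)
{
    float dx = a.x - c.x, dy = a.y - c.y, dz = a.z - c.z;
    return dx * dx + dy * dy + dz * dz;
}

void sortForRendering(std::vector<Actor*>& opaque,
                      std::vector<Actor*>& transparent,
                      const Camera& cam)
{
    // Opaque actors front to back: the depth buffer then rejects
    // most hidden fragments early.
    std::sort(opaque.begin(), opaque.end(),
              [&](const Actor* a, const Actor* b)
              { return sqDist(*a, cam) < sqDist(*b, cam); });

    // Transparent actors back to front so blending composes correctly.
    std::sort(transparent.begin(), transparent.end(),
              [&](const Actor* a, const Actor* b)
              { return sqDist(*a, cam) > sqDist(*b, cam); });
}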

Ways to implement manipulation handles in 3d view

I'm building a simple solid modeling application. Users need to be able to manipulate objects in both orthogonal and perspective views. For example, when there's a box on the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center, so that the user can move the mouse over such a handle and drag it to enlarge or move the box.
What strategies are there to do this, and which one is the best one? I can think of two obvious ones:
1) Treat the handles as 3D objects, i.e. for a box, add small boxes to the scene at the corners of the 'main' box. Problems: this won't work in perspective view, and I'd need to determine the size of the boxes relative to the current zoom level (the handles need to have the same size no matter how far the user has zoomed in or out).
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2D locations of the corners somehow, and use regular 2D drawing techniques to draw the handles. Problems: how will I do hit-testing? I'd need a two-stage hit-testing approach as well; and how do I draw in 2D on a 3D-rendered image? Fall back to GDI?
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.
I would treat the handles as 3D objects. This provides many advantages: it's more consistent, they behave well, hit testing is easy, etc.
If you want the handles to be a constant size, you can still treat them as 3D objects, but you will have to scale their size based on the distance to the camera. This is a bit of a hassle, but since there are typically only a few handles, and they are usually small objects, it should be fine performance-wise.
However, I'd actually say let the handles scale with the scene. As long as you pick a rendering style that makes them stand out (i.e. bright orange boxes, etc.), the perspective effects (smaller handles in the background) actually make working with them easier for the end user in many ways. It is difficult to get a sense of depth from a 3D scene, and the perspective effects on the handles provide extra visual clues as to how "deep" the handle is in the screen.
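If you do want constant-size handles, the usual trick is the one mentioned above: scale each handle by its distance to the camera so its projected size stays roughly fixed. A sketch (the function name and the pixelFactor constant are illustrative tuning choices, not from the original answer):

#include <cmath>

// Scale factor that keeps a handle at a roughly constant on-screen
// size: the farther the handle, the larger its world-space scale.
// pixelFactor is an arbitrary tuning constant.
float handleScale(float camX, float camY, float camZ,
                  float hx, float hy, float hz,
                  float pixelFactor = 0.05f)
{
    float dx = hx - camX, dy = hy - camY, dz = hz - camZ;
    float distance = std::sqrt(dx * dx + dy * dy + dz * dz);
    return distance * pixelFactor;
}

// Typical use while drawing one handle:
//   glPushMatrix();
//   glTranslatef(hx, hy, hz);
//   float s = handleScale(camX, camY, camZ, hx, hy, hz);
//   glScalef(s, s, s);
//   drawHandleGeometry();   // hypothetical
//   glPopMatrix();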
First off, project the handle/corner coordinates onto the camera's plane (effectively converting them to 2D coordinates on the screen); normalize these against the screen dimensions.
Here's some simple code to enable orthogonal/2D-overlay drawing:
void enable2D()
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();                           // save the 3D projection matrix
    glLoadIdentity();
    int wind[4];
    glGetIntegerv(GL_VIEWPORT, wind);         // wind[2], wind[3] = viewport width, height
    glOrtho(0, wind[2], 0, wind[3], -1, 1);   // pixel-aligned orthographic projection
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();                           // save the 3D modelview matrix
    glLoadIdentity();
}

void disable2D()
{
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                            // restore the 3D projection matrix
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();                            // restore the 3D modelview matrix
}
enable2D() caches the current modelview/projection matrices, replaces the projection matrix with one normalized to the screen (i.e. the width/height of the viewport), and restores the identity matrix for modelview.
After making this call, you can make glVertex2f() calls using screen/pixel coordinates, allowing you to draw in 2D! (This will also allow you to hit-test, since you can easily get the mouse's current pixel coordinates.)
When you're done, call disable2D() to restore your old modelview/projection matrices :)
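For example, a hypothetical handle drawn and hit-tested in pixel space with these helpers might look like this (the 8x8 size and the function names are made up for illustration; note that the glOrtho above puts the origin at the bottom-left of the viewport):

// Draw an 8x8-pixel handle centered at screen position (sx, sy).
void drawHandle2D(float sx, float sy)
{
    enable2D();
    glBegin(GL_QUADS);
    glVertex2f(sx - 4, sy - 4);
    glVertex2f(sx + 4, sy - 4);
    glVertex2f(sx + 4, sy + 4);
    glVertex2f(sx - 4, sy + 4);
    glEnd();
    disable2D();
}

// Hit-test the same 8x8 rectangle against the mouse position.
bool hitTestHandle2D(float sx, float sy, float mouseX, float mouseY)
{
    return mouseX >= sx - 4 && mouseX <= sx + 4 &&
           mouseY >= sy - 4 && mouseY <= sy + 4;
}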
The hardest part is computing where the hitboxes fall on the 2D plane and dealing with overlap (if two handles project to the same place, which one should a click select?).
Hope this helped :)
I've coded up a manipulator with handles for a 3d editing package, and ran into a lot of these same issues.
First, there's an open source manipulator. I couldn't find it in my most recent search, probably because there's a plethora of names for these things - 3d widgets, gizmos, manipulators, gimbals, etc.
Anyhow, the way I did it was to add a manipulator object to the scene that, when drawn, draws all of the handles. It does the same thing for bounding box computation, and selection.
Reed's idea about keeping them the same size is interesting for handles that live on objects, and might work there. For a manipulator, I found that it is more of a 3D UI element, and it was much more usable when it did not change size. I had a bug where the size was determined based only on the active viewport, which resulted in horribly huge/tiny manipulators in the other viewports, very useless. If you're going to add them to the scene, you might want to add them per viewport, or actually give them a fixed size.
I know the question is really old. But just in case someone needs it:
Interactive Techniques in Three-dimensional Scenes (Part 1): Moving 3D Objects with the Mouse using OpenGL 2.1
Article is good and has an interesting link section at the bottom.

In openGL, how can you get items to draw back to front?

By default it seems that objects are drawn front to back. I am drawing a 2D UI object and would like to create it back to front. For example, I could create a white square first and then a slightly smaller black square on top of it, thus creating a black pane with a white border. This post had some discussion of it and described this order as the "Painter's Algorithm", but ultimately the example given simply rendered the objects in reverse order to get the desired effect. I figure back-to-front rendering (first objects go in back, subsequent objects get drawn on top) can be achieved via some transformation (glOrtho?).
I will also mention that I am not interested in a solution using a wrapper library such as GLUT.
I have also found that the default behavior on the Mac, using Cocoa's NSOpenGLView, appears to be back-to-front drawing, whereas on Windows I cannot get this behavior. The setup code I am using on Windows is this:
glViewport (0, 0, wd, ht);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho (0.0f, wd, ht, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
The following call makes the depth test always pass, ignoring depth values (Z), so objects are drawn in the order created. This in effect causes objects to draw back to front:
glDepthFunc(GL_ALWAYS); // ignore depth values (Z) to cause drawing back to front
Simpler still, be sure you do not call this at all:
glEnable (GL_DEPTH_TEST); // Enables Depth Testing
For your specific question: no, there is no standardized way to specify depth ordering in OpenGL. Some implementations may do front-to-back depth ordering by default because it's usually faster, but that is not guaranteed (as you discovered).
But I don't really see how it would help you in your scenario. If you draw a black square in front of a white square, the black square should be drawn in front regardless of the order they're drawn in, as long as you have depth buffering enabled. If they're actually coplanar, then neither one is really in front of the other, and any depth-sorting algorithm would be unpredictable.
The tutorial you linked to only talks about it because depth sorting IS relevant when you're using transparency, but it doesn't sound like that's what you're after.
If you really have to do it that way, you have to do it yourself: first send your white square down the rendering pipeline, force the render, and then send your black square. If you do it that way, with depth buffering disabled, the squares can be coplanar and you are still guaranteed that the black square is drawn over the white one.
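Concretely, with depth testing disabled, the call order alone decides what ends up on top. A sketch of the white-border/black-pane case from the question (the coordinates are arbitrary):

glDisable(GL_DEPTH_TEST);   // call order alone now decides visibility

// White square first (becomes the border)...
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glVertex2f(10.0f, 10.0f);
glVertex2f(110.0f, 10.0f);
glVertex2f(110.0f, 110.0f);
glVertex2f(10.0f, 110.0f);
glEnd();

// ...then the smaller black square drawn over it (the pane).
glColor3f(0.0f, 0.0f, 0.0f);
glBegin(GL_QUADS);
glVertex2f(15.0f, 15.0f);
glVertex2f(105.0f, 15.0f);
glVertex2f(105.0f, 105.0f);
glVertex2f(15.0f, 105.0f);
glEnd();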
Drawing order is hard. There is no easy solution. The painter's algorithm (sort objects by their distance from the camera) is the most straightforward, but as you have discovered, it doesn't solve all cases.
I would suggest a combination of the painter's algorithm and layers. Build layers for the specific elements of your program: a background layer, object layers, special-effect layers, and a GUI layer.
Use the painter's algorithm on each layer's items. In some special layers (like your GUI layer), don't sort with the painter's algorithm but by call order: you call that white square first, so it gets drawn first.
Draw items that you want in back slightly behind the items that you want in front; that is, actually change the z value (assuming z is perpendicular to the screen plane). You don't have to change it much to make items draw in front of each other, and if you only change z slightly, you shouldn't notice much offset from the desired position. You could even get really fancy and calculate the correct x,y position for the changed z, so that the item appears exactly where it is supposed to be.
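A sketch of that z-nudging idea, assuming an ortho projection like the glOrtho(0, wd, ht, 0, -1, 1) setup above, where the camera looks down -z so a larger z is closer to the viewer (the offsets and coordinates are arbitrary):

glEnable(GL_DEPTH_TEST);   // depth testing stays on for this approach

// Background item, left at z = 0...
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glVertex3f(10.0f, 10.0f, 0.0f);
glVertex3f(110.0f, 10.0f, 0.0f);
glVertex3f(110.0f, 110.0f, 0.0f);
glVertex3f(10.0f, 110.0f, 0.0f);
glEnd();

// ...foreground item, nudged slightly toward the viewer.
glColor3f(0.0f, 0.0f, 0.0f);
glBegin(GL_QUADS);
glVertex3f(15.0f, 15.0f, 0.1f);
glVertex3f(105.0f, 15.0f, 0.1f);
glVertex3f(105.0f, 105.0f, 0.1f);
glVertex3f(15.0f, 105.0f, 0.1f);
glEnd();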
Your geometry will be drawn in the exact order you issue the glBegin/glEnd calls. You can get depth buffering using the z-buffer, and if your 2D objects have different z values, you can get the effect you want that way. The only way you would be seeing the behavior you describe on the Mac is if the program draws things in back-to-front order manually or uses the z-buffer to accomplish it; OpenGL otherwise has no such automatic functionality.
As AlanKley pointed out, the way to do this is to disable the depth buffer. The painter's algorithm is really a 2D scan-conversion technique used to render polygons in the correct order when you don't have something like a z-buffer, and you wouldn't apply it to 3D polygons directly. You'd typically transform and project them (handling intersections with other polygons), sort the resulting list of 2D projected polygons by their projected z-coordinate, and then draw them in reverse z-order.
I've always thought of the painter's algorithm as an alternative technique for hidden-surface removal for when you can't (or don't want to) use a z-buffer.