Using glRotate and glTranslate with collision detection - c++

Say I use glRotate to rotate the current view based on some arbitrary user input (e.g., if the left key is pressed then rtri += 2.5f):
glRotatef(rtri,0.0f,1.0f,0.0f);
Then I draw the triangle in the rotated position:
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd(); // Finished Drawing The Triangle
How do I get the resulting transformed vertices for use in collision detection? Or will I have to apply the transform manually myself, thus doubling up the work?
The reason I ask is that I wouldn't mind implementing display lists.

The objects you use for collision detection are usually not the objects you use for display. They are usually simpler and faster.
So yes, the way to do it is to maintain the transformation you're using manually, but you wouldn't be doubling up the work, because the objects are different.
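For example, if you do want to reuse the triangle's own vertices for a quick test, you can keep a copy of them in your own structures and apply the same rotation on the CPU with the angle you already pass to glRotatef. A minimal sketch (the Vec3 struct and RotateY helper are assumptions for illustration, not part of your code):
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotates a point about the Y axis by the same angle (in degrees)
// that is handed to glRotatef(rtri, 0.0f, 1.0f, 0.0f).
Vec3 RotateY(const Vec3& v, float angleDegrees)
{
    const float rad = angleDegrees * 3.14159265f / 180.0f;
    const float c = std::cos(rad);
    const float s = std::sin(rad);
    Vec3 out;
    out.x =  c * v.x + s * v.z;   // x' =  x*cos + z*sin
    out.y =  v.y;                 // y is unchanged by a Y-axis rotation
    out.z = -s * v.x + c * v.z;   // z' = -x*sin + z*cos
    return out;
}
Feeding your three vertices (0, 1, 0), (-1, -1, 0) and (1, -1, 0) through RotateY(v, rtri) gives you the positions to test against, while the display list keeps drawing the untransformed vertices under glRotatef, so the two code paths stay independent.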

Your game loop should look something like this (in C++ syntax):
void Scene::Draw()
{
    this->setClearColor(0.0f, 0.0f, 0.0f);
    for (std::vector<GameObject*>::iterator it = this->begin(); it != this->end(); ++it)
    {
        GameObject* object = *it;   // the iterator dereferences to a GameObject*
        this->updateColliders(object);
        glPushMatrix();
        glRotatef(object->rotation.angle, object->rotation.x, object->rotation.y, object->rotation.z);
        glTranslatef(object->position.x, object->position.y, object->position.z);
        glScalef(object->scale.x, object->scale.y, object->scale.z);
        object->Draw();
        glPopMatrix();
    }
    this->runNextFrame(&Scene::Draw, Scene::MAX_FPS);
}
So, for instance, if I use a basic box collider with a cube, the draw method will:
Fill the screen with a black colour (RGB (0, 0, 0))
For each object
Compute the collisions with position and size information
Save the actual ModelView matrix state
Transform the ModelView matrix (rotate, translate, scale)
Draw the cube
Restore the ModelView matrix state
Check the FPS and run the next frame at the right time
** The Scene class inherits from std::vector<GameObject*> (hence this->begin() and this->end())
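For the "compute the collisions" step with a basic box collider, the test itself can be a simple axis-aligned overlap check. A sketch only; the BoxCollider fields are hypothetical and not taken from the code above:
#include <cmath>

struct BoxCollider
{
    float x, y, z;     // centre position
    float hx, hy, hz;  // half-extents along each axis
};

// Two axis-aligned boxes overlap only if they overlap on every axis.
bool Intersects(const BoxCollider& a, const BoxCollider& b)
{
    return std::fabs(a.x - b.x) <= (a.hx + b.hx) &&
           std::fabs(a.y - b.y) <= (a.hy + b.hy) &&
           std::fabs(a.z - b.z) <= (a.hz + b.hz);
}
The rotation part of the ModelView transform is ignored here, which is the usual trade-off with axis-aligned boxes.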
I hope it will help! :)

Related

OpenGL - Rotations on different axes independent of each other

First off, I'm sorry if I confuse anyone because I don't know how to phrase this, exactly.
Anyway, what I want to do is rotate on 3 axes, but independently of each other. If I have
glRotatef(getPitch(),1f,0,0);
glRotatef(getYaw(),0,1f,0);
glRotatef(getRoll(),0,0,1f);
Then it rotates my object on the x axis just fine, but the other two axes rotate offset by the x rotation. How do I rotate these all independently of each other? (On the same object)
Again, sorry if I confused anyone.
You could push and pop the matrix onto and off the stack, so you could do:
glPushMatrix();
glRotatef( getPitch(), 1.0f, 0.0f ,0.0f );
glPopMatrix();
glPushMatrix();
glRotatef( getYaw(), 0.0f, 1.0f, 0.0f);
glPopMatrix();
glPushMatrix();
glRotatef( getRoll(), 0.0f, 0.0f, 1.0f);
glPopMatrix();
So basically, pushing the matrix saves the transformation matrix in its current state. You apply the transformations that you want on the object (in your case, rotations around an axis), which updates the matrix. Popping it restores the matrix to the state it was in before the rotation was applied. You can then apply each rotation independently of the others.

How to zoom out on an object with glTranslatef?

I'm trying to zoom out from a polygon with glTranslatef. However, whatever number I put in Z (trying to zoom out) inside the glTranslatef call, I get a black window. Here is the code:
glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
glClear (GL_COLOR_BUFFER_BIT);
glPushMatrix ();
glTranslatef(0, 0, 0.9f); //Here I'm translating
glBegin (GL_POLYGON);
glColor3f(100, 100, 0); glVertex2f(-1.0f, -1.0f);
glColor3f(100, 0, 100); glVertex2f(-1.0f, 1.0f);
glColor3f(25, 25, 25); glVertex2f(1.0f, 1.0f);
glColor3f(100, 50, 90); glVertex2f(1.0f, -1.0f);
glEnd ();
glPopMatrix ();
SwapBuffers (hDC);
Sleep (1);
I tried the following numbers in Z:
0.9 (works)
-0.9 (works)
1.1 (doesn't work)
-1.1 (doesn't work)
Do I need some other code for this or I'm doing it wrong?
If you haven't specified a projection matrix, the default is an orthographic (non-perspective) projection with left/right, top/bottom, and near/far all spanning -1 to 1.
So translating outside that range means the vertices won't be drawn at all.
The reason this does nothing is that you have no transformation matrices set up.
Right now you are drawing in a coordinate space known as Normalized Device Coordinates, which has the viewing volume encompass the range [-1.0, 1.0] in all directions. Any point existing outside that range is clipped.
Vertices specified with glVertex2f (...) are implicitly placed at z=0.0 and translating more than 1.0 unit along the Z-axis will push your vertices outside the viewing volume. This is why -1.1 and 1.1 fail, while 0.9 and -0.9 work fine.
Even if you translate to a position within the viewing volume, without a perspective projection, translating something along the Z-axis is not going to change its size. The only thing that will happen is that eventually the object will be translated far enough that it is clipped and suddenly disappears (which you already experienced with values > 1.0 or < -1.0).
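If the goal really is to zoom out by moving along Z, you first need a perspective projection, for example with gluPerspective, and then keep the geometry between the near and far planes. A rough sketch, assuming GLU is available and a 640x480 window:
// Set up once (or whenever the window is resized).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 640.0 / 480.0,  // vertical field of view, aspect ratio
               0.1, 100.0);          // near and far clip planes
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Now translating along -Z moves the polygon away from the camera,
// so it appears smaller (an actual zoom out).
glTranslatef(0.0f, 0.0f, -5.0f);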

How to draw a full screen quad and still see the objects behind it

I am creating a 3D game. I have objects in my game. When an enemy hits my position I want my screen to go red for a short time. I have chosen to do this by trying to render a full-screen red square at my camera position. This is my attempt, which is in my render method.
RenderQuadTerrain();
//Draw the skybox
CreateSkyBox(vNewPos.x, vNewPos.y, vNewPos.z,3500,3000,3500);
DrawCoins();
CollisionTest(g_Camera.Position().x, g_Camera.Position().y, g_Camera.Position().z);
DrawEnemy();
DrawEnemy1();
//Draw SecondaryObjects models
DrawSecondaryObjects();
//Apply lighting effects
LightingEffects();
escapeAttempt();
if(hitbyenemy==true){
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive blending
float blendFactor = 1.0;
glColor3f(blendFactor ,0,0); // when blendFactor = 0, the quad won't be visible. When blendFactor=1, the scene will be bathed in redness
glBegin(GL_QUADS); // Draw A Quad
glVertex3f(-1.0f, 1.0f, 0.0f); // Top Left
glVertex3f( 1.0f, 1.0f, 0.0f); // Top Right
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glEnd();
}
All this does, however, is turn all of the objects in my game a transparent colour, and I can't see the square anywhere. I don't even know how to position the quad. I'm very new to OpenGL.
[Screenshot: how my game looks without an attempt to render a quad]
[Screenshot: how my game looks after my attempt]
[Screenshot: with Kevin's code and glDisable(GL_DEPTH_TEST)]
EDIT: I have changed the code to the paste below; it still looks like image 1.
http://pastebin.com/eiVFcQqM
There are several possible contributions to the problem:
You probably want regular blending, not additive blending; additive blending will not turn white, yellow, or purple objects red. Change the blend func to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and use a color of glColor4f(1, 0, 0, blendFactor);
You should glDisable(GL_DEPTH_TEST); while drawing the overlay, to prevent it from being hidden by other geometry, and reenable it afterward (or use glPush/PopAttrib(GL_ENABLE_BIT)).
The projection and modelview matrices should be the identity, to ensure a quad with those coordinates covers the entire screen. (However, you may have that implicitly already, since you say it is affecting the full screen, just not in the right way.)
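Putting those three suggestions together, the overlay part could look roughly like the sketch below (based on the variables in the question, untested against the rest of your code):
if (hitbyenemy)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();                 // identity projection: quad coords cover the screen
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();                 // identity modelview

    glDisable(GL_DEPTH_TEST);         // don't let scene geometry hide the overlay
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    float blendFactor = 1.0f;         // 0 = invisible, 1 = fully red
    glColor4f(1.0f, 0.0f, 0.0f, blendFactor);
    glBegin(GL_QUADS);                // quad covering the whole [-1, 1] range
    glVertex3f(-1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);          // restore previous state

    glPopMatrix();                    // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                    // restore projection
    glMatrixMode(GL_MODELVIEW);
}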
If these suggestions do not fix it, please edit your question showing screenshots of your game with and without the red flash so we can understand the problem better.

OpenGL - Map points on a surface

I am looking for a technique in OpenGL that I can use in order to map color points on a surface.
Each point is defining a display color and three coordinates (X, Y, Z).
The surface on which to map those data is built from all the points' coordinates in the main usage (complex shape) but can be built normally from standard shape such as a cone or a sphere.
Since there are voids between the points (for example one millimeter step between two points along the X axis), it would be also needed to interpolate the points data on the surface.
I am thinking about building bitmaps from the points and then applying those bitmaps on my surfaces but I am wondering if OpenGL does have a feature that allow to do that in a "smart way".
It sounds to me like what you are asking for is basic OpenGL behaviour.
If you draw a triangle:
glBegin(GL_TRIANGLES);
glColor3f(1.0f,0.0f,0.0f); // Red
glVertex3f( 0.0f, 1.0f, 0.0f); // Top vertex
glColor3f(0.0f,1.0f,0.0f); // Green
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom left vertex
glColor3f(0.0f,0.0f,1.0f); // Blue
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom right vertex
glEnd();
The result is a smoothly (if garishly) coloured solid triangle.
So your problem is to construct a series of polygons (possibly just triangles) which cover your surface and have the point set as vertices.
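For instance, a regular grid of measured points can be drawn as triangle strips, and OpenGL will interpolate the colours across each triangle for you. A sketch only; the ColorPoint struct and the row-by-row layout are assumptions about your data:
struct ColorPoint
{
    float x, y, z;   // position
    float r, g, b;   // colour measured at that position
};

// points is assumed to be stored row by row: rows * cols entries.
void DrawSurface(const ColorPoint* points, int rows, int cols)
{
    for (int row = 0; row + 1 < rows; ++row)
    {
        glBegin(GL_TRIANGLE_STRIP);   // one strip per pair of adjacent rows
        for (int col = 0; col < cols; ++col)
        {
            const ColorPoint& a = points[row * cols + col];        // current row
            const ColorPoint& b = points[(row + 1) * cols + col];  // next row
            glColor3f(a.r, a.g, a.b); glVertex3f(a.x, a.y, a.z);
            glColor3f(b.r, b.g, b.b); glVertex3f(b.x, b.y, b.z);
        }
        glEnd();
    }
}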
For a great intro to OpenGL, see NeHe's tutorials, including the above example.

C++/OpenGL - Rotating a rectangle

For my project I needed to rotate a rectangle. I thought that would be easy, but I'm getting unpredictable behavior when running it.
Here is the code:
glPushMatrix();
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0, 0);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(width_sprite_, 0);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(width_sprite_, height_sprite_);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(0, height_sprite_);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
The problem with that is that my rectangle translates somewhere in the window while rotating. In other words, the rectangle doesn't keep the position vec_vehicle_position_.x and vec_vehicle_position_.y.
What's the problem ?
Thanks
You need to flip the order of your transformations:
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
becomes
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
To elaborate on the previous answers:
Transformations in OpenGL are performed via matrix multiplication. In your example you have:
M_r - the rotation transform
M_t - the translation transform
v - a vertex
and you had applied them in the order:
M_r * M_t * v
Using parentheses to clarify:
(M_r * (M_t * v))
We see that the vertex is transformed by the closest matrix first, which in this case is the translation. It can be a bit counterintuitive, because it requires you to specify the transformations in the order opposite to the one you want them applied in. But if you think of how the transforms are combined on the matrix stack it should hopefully make more sense (each new transform is post-multiplied onto the current matrix, so it ends up closest to the vertex).
Hence, in order to get your desired result, you needed to specify the transforms in the opposite order.
Inertiatic provided a very good response. From a code perspective, your transformations will happen in the reverse order they appear. In other words, transforms closer to the actual drawing code will be applied first.
For example:
glRotate();
glTranslate();
glScale();
drawMyThing();
Will first scale your thing, then translate it, then rotate it. You effectively need to "read your code backwards" to figure out which transforms are being applied in which order. Also keep in mind what the state of these transforms is when you push and pop the model-view stack.
Make sure the rotation is applied before the translation.