OpenGL - Map points on a surface - C++

I am looking for a technique in OpenGL that I can use to map color points onto a surface.
Each point defines a display color and three coordinates (X, Y, Z).
In the main use case the surface is built from the points' coordinates themselves (a complex shape), but it can also be a standard shape such as a cone or a sphere.
Since there are gaps between the points (for example, a one-millimeter step between two points along the X axis), the point data also needs to be interpolated across the surface.
I am thinking about building bitmaps from the points and then applying those bitmaps to my surfaces, but I am wondering whether OpenGL has a feature that allows this to be done in a "smart way".

It sounds to me like what you are asking for is basic OpenGL behaviour.
If you draw a triangle:
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);     // Red
    glVertex3f( 0.0f,  1.0f, 0.0f);  // Top vertex
    glColor3f(0.0f, 1.0f, 0.0f);     // Green
    glVertex3f(-1.0f, -1.0f, 0.0f);  // Bottom left vertex
    glColor3f(0.0f, 0.0f, 1.0f);     // Blue
    glVertex3f( 1.0f, -1.0f, 0.0f);  // Bottom right vertex
glEnd();
The result is a smoothly (if garishly) coloured solid triangle: OpenGL interpolates the vertex colors across the face for you.
So your problem reduces to constructing a series of polygons (possibly just triangles) that cover your surface and have your point set as vertices.
For a great intro to OpenGL, see NeHe's tutorials, which include the above example.
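Here is a minimal sketch of that idea (the GridPoint struct and the W/H grid dimensions are illustrative assumptions, not from the question): given a regular grid of colored points, cover the surface with triangle strips and let OpenGL interpolate the colors across the gaps between points.

const int W = 64, H = 64; // hypothetical grid dimensions

struct GridPoint { float x, y, z; float r, g, b; };

void drawSurface(const GridPoint grid[H][W])
{
    // One triangle strip per pair of adjacent rows.
    for (int row = 0; row + 1 < H; ++row) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int col = 0; col < W; ++col) {
            const GridPoint &a = grid[row][col];
            const GridPoint &b = grid[row + 1][col];
            glColor3f(a.r, a.g, a.b); glVertex3f(a.x, a.y, a.z);
            glColor3f(b.r, b.g, b.b); glVertex3f(b.x, b.y, b.z);
        }
        glEnd();
    }
}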

Related

How to zoom out on an object with glTranslatef?

I'm trying to zoom out from a polygon with glTranslatef. However, whatever number I put in Z (trying to zoom out) inside the glTranslatef call, I still get a black window. Here is the code:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
glTranslatef(0, 0, 0.9f); // Here I'm translating
glBegin(GL_POLYGON);
    glColor3f(100, 100, 0); glVertex2f(-1.0f, -1.0f);
    glColor3f(100, 0, 100); glVertex2f(-1.0f,  1.0f);
    glColor3f(25, 25, 25);  glVertex2f( 1.0f,  1.0f);
    glColor3f(100, 50, 90); glVertex2f( 1.0f, -1.0f);
glEnd();
glPopMatrix();
SwapBuffers(hDC);
Sleep(1);
I tried the following values for Z:
0.9 (works)
-0.9 (works)
1.1 (doesn't work)
-1.1 (doesn't work)
Do I need some other code for this, or am I doing it wrong?
If you haven't specified a projection matrix, the default is an orthographic (non-perspective) projection with left/right, top/bottom, and near/far all spanning [-1, 1].
So translating vertices outside that volume means they won't be drawn at all.
The reason this does nothing is that you have no transformation matrices set up.
Right now you are drawing in a coordinate space known as Normalized Device Coordinates, which has the viewing volume encompass the range [-1.0, 1.0] in all directions. Any point existing outside that range is clipped.
Vertices specified with glVertex2f (...) are implicitly placed at z=0.0 and translating more than 1.0 unit along the Z-axis will push your vertices outside the viewing volume. This is why -1.1 and 1.1 fail, while 0.9 and -0.9 work fine.
Even if you translate to a position within the viewing volume, without a perspective projection, translating something along the Z-axis is not going to change its size. The only thing that will happen is that eventually the object will be translated far enough that it is clipped and suddenly disappears (which you already experienced with values > 1.0 or < -1.0).
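If you actually want translation along Z to read as a zoom, you need a perspective projection. A minimal sketch (assuming GLU is linked and that width and height hold your window size):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f); // push the quad 5 units away; a larger magnitude now makes it smaller on screen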

How do I specify texture coordinates for a GL_QUAD_STRIP?

I am trying to understand how to specify texture coordinates for a GL_QUAD_STRIP.
I have managed to get things working for one quad:
float vertices[] = { 0.0f, 0.0f, 1.0f,   +1.0f, 0.0f, 0.0f,   // bottom line
                     0.0f, 1.0f, 1.0f,   +1.0f, 1.0f, 0.0f }; // top line
unsigned int indices[] = { 2, 0,   // x = 0
                           3, 1 }; // x = +1
float textureCoordinates[] = { 1.0f, 0.0f,
                               0.0f, 0.0f,
                               1.0f, 1.0f,
                               0.0f, 1.0f };
...
glBindBuffer(GL_ARRAY_BUFFER, 0); // unbinds any buffer object previously bound
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibufferid);
glDrawElements(GL_QUAD_STRIP, 4, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
And here is how the result looks (white rectangle with image, rest is drawn on to help explain):
However I do not understand the logic behind the choice of textureCoordinates[] :-(.
The first texture coordinate is (1,0); I would assume that this corresponds to the lower right corner?
Also I would assume that when OpenGL reads the first index, 2, it uses this to look up the vertex (0,1,1): the upper left corner. Next it reads the first texture coordinate, (1,0).
But as mentioned above I would assume this to be the lower right corner of the texture!?
However, the texture is shown unrotated, so this cannot be the case!?
Just like the vertices, the texture coordinates are also selected based on the indices used by glDrawElements(). So the first texture coordinate is not (1,0), but (1,1) because the first index is 2. Vertices and coordinates would be according to the following table, where i = index, v = vertex and t = texture coordinate. (I'll only take the x and y coordinates into consideration for the vertices, as the z coordinate doesn't really matter in this case.)
i   v       t
2   (0,1)   (1,1)
0   (0,0)   (1,0)
3   (1,1)   (0,1)
1   (1,0)   (0,0)
If we draw this on a piece of paper, we can see that the coordinates make more sense this way, since the indices matter. (I recommend that you do this! I had to do it to understand what was going on.) Notice in the table how the y coordinates match perfectly between the vertex and texture coordinate for a given index. But the x coordinates don't match: when the vertex has x = 0, the texture coordinate has x = 1, and vice versa. I assume this would make the image appear mirrored around the y axis instead of rotated in any way.

What does the original image look like? Is it mirrored compared to what we see in the image you posted, so that the building is on the left? If so, the texture coordinates would be the explanation. In that case, texture coordinates 2 and 3 should switch places.
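If that mirroring diagnosis is right, the table suggests pairing each index with a texture coordinate whose x and y match its vertex. Here is one consistent assignment as a hedged sketch (not verified against the actual image):

float textureCoordinates[] = { 0.0f, 0.0f,   // index 0 -> vertex (0,0)
                               1.0f, 0.0f,   // index 1 -> vertex (1,0)
                               0.0f, 1.0f,   // index 2 -> vertex (0,1)
                               1.0f, 1.0f }; // index 3 -> vertex (1,1)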
In case you are curious, you could take a look at the OpenGL 2.1 specification on page 18, Figure 2.5(a), to see why the vertex indices were selected as they were. It would create a quad with vertices specified in a counterclockwise direction when projected on the screen. This is good because the initial value for glFrontFace() is GL_CCW, which means we see the front face of the polygons in the rendered image and the polygons would not have been culled if culling was enabled (see glCullFace()). (Culling is not enabled by default though, so it may or may not have mattered in your case.)
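For reference, a short sketch of the state that paragraph refers to (the values shown are the GL defaults, except that culling itself must be enabled explicitly):

glFrontFace(GL_CCW);     // default: counterclockwise polygons are front-facing
glCullFace(GL_BACK);     // default: back faces are the ones culled
glEnable(GL_CULL_FACE);  // culling is NOT enabled by default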
I hope this helped. Do comment if something is unclear!

How to draw a full screen quad and still see the objects behind it

I am creating a 3D game. I have objects in my game. When an enemy hits my position, I want the screen to go red for a short time. I have chosen to do this by trying to render a full-screen red quad at my camera position. This is my attempt, which is in my render method:
RenderQuadTerrain();
//Draw the skybox
CreateSkyBox(vNewPos.x, vNewPos.y, vNewPos.z,3500,3000,3500);
DrawCoins();
CollisionTest(g_Camera.Position().x, g_Camera.Position().y, g_Camera.Position().z);
DrawEnemy();
DrawEnemy1();
//Draw SecondaryObjects models
DrawSecondaryObjects();
//Apply lighting effects
LightingEffects();
escapeAttempt();
if (hitbyenemy == true) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive blending
    float blendFactor = 1.0f;
    glColor3f(blendFactor, 0, 0); // when blendFactor = 0, the quad won't be visible; when blendFactor = 1, the scene will be bathed in redness
    glBegin(GL_QUADS);                  // Draw a quad
        glVertex3f(-1.0f,  1.0f, 0.0f); // Top left
        glVertex3f( 1.0f,  1.0f, 0.0f); // Top right
        glVertex3f( 1.0f, -1.0f, 0.0f); // Bottom right
        glVertex3f(-1.0f, -1.0f, 0.0f); // Bottom left
    glEnd();
}
All this does, however, is give all of the objects in my game a transparent colour, and I can't see the quad anywhere. I don't even know how to position the quad. I'm very new to OpenGL.
How my game looks without an attempt to render a quad:
How my game looks after my attempt:
With Kevin's code and glDisable(GL_DEPTH_TEST);
EDIT: I have changed the code to the below paste..still looks like image 1.
http://pastebin.com/eiVFcQqM
There are several possible contributions to the problem:
You probably want regular blending, not additive blending; additive blending will not turn white, yellow, or purple objects red. Change the blend func to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and use a color of glColor4f(1, 0, 0, blendFactor);
You should glDisable(GL_DEPTH_TEST); while drawing the overlay, to prevent it from being hidden by other geometry, and reenable it afterward (or use glPush/PopAttrib(GL_ENABLE_BIT)).
The projection and modelview matrices should be the identity, to ensure a quad with those coordinates covers the entire screen. (However, you may have that implicitly already, since you say it is affecting the full screen, just not in the right way.)
If these suggestions do not fix it, please edit your question showing screenshots of your game with and without the red flash so we can understand the problem better.
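Putting those suggestions together, here is a hedged sketch of the overlay drawing (variable names are taken from the question; the matrix save/restore is an assumption about the surrounding state):

if (hitbyenemy) {
    // Use identity matrices so the quad's [-1,1] coordinates span the screen.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST); // don't let scene geometry hide the overlay
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // regular alpha blending
    float blendFactor = 1.0f; // 0 = invisible, 1 = fully red
    glColor4f(1.0f, 0.0f, 0.0f, blendFactor);

    glBegin(GL_QUADS);
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);

    glPopMatrix();                // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                // restore projection
    glMatrixMode(GL_MODELVIEW);
}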

Putting a texture on one surface of a cube isn't working

I'm trying to put a texture on one surface of a cube (if facing the XY plane the texture would be facing you).
No texture is getting drawn, only the wireframe and I'm wondering what I'm doing wrong. I think it's the vertex coordinates?
Here's some code:
struct paperVertex {
    D3DXVECTOR3 pos;
    DWORD color;         // The vertex color
    D3DXVECTOR2 texCoor;
    paperVertex(D3DXVECTOR3 p, DWORD c, D3DXVECTOR2 t) { pos = p; color = c; texCoor = t; }
    paperVertex() { pos = D3DXVECTOR3(0, 0, 0); color = 0; texCoor = D3DXVECTOR2(0, 0); }
};
D3DCOLOR color1 = D3DCOLOR_XRGB(255, 255, 255);
D3DCOLOR color2 = D3DCOLOR_XRGB(200, 200, 200);
vertices[0] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,0)); // bottom left corner of tex
vertices[1] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,0)); // top left corner of tex
vertices[2] = paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,1)); // top right corner of tex
vertices[3] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,1)); // bottom right corner of tex
vertices[4] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
vertices[5] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[6] = paperVertex(D3DXVECTOR3(1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[7] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
D3DXCreateTextureFromFile( md3dDev, "texture.bmp", &gTexture);
md3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
md3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
md3dDev->SetTexture(0, gTexture);
md3dDev->SetStreamSource(0, mVtxBuf, 0, sizeof(paperVertex));
md3dDev->SetVertexDeclaration(paperDecl);
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
md3dDev->SetIndices(mIndBuf);
md3dDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, VTX_NUM, 0, NUM_TRIANGLES);
Disclaimer: I have no Direct3D experience, but solid OpenGL and general computer graphics experience. Since the underlying concepts don't really differ, I'll attempt an answer that I'm 99% sure is correct.
You call md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME) immediately before rendering and wonder why only the wireframe is drawn?
Keep in mind that using a texture doesn't magically turn a wireframe model into a solid model. It is still a wireframe model with the texture applied only to the wireframe. You can only draw the whole primitive as wireframe or not.
Likewise, using texture coordinates of (0,0) does not magically disable texturing for individual faces. You can only draw the whole primitive textured or not, though you might play with the texture coordinates and the texture's wrapping mode (and maybe the texture border) to make the "non-textured" faces use a uniform color from the texture and thus look untextured.
But in general to achieve such deviating render styles (like textured/non-textured, but especially wireframe/solid) in a single primitive, you won't get around splitting the primitive into multiple ones and drawing each one with its dedicated render style.
EDIT: According to your comment: If you don't need wireframe, why enable it then? Besides disabling wireframe, with your current texture coordinates the other faces won't just have a single color from the texture but some strange distorted version of the texture. This is because your vertices (and their texture coordinates) are shared between different faces, but the texture coordinates at the moment are created only for the front face to look reasonable.
In such a situation, you won't get around duplicating vertices, so that each face uses a set of 4 unique vertices. In the case of a cube you won't actually need an index array anymore, because each face needs its own vertices. This is due to the fact that a vertex conceptually represents all of the vertex's attributes (position, color, texCoord, ...), and you cannot have two vertices sharing a position but having different texture coordinates (well, you can, but then you need two distinct vertices). Once you've duplicated the vertices accordingly, you can give each of the corner vertices its respective texture coordinates (the usual [0,1] quad if you want the face textured normally, or all 0s if you want it to have a single color, in this case the color of the bottom left (or top left in D3D?) corner of the texture).
The same problem arises if you want to light the cube and need per-face normals instead of interpolated per-vertex normals. In that case you also have to introduce duplicate vertices deviating only in their normal attribute. Always keep in mind that a vertex conceptually consists of all the vertex attributes; if two vertices have the same position but a different color/normal/texCoord/..., they are conceptually (and practically) different vertices.
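As a concrete illustration of the duplication idea, here is a hedged sketch reusing the question's paperVertex struct (note that D3D places texture coordinate (0,0) at the top-left): four dedicated vertices for just the front face, using the full [0,1] texture quad.

// Front face only; the other five faces would get their own duplicated
// vertices with their own texture coordinates.
paperVertex front[4] = {
    paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 1.0f)), // bottom left
    paperVertex(D3DXVECTOR3(-1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 0.0f)), // top left
    paperVertex(D3DXVECTOR3( 1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 0.0f)), // top right
    paperVertex(D3DXVECTOR3( 1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 1.0f)), // bottom right
};

// And, since wireframe isn't actually wanted here, render solid:
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);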

Using glRotate and glTranslate with collision detection

Say I use glRotatef to rotate the current view based on some arbitrary user input (e.g., if the left key is pressed then rtri += 2.5f):
glRotatef(rtri,0.0f,1.0f,0.0f);
Then I draw the triangle in the rotated position:
glBegin(GL_TRIANGLES);               // Drawing using triangles
    glVertex3f( 0.0f,  1.0f, 0.0f);  // Top
    glVertex3f(-1.0f, -1.0f, 0.0f);  // Bottom left
    glVertex3f( 1.0f, -1.0f, 0.0f);  // Bottom right
glEnd();                             // Finished drawing the triangle
How do I get the resulting transformed vertices for use in collision detection? Or will I have to apply the transform manually myself, thus doubling up the work?
The reason I ask is that I wouldn't mind implementing display lists.
The objects you use for collision detection are usually not the objects you use for display. They are usually simpler and faster.
So yes, the way to do it is to maintain the transformation you're using manually, but you wouldn't be doubling up the work, because the objects are different.
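To make the manual bookkeeping concrete, here is a minimal sketch (names and conventions are illustrative, not from the question) of applying the same Y-axis rotation that glRotatef(rtri, 0.0f, 1.0f, 0.0f) performs, so the collision points stay in sync with the rendered triangle:

#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the +Y axis by the same angle passed to glRotatef.
Vec3 rotateY(const Vec3 &p, float angleDegrees)
{
    const float r = angleDegrees * 3.14159265f / 180.0f;
    const float c = std::cos(r);
    const float s = std::sin(r);
    // Same matrix OpenGL builds for glRotatef(angle, 0, 1, 0),
    // applied to a column vector.
    return Vec3{ p.x * c + p.z * s, p.y, -p.x * s + p.z * c };
}

// Usage: transform the collider's local-space points once per frame, e.g.
// Vec3 worldTop = rotateY(Vec3{ 0.0f, 1.0f, 0.0f }, rtri);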
Your game loop should look like this (in C++ syntax):
void Scene::Draw()
{
    this->setClearColor(0.0f, 0.0f, 0.0f);
    for (std::vector<GameObject*>::iterator it = this->begin(); it != this->end(); ++it)
    {
        this->updateColliders(*it);
        glPushMatrix();
        glRotatef((*it)->rotation.angle, (*it)->rotation.x, (*it)->rotation.y, (*it)->rotation.z);
        glTranslatef((*it)->position.x, (*it)->position.y, (*it)->position.z);
        glScalef((*it)->scale.x, (*it)->scale.y, (*it)->scale.z);
        (*it)->Draw();
        glPopMatrix();
    }
    this->runNextFrame(&Scene::Draw, Scene::MAX_FPS);
}
(Note that each element of the vector is a GameObject*, so the iterator has to be dereferenced with *it before member access.)
So, for instance, if I use a basic box collider with a cube, the Draw method will:
Fill the screen with a black color (RGB (0,0,0))
For each object:
Compute the collisions using position and size information
Save the current ModelView matrix state
Transform the ModelView matrix (rotate, translate, scale)
Draw the cube
Restore the ModelView matrix state
Check the FPS and run the next frame at the right time
** The Scene class inherits from std::vector<GameObject*>, which is why it can be iterated directly.
I hope it helps! :)