Rotate texture OpenGL C++

When I make a texture from an OpenCV image, it is always rotated by 180 degrees. Why is this? Here is my code, which binds the texture and draws it on screen.
If the code is fine, please suggest how to rotate the texture properly; I can't figure it out.
glBindTexture( GL_TEXTURE_2D, slice.texture);
glLoadIdentity();
glEnable(GL_TEXTURE_2D); //enable 2D texturing
glBegin (GL_QUADS);
float nullX = slObj->rect.x/400.0;
float nullY = slObj->rect.y/300.0;
float sliceWidth = slObj->rect.width/400.0;
float sliceHeight = slObj->rect.height/300.0;
//with our vertices we have to assign a texcoord
//so that our texture has some points to draw to
glTexCoord2d(0.0,0.0); glVertex2f(nullX, nullY);
glTexCoord2d(1.0,0.0); glVertex2f(nullX + sliceWidth, nullY);
glTexCoord2d(1.0,1.0); glVertex2f(nullX + sliceWidth, nullY + sliceHeight);
glTexCoord2d(0.0,1.0); glVertex2f(nullX, nullY + sliceHeight);
glEnd();
glFlush();
UPDATE:
// initialize
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB);
glutInitWindowSize(screenWidth, screenHeight);
glClearColor(0.3,0.3,0.3,0.0);
glMatrixMode(GL_PROJECTION);
glOrtho(-400.0 ,400.0 ,-300.0 ,300.0 ,0 ,1.0);

How is your projection matrix set up?
Those coordinates are correct assuming you have a traditional projection matrix where the Y-axis increases from the bottom to top of the screen. If, on the other hand, you reversed your projection matrix so that (0,0) is effectively the top-left corner of your screen then you have a problem.
If this is the case, the texture is not really rotated, it is mirrored. There is no rotation that can produce such a situation, it is what is known as a change of chirality (also known as handedness). You can either use a traditional projection matrix where the Y-axis behaves as described earlier, or compensate when you compute your texture coordinates by flipping the second texture coordinate (T).
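For example, a sketch of the same quad with the T (second) texture coordinate flipped, reusing the variables from the snippet in the question:
// same vertices as before, but T runs from 1 at the bottom to 0 at the top,
// so the image is no longer mirrored vertically
glTexCoord2d(0.0, 1.0); glVertex2f(nullX, nullY);
glTexCoord2d(1.0, 1.0); glVertex2f(nullX + sliceWidth, nullY);
glTexCoord2d(1.0, 0.0); glVertex2f(nullX + sliceWidth, nullY + sliceHeight);
glTexCoord2d(0.0, 0.0); glVertex2f(nullX, nullY + sliceHeight);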

Since I used OpenCV to load and slice the image, I simply flip the image:
IplImage *source = cvLoadImage("space.png",1);
if(source == NULL) source = cvLoadImage("C://space.png",1);
cvFlip(source, source, 0);
Thanks all for your help!

Related

Drawing a scene in the xz plane in OpenGL

I am working on an OpenGL program. The viewing parameters are:
eye 0 -4 6
viewup 0 1 0
lookat 0 0 0
I want to draw a background rectangle (with a texture) such that I can see it from the current eye location. Right now, the eye is looking from the negative Y direction. I want to draw a rectangle that covers the entire screen, but I don't understand what coordinates to give the rectangle or how to set up the texture mapping.
What would be the code for this? Currently I have the following in my draw function:
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(-1.0, -1.0);
glVertex2f(-1.0, 1.0);
glVertex2f(1.0, 1.0);
glVertex2f(1.0, -1.0);
glEnd();
glPopMatrix();
The easiest option for drawing a background image that is independent of the camera is to draw it in normalized device coordinates (NDC) and not apply any transformations/projections to it.
To cover the whole screen, you have to draw a quad going from p = [-1, -1] to [1,1]. The texture coordinates can then be found by tex = (p + 1)/2
Normalized device coordinates are the coordinates one would normally get after applying projection and the perspective divide. They span a cube from [-1,-1,-1] to [1,1,1] where the near plane is mapped to z = -1 (at least in OpenGL, in DirectX the near plane is mapped to z=0). In your special case, the depth should not matter, as long as you draw the background plane as the first element in each frame and disable the depth-test.
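A minimal sketch of such a background quad with the fixed-function pipeline (assuming the background texture is already bound and that this is drawn first each frame):
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity(); // no projection: the vertices below are already in NDC
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST); // the background must not occlude the scene
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // tex = (p + 1) / 2
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glEnable(GL_DEPTH_TEST);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);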

Understanding the window coordinates' interpretation in OpenGL

I was trying to understand OpenGL a bit more deeply and got stuck on the issue below.
This first snippet matches my understanding, and the output is as expected.
glViewport(0, 0 ,800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -1);
glRotatef(0, 0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates (Wx, Wy, Wz) for the above snippet are
(272.00000286102295, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 368.00000667572021, 5.9604644775390625e-008)
(272.00000286102295, 368.00000667572021, 5.9604644775390625e-008)
I did a glReadPixels() and dumped the result to a bmp file. In the image I get a quad as expected at the (Wx, Wy) mentioned above (since images have their origin at the top left, I took care to subtract from the window height, i.e. 480, while verifying the bmp image). This output matched my understanding: (Wx, Wy) is used as a 2D coordinate and Wz is used for depth.
Now comes the issue. I tried the below code snippet.
glViewport(0, 0 ,800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(100, 0, -1);
glRotatef(30, 0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates for the above snippet are
(400.17224205479812, 242.03174613770986, 1.0261343689191909)
(403.24386530741430, 238.03076912806583, 0.99456100555566640)
(403.24386530741430, 241.96923087193414, 0.99456100555566640)
(400.17224205479812, 237.96825386229017, 1.0261343689191909)
When I dumped the output to a bmp file, I expected a very small parallelogram (roughly a 4 x 4 square transformed into a parallelogram) based on the (Wx, Wy) above. But this was not the case. The image had a different set of coordinates, as below:
(403, 238)
(499, 113)
(499, 366)
(403, 241)
I have listed the coordinates in clockwise order as seen in the image.
I got lost here. Can anyone please help me understand what is happening in the second case, and why?
How come I got a point at (499, 113) on the screen when it was nowhere in the calculated window coordinates?
I used gluProject() to compute the window coordinates.
Note: I'm using OpenGL 2.0. I'm just trying to understand the concepts here, so please don't suggest using versions > OpenGL 3.0.
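For reference, a sketch of how one of the window coordinates above can be obtained with gluProject(); the matrices and viewport are read back from the current GL state:
GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);
GLdouble winX, winY, winZ;
// project the first vertex of the quad, (-128, -128, 0), to window coordinates
gluProject(-128.0, -128.0, 0.0, model, proj, viewport, &winX, &winY, &winZ);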
edit
This is an update in response to the answer posted by derhass.
The homogeneous coordinates after the projection matrix for the second case are as follows:
(-0.027128123630699719, -0.53333336114883423, -66.292930483818054, -63.000000000000000)
(0.52712811245482882, -0.53333336114883423, 64.292930722236633, 65.00000000000000)
(0.52712811245482882, 0.53333336114883423, 64.292930722236633, 65.000000000000000)
(-0.027128123630699719, 0.53333336114883423, -66.292930483818054, 63.000000000000000)
So here, the vertices where z > -1 will get clipped at the near plane. When this is the case, shouldn't GL use the projected point on the z = -1 plane?
The thing you are missing here is clipping.
After this
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
you basically have a camera at the origin, looking along the -z direction, with the near plane at z=-1 and the far plane at z=-100. Now you draw a 128x128 square rotated by 30 degrees around the y (up) axis and shifted by -1 along z (and 100 along x, but that is not the crucial point here). Since you rotated the square around its center point, the z values of two of the points end up well on the camera side of the near plane, while the other two fall into the frustum (and you can see this, as those two points match your expectations).
Now directly projecting all 4 points to window space is not what GL does. It transforms the points to clip space, intersects the primitives with all 6 sides of the viewing frustum and finally projects the clipped primitives into window space for rasterization.
The projection you did is actually only meaningful for points which lie inside the frustum. Two of your points lie behind the camera, and projecting points behind the camera creates a mirrored image of those points in front of the camera.
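To illustrate the last point, here is a sketch of a naive perspective divide applied to the first clip-space vertex from the question's edit (the one with w = -63). The signs of x and y flip relative to clip space, which is exactly the mirroring described above; GL never performs this divide for such a vertex, because the primitive is clipped first:
// clip-space vertex from the question's edit; w < 0 means the point lies behind the camera
double clip[4] = { -0.027128, -0.533333, -66.292930, -63.0 };
double ndcX = clip[0] / clip[3]; // ~0.00043, sign flipped relative to clip x
double ndcY = clip[1] / clip[3]; // ~0.00847, sign flipped relative to clip y
double ndcZ = clip[2] / clip[3]; // ~1.05, outside [-1, 1], so it would be clipped anyway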

OpenGL connecting a skydome and a flat ground plane

I am building a simple city with OpenGL and GLUT. I created a textured skydome, and now I would like to connect it to a flat plane to give the appearance of a horizon. For relative scale: the skydome has a radius of 3.0 with the depth mask turned off, has only the camera rotation applied, and sits over the camera. A building is about 30.0 in size, and I am looking down at it from y=500.0.
I have a ground plane that is 1000x1000, textured with a 1024x1024 texture that looks good up close when I am against the ground. The texture is loaded with GL_REPEAT and uses a texture coordinate of 1000 so that it repeats 1000 times.
Connecting the skydome with the flat ground plane is where I am having some issues. I will list a number of things I have tried.
Issues:
1) When I rotate my heading, because of the square shape of the plane, I see an edge (like in the attached picture) instead of a flat horizon.
2) I have tried a circular ground plane instead, but then I get a curved horizon that becomes more pronounced as I fly up.
3) To avoid the black gap between the infinite skydome and my limited-size flat plane, I set a limit on how far up I can fly and shift the skydome slightly down as I go up, so the gap isn't visible when I am up high. Are there other methods to fade the plane into the skydome and deal with the gap when it varies in size at different locations (i.e. a circle circumscribing a square)? I tried applying a fog color matching the horizon, but I get a purple haze over the white ground.
4) If I attach the ground as the bottom lid of the skydome hemisphere, it looks weird when I zoom in and out: the textured ground appears to slide and becomes disconnected from my building.
5) I have tried drawing an infinitely large plane using the vanishing-point concept by setting w=0 (Rendering infinitely large plane).
The horizon does look flat, but texturing it properly seems difficult, so I am stuck with a single color.
6) I disable lighting for the skydome. If I enable lighting for my ground plane, then at certain pitch angles the plane looks black while the sky is still completely lit, which looks unnatural.
7) If I make my plane larger, say 10000x10000, the horizon looks flat, but if I press an arrow key to adjust my heading, the horizon shakes for a couple of seconds before stabilizing. What is causing this, and how can I prevent it? Relatedly, tiling and texturing a 10000x10000 ground plane does not seem to affect my frame rate compared to 1000x1000; why is that? Wouldn't more tiling mean more work?
8) I read about a math-based approach that computes a clipping rectangle to draw the horizon, but I wonder if there are simpler approaches: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-super-simple-method-for-creating-infinite-sce-r2769
Most threads I have read about horizons say to use a skybox or a skydome, but I haven't come across a tutorial that specifically covers merging a skydome with a large ground plane nicely; a pointer to such a tutorial would be great. Feel free to answer any part of the question by indicating its number; I didn't want to break them up because they are all related. Thanks.
Here is some relevant code on my setup:
void Display()
{
// Clear frame buffer and depth buffer
glClearColor (0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
camera.Update();
GLfloat accumulated_camera_rotation_matrix[16];
GetAccumulatedRotationMatrix(accumulated_camera_rotation_matrix);
SkyDome_Draw(accumulated_camera_rotation_matrix);
FlatGroundPlane_Draw();
// draw buildings
// swap buffers when GLUT_DOUBLE double buffering is enabled
glutSwapBuffers();
}
void SkyDome_Draw(GLfloat (&accumulated_camera_rotation_matrix)[16])
{
glPushMatrix();
glLoadIdentity();
glDepthMask(GL_FALSE);
glDisable(GL_LIGHTING);
glMultMatrixf(accumulated_camera_rotation_matrix);
// 3.0f is the radius of the skydome
// If we offset by 0.5f in camera.ground_plane_y_offset, we can offset by another 1.5f
// at skydome_sky_celing_y_offset of 500. 500 is our max allowable altitude
glTranslatef( 0, -camera.ground_plane_y_offset - camera.GetCameraPosition().y / camera.skydome_sky_celing_y_offset / 1.5f, 0);
skyDome->Draw();
glEnable(GL_LIGHTING);
glDepthMask(GL_TRUE);
glEnable(GL_CULL_FACE);
glPopMatrix();
}
void GetAccumulatedRotationMatrix(GLfloat (&accumulated_rotation_matrix)[16])
{
glGetFloatv(GL_MODELVIEW_MATRIX, accumulated_rotation_matrix);
// zero out the translation, which is in elements m12, m13, m14
accumulated_rotation_matrix[12] = 0;
accumulated_rotation_matrix[13] = 0;
accumulated_rotation_matrix[14] = 0;
}
GLfloat GROUND_PLANE_WIDTH = 1000.0f;
void FlatGroundPlane_Draw(void)
{
glEnable(GL_TEXTURE_2D);
glBindTexture( GL_TEXTURE_2D, concreteTextureId);
glBegin(GL_QUADS);
glNormal3f(0, 1, 0);
glTexCoord2d(0, 0);
// repeat the texture 1000 times across the width of the plane
GLfloat textCoord = GROUND_PLANE_WIDTH;
glVertex3f( -GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
// go beyond 1 for texture coordinate so it repeats
glTexCoord2d(0, textCoord);
glVertex3f( -GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, textCoord);
glVertex3f( GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, 0);
glVertex3f( GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
glEnd();
glDisable(GL_TEXTURE_2D);
}
void Init()
{
concreteTextureId = modelParser->LoadTiledTextureFromFile(concreteTexturePath);
}
GLuint ModelParser::LoadTiledTextureFromFile(string texturePath)
{
RGBImage image; // wrapping 2-d array of data
image.LoadData(texturePath);
GLuint texture_id;
UploadTiledTexture(texture_id, image);
image.ReleaseData();
return texture_id;
}
void ModelParser::UploadTiledTexture(unsigned int &iTexture, const RGBImage &img)
{
glGenTextures(1, &iTexture); // create the texture
glBindTexture(GL_TEXTURE_2D, iTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// the texture would wrap over at the edges (repeat)
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.Width(), img.Height(), GL_RGB, GL_UNSIGNED_BYTE, img.Data());
}
Try using a randomized heightmap rather than a flat plane. Not only will this look more realistic, it will also make the edge of the ground plane invisible due to the changes in elevation. You can also try adding some vertex fog to blur the area where the skybox and ground plane meet. That's roughly what I did here.
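For reference, a minimal fixed-function fog setup might look like the sketch below; the fog color and start/end distances are assumptions and would need to be tuned so the far part of the ground fades toward the sky color:
GLfloat fogColor[4] = { 0.7f, 0.75f, 0.8f, 1.0f }; // assumed: roughly the horizon/sky color
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 300.0f); // assumed: start fading well before the edge of the plane
glFogf(GL_FOG_END, 900.0f);   // assumed: fully fogged near the edge of the ground plane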
A lot of 3D rendering relies on tricks to make things look realistic. If you look at most games, they have either a whole bunch of foreground objects that obscure the horizon, or they have "mountains" in the distance (a la heightmaps) that also obscure the horizon.
Another idea is to map your ground plane onto a sphere, so that it curves down like the earth does. That might make the horizon look more earthlike. This is similar to what you did with the circular ground plane.

How to clip with circles in OpenGL

I'm wondering if it is possible to simulate the effect of looking through a keyhole in OpenGL.
I have my 3D scene drawn, but I want to make everything black except for a central circle.
I tried this solution, but it does the complete opposite of what I want:
// here i draw my 3D scene
// Begin 2D orthographic mode
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
GLint viewport [4];
glGetIntegerv(GL_VIEWPORT, viewport);
gluOrtho2D(0, viewport[2], viewport[3], 0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// Here I draw a circle in the center of the screen
float r = 50;
float x = viewport[2] / 2.0f;  // screen center
float y = viewport[3] / 2.0f;
glBegin(GL_TRIANGLE_FAN);
glVertex2f(x, y);
for( int n = 0; n <= 100; ++n )
{
float const t = 2*M_PI*(float)n/(float)100;
glVertex2f(x + sin(t)*r, y + cos(t)*r);
}
glEnd();
// end orthographic 2D mode
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
What I get is a circle drawn in the center, but I would like to obtain its complement...
Like everything else in OpenGL, there are a few ways to do this. Here are two off the top of my head.
Use a circle texture: (recommended)
Draw the scene.
Switch to an orthographic projection, and draw a quad over the entire screen using a texture which has a white circle at the center. Use the appropriate blending function:
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_COLOR);
/* Draw a full-screen quad with a white circle at the center */
Alternatively, you can use a pixel shader to generate the circular shape.
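As a sketch, the white-circle mask texture could also be generated procedurally; the resolution and radius here are assumptions:
const int SIZE = 512; // assumed mask resolution
static unsigned char mask[SIZE][SIZE];
for (int y = 0; y < SIZE; ++y)
{
    for (int x = 0; x < SIZE; ++x)
    {
        float dx = x - SIZE / 2.0f;
        float dy = y - SIZE / 2.0f;
        bool inside = dx * dx + dy * dy < (SIZE / 4.0f) * (SIZE / 4.0f); // assumed radius: a quarter of the texture size
        mask[y][x] = inside ? 255 : 0; // white circle on a black background
    }
}
GLuint maskTex;
glGenTextures(1, &maskTex);
glBindTexture(GL_TEXTURE_2D, maskTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, SIZE, SIZE, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, mask);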
Use a stencil test: (not recommended, but it may be easier if you don't have textures or shaders)
Clear the stencil buffer, and draw the circle into it.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
/* draw circle */
Enable the stencil test for the remainder of the scene.
glEnable(GL_STENCIL_TEST)
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
/* Draw the scene */
Footnote: I recommend avoiding immediate mode anywhere in your code and using arrays instead. This will improve the compatibility, maintainability, readability, and performance of your code: a win in all areas.
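For instance, the circle fan from the question could be drawn with a client-side vertex array roughly like this (a sketch that fills the same positions as the loop above):
GLfloat verts[2 * 102]; // center vertex plus 101 points around the circle
verts[0] = x; verts[1] = y;
for (int n = 0; n <= 100; ++n)
{
    float t = 2 * M_PI * (float)n / 100.0f;
    verts[2 * (n + 1)]     = x + sin(t) * r;
    verts[2 * (n + 1) + 1] = y + cos(t) * r;
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glDrawArrays(GL_TRIANGLE_FAN, 0, 102);
glDisableClientState(GL_VERTEX_ARRAY);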

Previous calls to GL.Color3 are making my texture use the wrong colors

I'm making a 2D OpenGL game. When rendering a frame I need to first draw some computed quad geometry and then draw some textured sprites. When the body of my render method only draws the sprites, everything works fine. However, when I draw my geometric quads before the sprites, the texture of the sprite takes on the color of the last GL.Color3 used previously. How do I tell OpenGL (well, OpenTK) "OK, we are done drawing geometry and it's time to move on to sprites"?
Here is what the render code looks like:
// Let's do some geometry
GL.Begin(BeginMode.Quads);
GL.Color3(_dashboardBGColor); // commenting this out makes my sprites look right
int shakeBuffer = 100;
GL.Vertex2(0 - shakeBuffer, _view.DashboardHeightPixels);
GL.Vertex2(_view.WidthPixelsCount + shakeBuffer, _view.DashboardHeightPixels);
GL.Vertex2(_view.WidthPixelsCount + shakeBuffer, 0 - shakeBuffer);
GL.Vertex2(0 - shakeBuffer, 0 - shakeBuffer);
GL.End();
// let's do some sprites
GL.Begin(BeginMode.Quads);
GL.BindTexture(TextureTarget.Texture2D, _rockTextureId);
float baseX = 200;
float baseY = 200;
GL.TexCoord2(0, 0); GL.Vertex2(baseX, baseY);
GL.TexCoord2(1, 0); GL.Vertex2(baseX + _rockTextureWidth, baseY);
GL.TexCoord2(1, 1); GL.Vertex2(baseX + _rockTextureWidth, baseY - _rockTextureHeight);
GL.TexCoord2(0, 1); GL.Vertex2(baseX, baseY - _rockTextureHeight);
GL.End();
GL.Flush();
SwapBuffers();
The default texture environment mode is GL_MODULATE, which does exactly that: it multiplies the texture color with the current vertex color.
An easy solution is to set the vertex color to (1, 1, 1, 1) before drawing a textured primitive:
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
Another solution is to change the texture environment mode to GL_REPLACE, which makes the texture color replace the vertex color and doesn't have the issue:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);