Is it possible to draw something in OpenGL by giving window pixel coordinates directly?
For example, I'd like to draw a single point in a 400x400 window (e.g. in the middle of that window). Is there any quick way to set everything up so I could just type:
glVertex3f(200.0f, 200.0f, 1.0f);?
You need to set up an orthographic projection matrix for that first.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, WindowWidth, WindowHeight, 0.0, -1.0, 1.0); // glOrtho on desktop GL; glOrthof is OpenGL ES
glMatrixMode(GL_MODELVIEW);
You can then render in window coordinates.
glPointSize(5.0f);
glBegin(GL_POINTS);
glVertex3f(100.0f, 100.0f, 0.0f); // z = 0 is safely inside the near/far range
glEnd();
This should render a point 5 pixels in diameter at window coordinates (100, 100).
Do note that this old way of rendering (immediate mode) is deprecated and you should use VBOs and the like, but it is still good for testing.
Related
My OpenGL application draws the circle as an oval instead of a circle. My code is:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 800, 0.0f, 400, 0.0f, 1.0f);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity();
glPushMatrix();
glColor3f(255.0, 255.0, 255.0);
drawRect(racket_left_x, racket_left_y, racket_width, racket_height);
drawRect(racket_right_x, racket_right_y, racket_width, racket_height);
glPopMatrix();
// drawBall();
//drawBall2();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
drawBall();
glPopMatrix();
glPopMatrix();
glutSwapBuffers();
How can I fix this?
I've tried changing the glMatrixMode calls but that doesn't seem to work. Thanks.
The projection matrix transforms all vertex data from eye coordinates to clip coordinates.
These clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
The normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1).
With an orthographic projection, the eye-space coordinates are mapped linearly to NDC.
The orthographic projection can be set up with glOrtho. If you want a projection that lets you draw in window-pixel coordinates, you have to set it up like this:
int wndWidth = 800;
int wndHeight = 400;
glOrtho( 0.0, (float)wndWidth, 0.0, (float)wndHeight, -1.0, 1.0 );
If the viewport is not square, its aspect ratio has to be considered when mapping the coordinates:
float aspect = (float)width / (float)height;
glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
You set up a proper window-sized projection matrix before you draw the rectangles (drawRect):
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 800, 0.0f, 400, 0.0f, 1.0f);
.....
drawRect( ..... );
But you "clear" the projection matrix (glLoadIdentity) and do not account for the aspect ratio of the view before you draw the circle (drawBall).
Change your code somehow like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float aspect = 800.0f/400.0f;
glOrtho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);
drawBall();
By the way, glPushMatrix pushes an element onto the matrix stack, while glPopMatrix pops an element from it. In OpenGL there is one matrix stack for each matrix mode (see glMatrixMode). The matrix modes are GL_MODELVIEW, GL_PROJECTION, and GL_TEXTURE, and all matrix operations are applied to the stack currently selected by glMatrixMode.
This means that both glPopMatrix calls at the end of your code snippet should not be there.
I want to use OpenGL to draw on top of a webcam stream. I'm using an SDL_Surface named screen_surface_ containing webcam data, which I render to the screen using:
SDL_UpdateTexture(screen_texture_, NULL, screen_surface_->pixels, screen_surface_->pitch);
SDL_RenderClear(renderer_);
SDL_RenderCopy(renderer_, screen_texture_, NULL, NULL);
Then I try to draw some geometry on top:
glLoadIdentity();
glColor3f(1.0, 0.0, 1.0);
glBegin( GL_QUADS );
glVertex3f( 10.0f, 50.0f, 0.0f ); /* Top Left */
glVertex3f( 50.0f, 50.0f, 0.0f ); /* Top Right */
glVertex3f( 50.0f, 10.0f, 0.0f ); /* Bottom Right */
glVertex3f( 10.0f, 10.0f, 0.0f ); /* Bottom Left */
glEnd( );
glColor3f(1.0, 1.0, 1.0); //<- I need this to make sure the webcam stream isn't pink?
SDL_RenderPresent(renderer_);
I have initialized OpenGL using (excerpt):
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
glViewport( 0, 0, res_width_, res_height_ );
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho(0.0f, res_width_, res_height_, 0.0f, -1.0f, 1.0f);
Subquestion: If I don't reset glColor to white, the whole webcam stream is tinted pink. I find this odd, because I thought that SDL_RenderCopy had already rendered that texture before the first call to glColor. So how does SDL_RenderCopy actually work?
Main question: I get a neat 40x40 square in the top left of the screen on top of my webcam feed (good!). However, instead of pink, it is a flickering dark purple color, seemingly dependent on the camera feed in the background. Could you please tell me what I'm overlooking?
Edit:
As per #rodrigo's comment, these are some images with the color set to R, G, B and white, respectively:
Red Square
Green Square
Blue Square
White Square
Looking at these, it seems that the underlying texture has some effect on the color. Could it be that OpenGL is still applying (some part of) the texture to the quad?
Edit:
I now suspect that the geometry is drawn using the render texture as its texture, even though I've called glDisable(GL_TEXTURE_2D). Looking at the "White Square" screenshot, you can see that the white quad is the same color as the bottom-right pixel. I guess the quad has no texture coordinates, so only the bottom-right texel is used. Knowing this, a better question: how do I disable texturing?
I have fixed the problem by simply adding
glcontext_ = SDL_GL_CreateContext(window_);
to the SDL_Init code. I think all my calls to OpenGL functions (like glDisable(GL_TEXTURE_2D)) were applied to the wrong context. Explicitly creating a context through SDL makes the right context current, I'm guessing.
Something else to look out for: when using textured geometry after SDL_RenderCopy, I had to make sure to reset
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
before calling glTexImage2d, because it uses
GL_UNPACK_ROW_LENGTH if it is greater than 0, the width argument to the pixel routine otherwise
(from the OpenGL docs)
I have a "normal" OpenGL scene and want to overlay it with a simple quad.
gl.glMatrixMode(GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glMatrixMode(GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glTranslatef(-0.8f, 0.0f, 0.0f);
// draw background
gl.glBegin(GL_QUADS); // of the color cube
gl.glColor3f(1.0f, 0.5f, 0.5f);
gl.glVertex3f(0, 0, 1.0f);
gl.glVertex3f(0.5f, 0, 1.0f);
gl.glVertex3f(0.5f, 0.5f, 1.0f);
gl.glVertex3f(0, 0.5f, 1.0f);
gl.glEnd();
gl.glMatrixMode(GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL_MODELVIEW);
gl.glPopMatrix();
As you can see in the screenshot, the quad is distorted; the colored lines are the axes.
Now the question: how can I set an aspect ratio (like in gluPerspective) for my matrix so the quad renders square?
You need to account for the size and shape of the viewport. After projection, coordinates are in normalized device coordinates: the bottom left corner of the viewport is (-1, -1) and the top right corner is (1, 1), regardless of the viewport's shape. Your projection matrix needs to account for the shape of the viewport to compensate and make your quad square.
The ratio of the height and width of a viewport is known as the "aspect ratio".
See this page, the important section for your problem is The field of view and Image Aspect Ratio. There's too much content to post a usable excerpt here. You should have no trouble finding resources on projection matrix aspect ratios, and how to compute a correct projection matrix.
I want to create a 3D cube with OpenGL, and I want to cover each side with an image that I convert into a texture.
I compute the cube's coordinates in 2D, and I create a quad (GL_QUADS) for each side.
My problem is that when I render the textures for the cube's sides, the textures overlap each other, as you can see in this image:
my code is:
Initialization:
glGenTextures(2, textures);
glClearColor (0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_ALWAYS);
Transforming the image into a texture:
up = imread("up.png");
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, up.cols, up.rows, GL_RGB, GL_UNSIGNED_BYTE, up.data);
Display cube:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
// Set Projection Matrix
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WIDTH, HEIGHT, 0);
// Switch to Model View Matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, textures[1]);
glBegin(GL_QUADS);
//sopra
glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)((coord[6].x)),(GLfloat)(coord[6].y));
glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)((coord[5].x)),(GLfloat)(coord[5].y));
glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)((coord[4].x)),(GLfloat)(coord[4].y));
glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)((coord[7].x)),(GLfloat)(coord[7].y));
glEnd();
I do the same for the other sides of the cube.
The order in which I render textures is:
bottom (ground) side
up side
behind side
front side
left side
right side
What is wrong, or what am I missing? Or maybe a 3D cube cannot be created with 2D coordinates (glVertex2f(...))?
Thanks for your help!
You can't create a cube with 2D coordinates. The sides overlap because they all lie in the same plane. A cube lives in 3D space, so each vertex needs three coordinates: x, y, and z.
So try using:
glVertex3f(x, y, z);
and use appropriate z values depending on where you want each face. Note also that glDepthFunc(GL_ALWAYS) in your initialization makes every fragment pass the depth test regardless of depth; use GL_LESS (or GL_LEQUAL) so nearer faces actually occlude farther ones.
For the texture you can still use:
glTexCoord2f(x, y);
since textures are two-dimensional.
If you are still confused about what to use for your coordinates I suggest you read this to help you understand 3d space in openGL:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
I'm trying to draw a 2D character sprite on top of a 2D tilemap, but when I draw the character he's got odd stuff behind him. This isn't in the sprite itself, so I think it's the blending.
This is how my openGL is set up:
void InitGL(int Width, int Height) // We call this right after our OpenGL window is created.
{
glViewport(0, 0, Width, Height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // This Will Clear The Background Color To Black
glClearDepth(1.0); // Depth Buffer Clear Value
glDepthFunc(GL_LESS); // The Type Of Depth Test To Do
glDisable(GL_DEPTH_TEST); // Disables Depth Testing
//glShadeModel(GL_SMOOTH); // Enables Smooth Color Shading
glEnable(GL_TEXTURE_2D); // Enable Texture Mapping ( NEW )
glShadeModel(GL_FLAT);
glMatrixMode(GL_PROJECTION);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE , GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glAlphaFunc(GL_GREATER, 0.5f);
glMatrixMode(GL_PROJECTION);//configuring projection matrix now
glLoadIdentity();//reset matrix
glOrtho(0, Width, 0, Height, 0.0f, 100.0f);//set a 2d projection matrix
}
How should I set this up to work properly (i.e. draw the sprite without odd stuff behind him)?
This is what I am talking about: http://i.stack.imgur.com/cmotJ.png
PS: I need to be able to put transparent/semi-transparent images on top of each other and have what's behind them visible too.
Does your sprite have premultiplied alpha? Your glBlendFunc setup is a little unusual; if your sprite does not have premultiplied alpha, that combination could definitely be causing the issue.