How to use glDrawPixels to render a picture as a background - OpenGL

Here is the thing: I want to load a picture as a background that fills the whole viewport. This background should always face the camera, no matter where the camera is pointing.
My first thought was naturally to use a texture as the background; my code is below:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0,1,0,1,0,1);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, myimage.GetID());
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
Believe me, myimage is a CIMAGE class that loads pictures into textures; it works well.
However, for some unknown reason, my application cannot apply a texture to a rectangle. (I described this problem here.) As a result, I can only see a rectangular frame around my viewport.
So I figured out another solution.
I used glDrawPixels instead of a texture. My code is below:
glRasterPos2i(0, 0);
glDrawPixels(myimage.GetWidth(), myimage.GetHeight(), (myimage.GetBPP() == 24)?GL_RGB:GL_RGBA, GL_UNSIGNED_BYTE,
myimage.GetData());
The picture appeared! However, it didn't always face my camera; it only appears from one particular direction. You know, like an object in the scene, not a background that always faces the camera.
So, does anybody know how to use glDrawPixels to implement a background?
By the way, I think this background is not an object placed in the 3D scene, so billboards may not be my solution. Again, this background fills the whole viewport and always faces the camera.

One of the reasons your texture loading might not work is that it might not have power-of-two dimensions. Try a square 256x256 texture (or the like) to see if this is the problem. Look here for more info on Rectangle Textures.
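If power-of-two dimensions turn out to be the issue, a quick runtime check looks roughly like this (a sketch for legacy GL; needs <string.h>):
const char *exts = (const char*)glGetString(GL_EXTENSIONS);
if (exts == NULL || strstr(exts, "GL_ARB_texture_non_power_of_two") == NULL) {
    // no NPOT support: pad or rescale the image to power-of-two
    // dimensions (e.g. 256x256) before calling glTexImage2D
}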
Coming back to your background issue, the right way to do this would be to:
Set up an orthographic projection/viewport that fills the entire screen.
glViewport(0,0,nw,nh);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,1,0,1,0,1);
glMatrixMode(GL_MODELVIEW);
Disable depth testing
Draw the fullscreen quad with the texture/texture rectangle you have loaded.
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
Set up your regular projection/modelview and continue.
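Putting those steps together, a minimal sketch (background_tex is an assumed name for your loaded texture id; nw/nh are the window dimensions from above):
void draw_background(GLuint background_tex, int nw, int nh)
{
    glViewport(0, 0, nw, nh);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);   // the background must not occlude the scene
    glDepthMask(GL_FALSE);      // and must not write depth values either
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, background_tex);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
Call this at the start of each frame, before the rest of the scene is rendered.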
Hope this helps!

Related

Drawing a primitive (GL_QUADS) on top of a 2D texture - no quad rendered, texture colour changed

I am trying to draw a 2D scene with a texture as the background and then (as the program flows and does computations) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on the background image.
I have looked at several resources and SO questions to try to get the information I need to accomplish the task (e.g. this tutorial for first primitive rendering, the SOIL "example" for texture loading).
My understanding was that the texture would be drawn at Z=0, and the quad as well. The quad would thus "cover" a portion of the texture - be drawn on it - which is what I want. Instead, the result of my display function is my initial texture in black/blue colour, not my texture (in its original colour) with a blue quad drawn on it. This is the display function code:
void display (void) {
    glClearColor (0.0,0.0,0.0,1.0);
    glClear (GL_COLOR_BUFFER_BIT);
    // background render
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, texture );
    glBegin (GL_QUADS);
    glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
    glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
    glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
    glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
    glEnd(); // here I get the texture properly displayed in window
    glDisable(GL_TEXTURE_2D);
    // foreground render
    glLoadIdentity();
    gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glColor3f(0.0, 0.0, 1.0);
    glBegin (GL_QUADS);
    glVertex2d(400.0,100.0);
    glVertex2d(400.0,500.0);
    glVertex2d(700.0,100.0);
    glVertex2d(700.0,500.0);
    glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
    glutSwapBuffers();
}
I have already tried many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts have failed. For example, I tried pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing parameters in gluPerspective, etc.
How do I have to modify my code so it renders the quad properly on top of the background texture image of my 2D scene? Being a beginner, extra explanations of the modifications (as well as of the mistakes in the present code) and of the principles in general will be greatly appreciated.
EDIT - after the answer by Reto Koradi:
I have tried to follow the instructions, and the modified code now looks like this:
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly; it looks something like this.
Besides that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen won't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 will be visible, which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
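To make the 2D route concrete, here is a sketch of the foreground setup (reusing the same ortho transform as the background; near/far of -1.0/1.0 keeps the quad at z = 0.0 safely inside the view volume):
// foreground render, 2D variant: no gluPerspective()
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1024.0, 512.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(0.0f, 0.0f, 1.0f);
// ... draw the quad (see the corrected vertex order below) ...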
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended, because it at least gets you one step closer to using features that are not deprecated) or swap the order of the last two vertices:
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,500.0);
glVertex2d(700.0,100.0);
glEnd();
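For reference, the GL_TRIANGLE_STRIP variant uses a zig-zag vertex order, which happens to be exactly the order the original code already had:
glBegin(GL_TRIANGLE_STRIP);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,100.0);
glVertex2d(700.0,500.0);
glEnd();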

How to use OpenGL and DevIL to get user drawing pixels

I need to load an image, display it, let the user draw some strokes on it, and get those drawn pixels.
I know OpenGL can load a texture image read by DevIL and display it. But I am not sure how to use OpenGL to get the user's drawn pixels from the loaded texture.
First off, note that a lot of this code is deprecated, but it is easier to understand from plain code snippets. I'm not doing everything for you, but I hope to get you started by providing the basic workflow.
There are a few things you need to do to get the result you are looking for.
Firstly, you have to load your texture into video memory. This is done with:
glGenTextures(1, &texture_id); //generate a texture object (texture_id is a GLuint)
glBindTexture(GL_TEXTURE_2D, texture_id); //bind the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //set filters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set filters
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, GL_RGB, GL_UNSIGNED_BYTE, original_image_data); //create the actual texture in video ram
When this succeeds you can draw your texture with:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
//set to orthographic projection
glOrtho(0.0, window_width, 0.0, window_height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture_id);
glBegin(GL_QUADS);
//vertices in window coordinates, matching the ortho projection above
glTexCoord2f(0, 1); glVertex2f(0.0f, (GLfloat)window_height);
glTexCoord2f(1, 1); glVertex2f((GLfloat)window_width, (GLfloat)window_height);
glTexCoord2f(1, 0); glVertex2f((GLfloat)window_width, 0.0f);
glTexCoord2f(0, 0); glVertex2f(0.0f, 0.0f);
glEnd();
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
The next thing you need to do is capture your user's mouse input. If you are on Windows you can use the window procedure callback and look for mouse messages such as WM_MOUSEMOVE and WM_LBUTTONDOWN. If you use a library for window management, the library will probably provide functionality for keyboard and mouse input.
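If you use GLUT like the other snippets in this thread, a minimal input sketch could look like this (window_height and the mouse_* variables are assumed globals shared with the drawing code below):
int mouse_x_start, mouse_y_start, mouse_x_end, mouse_y_end;
int drawing = 0;

void on_mouse_button(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON) {
        drawing = (state == GLUT_DOWN);
        mouse_x_start = x;
        mouse_y_start = window_height - y; // GLUT reports y top-down, GL draws bottom-up
    }
}

void on_mouse_motion(int x, int y) {
    if (!drawing) return;
    mouse_x_end = x;
    mouse_y_end = window_height - y;
    glutPostRedisplay(); // the display callback draws the segment, then copies
                         // mouse_*_end into mouse_*_start for the next segment
}

// registered once at startup, e.g. in main():
// glutMouseFunc(on_mouse_button);
// glutMotionFunc(on_mouse_motion);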
Now that you have the mouse input, you should draw a line on the screen every time the user moves the mouse while holding down the button:
glLineWidth(2.5);
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_LINES);
glVertex2f(mouse_x_start, mouse_y_start);
glVertex2f(mouse_x_end, mouse_y_end);
glEnd();
glColor3f(1.0, 1.0, 1.0);
When all of the above goes well, you should see your texture on the screen, and a red line if you hold the mouse button and move the mouse. You are nearly there; the last thing that needs to be done is to read back the pixels. You can do this with glReadPixels() like this:
glReadPixels(0, 0, window_width, window_height, GL_RGB, GL_UNSIGNED_BYTE, new_image_data);
You now have a byte array with the user's strokes in it. I would highly recommend writing your own code for this process, because the code I used is deprecated and should only be used when targeting older platforms. The workflow should remain the same, though. I hope this is enough to get you started. Good luck!
I assume you are working on a plain 2D app.
The idea is that if performance isn't your concern, you may consider doing everything in software by directly manipulating pixel data and drawing the image with your graphics library of choice. I recommend the Simple DirectMedia Layer (SDL) library. It also has a sub-library called SDL_image that can load a good assortment of formats.
An approach like this works until you mess with big/multiple textures. If you need the GPU horsepower for realtime framerates, then you must fight your way through framebuffer objects (FBOs), but beware! This basically means "do everything you can inside the pixel shaders" and limiting calls like glReadPixels/glTexImage2D and co. as much as possible.
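If you do go the FBO route, the basic setup looks roughly like this (a sketch; width and height are assumed to be your canvas size):
GLuint fbo = 0, color_tex = 0;
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}
// draw the image and the strokes into the FBO here, then either sample
// color_tex directly or read it back once at the end of the interaction
glBindFramebuffer(GL_FRAMEBUFFER, 0);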

Projecting an image with OpenGL and antialiasing

In my work I overlap part of a captured frame with an image. I open my webcam with OpenCV, transform the captured frame into a texture, and display it in a GLUT window. I also overlap part of this texture with this image:
I do this in real time, and the result is:
As you can see, the edges of the projected image are inaccurate. I think it is an aliasing problem, but I don't know how to do antialiasing with OpenGL. I've looked on the web, but I didn't find a good solution to my problem.
In my "calculate" function I transform the Mat image into a texture using the following code:
GLvoid calculate(){
...
...
cvtColor(image, image, CV_BGR2RGB);
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
//glTexImage2D(GL_TEXTURE_2D, 0, 4,image.cols, image.rows, 0, GL_RGB,GL_UNSIGNED_BYTE, image.data);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, image.cols, image.rows, GL_RGB, GL_UNSIGNED_BYTE, image.data);
}
and I show the result using this code:
GLvoid Show(void) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
// Matrice di proiezione
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WIDTH, HEIGHT, 0);
// Matrice model view
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
...
...
glBindTexture(GL_TEXTURE_2D, textures[1]);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)((coord[3].x)),(GLfloat)(coord[3].y));
glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)((coord[0].x)),(GLfloat)(coord[0].y));
glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)((coord[1].x)),(GLfloat)(coord[1].y));
glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)((coord[2].x)),(GLfloat)(coord[2].y));
glEnd();
} // note: this brace pairs with a block opened in the elided code above
glFlush();
glutSwapBuffers();
}
In initialization function I write this:
GLvoid Init() {
glGenTextures(2, textures);
glClearColor (0.0, 0.0, 0.0, 0.0);
glEnable (GL_POLYGON_SMOOTH);
glHint (GL_POLYGON_SMOOTH_HINT, GL_DONT_CARE);
glDisable(GL_DEPTH_TEST);
}
but it doesn't work...
I work on Win7 x64, with OpenGL 4.0 and GLUT 3.7. My video card is an NVIDIA GeForce GT 630. I also enabled antialiasing in the NVIDIA control panel, but nothing changed.
Does anyone know how to help me?
I solved my problem! I used GLFW instead of GLUT, as @Michael IV suggested!
In order to do antialiasing with GLFW I used this line of code:
glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
and the result now is very good, as you can see in the following image.
Thanks for your help!
First, I wonder why you are using OpenGL 4.0 to work with the fixed (deprecated) pipeline...
But let's get to the problem. What you need is MSAA. I am not sure that enabling it via the control panel will always do the trick; usually it is done inside the code.
Unfortunately for you, you selected GLUT, which has no option to set the hardware MSAA level. If you want to be able to do so, switch to GLFW. Another option is to do it manually, but that implies using custom FBOs. In such a scenario you can create an FBO with an MSAA texture attachment, setting the MSAA level for the texture (you can also apply custom multisampling algorithms in the fragment shader if you wish).
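For the manual route, a rough sketch with a multisampled renderbuffer that is resolved into the default framebuffer at the end of the frame (width, height, and the 4x sample count are assumed values; an MSAA texture attachment works similarly):
GLuint msaa_fbo = 0, msaa_rbo = 0;
glGenRenderbuffers(1, &msaa_rbo);
glBindRenderbuffer(GL_RENDERBUFFER, msaa_rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaa_rbo);
// ... render the scene into msaa_fbo ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST); // resolve the samples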
Here is a thread on this topic.
GLFW allows you to specify the MSAA level on window setup. See the related API.
MSAA does degrade performance, but how much depends on your hardware and probably the OpenGL drivers.
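For reference, the asker's glfwOpenWindowHint call is the GLFW 2.x API; in GLFW 3 the equivalent setup would look roughly like this:
glfwInit();
glfwWindowHint(GLFW_SAMPLES, 4); // request a 4x MSAA framebuffer
GLFWwindow* window = glfwCreateWindow(640, 480, "MSAA demo", NULL, NULL);
glfwMakeContextCurrent(window);
glEnable(GL_MULTISAMPLE); // usually on by default, but make it explicit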

Keeping original colors

I'm trying to place a texture (with alpha) on top of another texture in OpenGL. I can do it without any problems, but the result is not what I wanted: my nearest image's colour is strongly affected by the background image (the furthest texture), so it appears orange instead of red.
Does anyone know a way of blending (or getting rid of alpha) that will resolve this issue?
Blending initialization:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Drawing scene:
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
//furthest image (background)
glBindTexture(GL_TEXTURE_2D, texture[1]);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(500, 0);
glTexCoord2f(1, 1); glVertex2f(500, 500);
glTexCoord2f(0, 1); glVertex2f(0, 500);
glEnd();
//nearest image (appears orange, should be red)
glBindTexture(GL_TEXTURE_2D, texture[0]);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(100, 100);
glTexCoord2f(1, 0); glVertex2f(300, 100);
glTexCoord2f(1, 1); glVertex2f(300, 300);
glTexCoord2f(0, 1); glVertex2f(100, 300);
glEnd();
glutSwapBuffers();
EDIT.
Here's an image depicting how it looks:
Here's an image of how it should look:
I believe what you want is 'alpha testing', not blending. See
glEnable(GL_ALPHA_TEST)
glAlphaFunc()
If you want to leave blending enabled, you should use
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This uses the full source colour where the source alpha is 1 and blends source and destination proportionally elsewhere. Currently your function adds the source colour to the background colour.
If you don't want any mixing of colours, then alpha test is the better way to go, as it uses fewer resources than blending.
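For example, a minimal alpha-test setup might look like this (the 0.5 threshold is an arbitrary choice; fragments with alpha below it are discarded entirely):
glDisable(GL_BLEND);           // no blending needed when alpha testing alone
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f); // keep only fragments with alpha > 0.5
// ... draw the nearest (red) textured quad ...
glDisable(GL_ALPHA_TEST);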
This blend func
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
is the cause of your problems. The GL_ONE for the destination factor means that whatever is already present in the framebuffer will be added to the incoming colour, regardless of the alpha value.
In your case your red texture gets added to the greenish background, and since red + greenish = orange, this is what you get.
What you want is to mask the previous content of the destination framebuffer with your alpha channel, which is done using
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Also remember that OpenGL state is meant to be set and reset on demand, so when drawing other textures you might need different blending settings and blend funcs.
With the help of you all, I managed to resolve the issue by:
resaving my texture as PNG (instead of BMP)
changing the blending function to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Thanks to all contributors :)

OpenGL: issue with masking

I'm working on creating a hole in a wall using masking in OpenGL. My code is quite simple, like this:
//Draw the mask
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR,GL_ZERO);
glBindTexture(GL_TEXTURE_2D, texture[3]);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex3f(-20,40,-20);
glTexCoord2d(0,1); glVertex3f(-20,40,40);
glTexCoord2d(1,1); glVertex3f(20,40,40);
glTexCoord2d(1,0); glVertex3f(20,40,-20);
glEnd();
//Draw the Texture
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, texture[2]);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex3f(-20,40,-20);
glTexCoord2d(0,1); glVertex3f(-20,40,40);
glTexCoord2d(1,1); glVertex3f(20,40,40);
glTexCoord2d(1,0); glVertex3f(20,40,-20);
glEnd();
The problem is, I got the hole in the wall correctly, but it's semi-transparent: I'm getting a black shade over it, and I can also see through it.
Here's a photo of what I'm getting:
Any suggestions?
SOLVED :D
It was a problem with the surfaces' normals; once I set the normals correctly, the black shade faded out.
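For anyone hitting the same issue, the fix amounts to supplying a correct normal before the quad's vertices, along these lines (a sketch; the direction shown assumes the wall at y = 40 is lit and viewed from the negative-y side):
glBegin(GL_QUADS);
glNormal3f(0.0f, -1.0f, 0.0f); // one normal for the whole flat quad
glTexCoord2d(0,0); glVertex3f(-20,40,-20);
glTexCoord2d(0,1); glVertex3f(-20,40,40);
glTexCoord2d(1,1); glVertex3f(20,40,40);
glTexCoord2d(1,0); glVertex3f(20,40,-20);
glEnd();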