Projecting image with OpenGL and antialiasing - C++

In my work I overlay part of a captured frame with an image. I open my webcam with OpenCV, transform the captured frame into a texture, and display it in a GLUT window. I also overlay part of this texture with this image:
I do this in real time, and the result is:
As you can see, the edges of the projected image are jagged. I think it is an aliasing problem, but I don't know how to do antialiasing with OpenGL. I've searched the web, but I didn't find a good solution to my problem.
In my "calculate" function I transform the cv::Mat image into a texture using the following code:
GLvoid calculate() {
    ...
    ...
    cvtColor(image, image, CV_BGR2RGB);                // OpenCV stores BGR; OpenGL expects RGB
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             // cv::Mat rows are tightly packed
    glBindTexture(GL_TEXTURE_2D, textures[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    //glTexImage2D(GL_TEXTURE_2D, 0, 4, image.cols, image.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, image.data);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, image.cols, image.rows, GL_RGB, GL_UNSIGNED_BYTE, image.data);
}
and I show the result using this code:
GLvoid Show(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    // Projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, WIDTH, HEIGHT, 0);
    // Model-view matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    ...
    ...
    glBindTexture(GL_TEXTURE_2D, textures[1]);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)(coord[3].x), (GLfloat)(coord[3].y));
    glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)(coord[0].x), (GLfloat)(coord[0].y));
    glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)(coord[1].x), (GLfloat)(coord[1].y));
    glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)(coord[2].x), (GLfloat)(coord[2].y));
    glEnd();
    }   // closes a block (e.g. an if) opened in the code elided above
    glFlush();
    glutSwapBuffers();
}
In the initialization function I write this:
GLvoid Init() {
    glGenTextures(2, textures);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_DONT_CARE);
    glDisable(GL_DEPTH_TEST);
}
but it doesn't work...
I work on Win7 x64, with OpenGL 4.0 and GLUT 3.7. My video card is an NVIDIA GeForce GT 630. I also enabled antialiasing from the NVIDIA control panel, but nothing changed.
Does anyone know how to help me?

I solved my problem! I used GLFW instead of GLUT, as @Michael IV suggested!
In order to do antialiasing with GLFW I used this line of code:
glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);
The result now is very good, as you can see in the following image.
Thanks for your help!

First, I wonder why you are using OpenGL 4.0 to work with the fixed (deprecated) pipeline...
But let's get to the problem. What you need is MSAA. I am not sure that enabling it via the control panel will always do the trick; usually it is done inside the code.
Unfortunately, you chose GLUT, which has no option to set the hardware MSAA level. If you want to be able to do so, switch to GLFW. Another option is to do it manually, but that implies using custom FBOs. In such a scenario you create an FBO with a multisampled attachment, setting the MSAA level when you create it (you can also apply custom multisampling algorithms in the fragment shader if you wish).
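As a rough sketch of that manual route (width/height are assumed variables; a depth attachment is omitted for brevity): render into a multisampled FBO, then resolve it to the default framebuffer with glBlitFramebuffer.
GLuint fbo, colorRb;
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height); // 4x MSAA storage
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
// ... render the scene into fbo ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);  // resolve the samples into the backbuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);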
Here is a thread on this topic.
GLFW allows you to specify the MSAA level on window setup. See the related API.
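For example, with the GLFW 2.x API it is a single hint before opening the window, as in the asker's own solution above (a minimal sketch; GLFW 3.x instead uses glfwWindowHint(GLFW_SAMPLES, 4) before glfwCreateWindow):
#include <GL/glfw.h>

int main() {
    if (!glfwInit()) return -1;
    glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4);  // request 4x MSAA
    // width, height, RGBA bits, depth bits, stencil bits, mode
    if (!glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW)) {
        glfwTerminate();
        return -1;
    }
    glEnable(GL_MULTISAMPLE);  // typically on by default for multisampled contexts
    // ... render loop ...
    glfwTerminate();
    return 0;
}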
MSAA does degrade performance, but how much depends on your hardware and probably on the OpenGL drivers.

Related

Fastest way to draw or blit rgb pixel array to window in SDL2 & OpenGL2 in C++

QUESTION:
How do I draw an RGB pixel image (an array of RGB structs) to the display using SDL2 and OpenGL2 as fast and as efficiently as possible? I tried mapping to a 2D texture and blitting an SDL_Surface... I think the 2D texture is slightly faster, but it uses more of my CPU...
CONTEXT:
I am making my own raytracer in C++ and I have some framework set up so that I can watch the image being raytraced in real time. I am currently using SDL2 for the window and I am displaying my rendered image by mapping it to a 2D texture via OpenGL. I should mention that I am using OpenGL2 for rendering because:
I am on WSL
I am using a GUI library which requires OpenGL (Dear ImGui)
I am currently getting around 55 fps, but drawing the window uses a lot of CPU, which I did not expect. I was wondering if there is a way to display an RGB pixel array faster and reduce the computation/stress on my CPU. I have a 2-core (lol) i7-5500U CPU (with integrated graphics) and I am rendering on my laptop. I am guessing that this is probably the limit of my laptop because it doesn't have a discrete GPU to help out, but still it is better to ask.
I am also a complete beginner at OpenGL, so there is also a chance for improvement in my code; I appreciate any feedback on my implementation.
METHOD:
So I want to detail the way I am showing the real-time rendered image, in pseudo code and C++:
// I use this function to set up my window and OpenGL context - and I set up my textures here
void setup_window__and__image_texture(...args...) {
    // SDL window and OpenGL context creation
    ...
    // Create an SDL_Surface from my rgb pixel array/image
    SDL_Surface* gImage = SDL_CreateRGBSurfaceFrom((void*)rgb_arr, width, height, depth, pitch,
                                                   0x000000ff, 0x0000ff00, 0x00ff0000, 0);
    // Generate and bind a 2D texture from the surface
    GLuint tex_img;
    glGenTextures(1, &tex_img);
    glBindTexture(GL_TEXTURE_2D, tex_img);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, gImage->w, gImage->h, 0, GL_RGB, GL_UNSIGNED_BYTE, gImage->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
// This function is used in my main loop, where I update the image after tracing some more rays in the scene
void render_loop__display_image() {
    // Re-uploads (and reallocates) the full texture every frame
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, gImage->w, gImage->h, 0, GL_RGB, GL_UNSIGNED_BYTE, gImage->pixels);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glColor4f(1.0, 1.0, 1.0, 1.0); // don't use special coloring
    // note: texture coordinates normally span [0, 1]
    glTexCoord2f(-1.0f, -1.0f); glVertex2d( display_width*2,  display_height*2);
    glTexCoord2f( 1.0f, -1.0f); glVertex2d(-display_width*2,  display_height*2);
    glTexCoord2f( 1.0f,  1.0f); glVertex2d(-display_width*2, -display_height*2);
    glTexCoord2f(-1.0f,  1.0f); glVertex2d( display_width*2, -display_height*2);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
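For reference, a sketch of an alternative upload path (assuming the same gImage and tex_img as above): allocate once with glTexImage2D, then update each frame with glTexSubImage2D, so the driver keeps the existing storage instead of reallocating it per frame.
// Setup (once): allocate storage without uploading data yet
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, gImage->w, gImage->h, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Per frame: overwrite the pixels in place
glBindTexture(GL_TEXTURE_2D, tex_img);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, gImage->w, gImage->h, GL_RGB, GL_UNSIGNED_BYTE, gImage->pixels);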
I know I can blit to the screen via SDL, and I've tried that, but it doesn't work nicely with OpenGL: I end up getting some gnarly screen tearing and flickering.
So is this the best I can do in terms of speed/efficiency?

How to use OpenGL and DevIL to get user drawing pixels

I need to load an image, display it, let the user draw some strokes on it, and then read back the drawn pixels.
I know OpenGL can load and display a texture image read by DevIL. But I am not sure how to use OpenGL to get the user's drawn pixels from the loaded texture.
First off, note that a lot of this code is deprecated, but it is easier to understand from plain code snippets. I'm not doing everything for you, but I hope to get you started by providing the basic workflow.
There are a few things you need to do to get the result you are looking for.
Firstly you have to load your texture in video memory. This is done with:
glGenTextures(1, texture_id); //generate a texture object
glBindTexture(GL_TEXTURE_2D, texture_id); //bind the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //set filters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set filters
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, GL_RGB, GL_UNSIGNED_BYTE, original_image_data); //create the actual texture in video ram
When this succeeds you can draw your texture with:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
//set to ortographic projection
glOrtho(0.0, window_width, 0.0, window_height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,texture_id);
glBegin(GL_QUADS);
glTexCoord2f(0, 1); glVertex2f(-1.0f, 1.0f);
glTexCoord2f(1, 1); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(1, 0); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(0, 0); glVertex2f(-1.0f, -1.0f);
glEnd();
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
The next thing you will need to do is capture your user's mouse input. If you are on Windows you can use the window procedure callback and look for the mouse messages (WM_MOUSEMOVE, WM_LBUTTONDOWN, WM_LBUTTONUP). If you use a library for window management, then the library will probably provide functionality for keyboard and mouse input.
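A rough Win32 sketch of that callback (hypothetical; the dragging flag and mouse_* variables are assumed globals shared with the drawing code below):
#include <windows.h>
#include <windowsx.h>  // GET_X_LPARAM / GET_Y_LPARAM

bool dragging = false;
int mouse_x_start, mouse_y_start, mouse_x_end, mouse_y_end;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_LBUTTONDOWN:              // stroke begins
        dragging = true;
        mouse_x_start = GET_X_LPARAM(lParam);
        mouse_y_start = GET_Y_LPARAM(lParam);
        return 0;
    case WM_MOUSEMOVE:                // stroke continues while the button is held
        if (dragging) {
            mouse_x_end = GET_X_LPARAM(lParam);
            mouse_y_end = GET_Y_LPARAM(lParam);
            // draw the segment (see the GL_LINES snippet below), then chain the next one
            mouse_x_start = mouse_x_end;
            mouse_y_start = mouse_y_end;
        }
        return 0;
    case WM_LBUTTONUP:                // stroke ends
        dragging = false;
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}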
Now that you have the mouse input, you should draw a line on the screen every time a user moves the mouse while holding down the button:
glLineWidth(2.5);
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_LINES);
glVertex2f(mouse_x_start, mouse_y_start);
glVertex2f(mouse_x_end, mouse_y_end);
glEnd();
glColor3f(1.0, 1.0, 1.0);
When all of the above goes well, you should see your texture on the screen and a red line if you hold the mouse button and move the mouse. You are nearly there. The last thing that needs to be done is read the pixels back. You can do this with glReadPixels() like this:
glReadPixels(0, 0, window_width, window_height, GL_RGB, GL_UNSIGNED_BYTE, new_image_data);
You now have a byte array with the user's strokes on it. I would highly recommend writing your own code for this process, because the code I used is deprecated, and should only be used when targeting older platforms. The workflow should remain the same though. I hope this is enough to get you started. Good luck!
I assume you are working on a plain 2D app.
The idea is that if performance isn't your concern, you can do everything in software by directly manipulating pixel data and drawing the image with your graphics library of choice. I recommend the Simple DirectMedia Layer (SDL) library. It also has a companion library, SDL_image, that can load a good assortment of formats.
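A minimal sketch of that software route, assuming SDL2 with SDL_image, a 32-bit surface, and a hypothetical file name:
#include <SDL.h>
#include <SDL_image.h>

SDL_Surface* img = IMG_Load("drawing.png");  // hypothetical file, loaded via SDL_image
SDL_LockSurface(img);                        // gain safe access to img->pixels
Uint32* px = (Uint32*)img->pixels;
int x = 10, y = 20;                          // plot one red pixel at (x, y)
px[y * (img->pitch / sizeof(Uint32)) + x] = SDL_MapRGB(img->format, 255, 0, 0);
SDL_UnlockSurface(img);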
An approach like this works until you mess with big or multiple textures. If you need the GPU horsepower for real-time framerates, then you must fight your way through framebuffer objects (FBOs), but beware! This basically means "do everything you can inside the pixel shaders" and limiting calls like glReadPixels/glTexImage2D and co. as much as you can.

Color dodge in OpenGL

I need to render an image on top of a background in OpenGL, and I'm trying to get the same result as Photoshop's "Color Dodge", but I'm not able to.
Right now I'm doing:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
// background
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, background);
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0, 0.0);
...
glEnd();
glDisable(GL_TEXTURE_2D);
// image
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, image);
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0, 0.0);
...
glEnd();
glDisable(GL_TEXTURE_2D);
The background is a tga with no alpha channel. The image is a tga with alpha channel.
This renders the image with alpha on the background but way too bright.
I read that it should be as easy as:
glBlendFunc(GL_ONE, GL_ONE);
But the image, despite having an alpha channel, gets rendered as a white square.
Clearly I'm doing something wrong.
You're not going to be able to use blending to get the equivalent of the Photoshop "Color Dodge" effect. It is a more complicated mathematical function than can be expressed using the standard blending equations, so you're going to have to do the blend programmatically to make it work.
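For reference, Photoshop's color dodge is roughly result = base / (1 - blend), clamped to 1. A hedged fragment-shader sketch of that math (u_base, u_blend, and v_uv are assumed names; the background must first be rendered to a texture):
const char* dodgeFrag = R"(
    uniform sampler2D u_base;   // background already rendered to a texture
    uniform sampler2D u_blend;  // image with alpha to dodge on top
    varying vec2 v_uv;
    void main() {
        vec3 base  = texture2D(u_base,  v_uv).rgb;
        vec4 src   = texture2D(u_blend, v_uv);
        // color dodge: base / (1 - src), guarded against division by zero
        vec3 dodge = min(base / max(vec3(1.0) - src.rgb, vec3(1e-4)), vec3(1.0));
        gl_FragColor = vec4(mix(base, dodge, src.a), 1.0);  // respect the source alpha
    }
)";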
There is a way to approximate color dodge with a blend func. It behaves like Photoshop's blend mode, only darker than Photoshop's "Color Dodge". You have to use this function:
glBlendFunc(GL_DST_COLOR, GL_ONE);

Keeping original colors

I'm trying to place a texture (with alpha) on another texture in OpenGL. It draws without any problems, but not as I wanted: my nearest image's color is strongly affected by the background image (the furthest texture), so it appears orange instead of red.
Does anyone know a way of blending (or getting rid of alpha) that will resolve this issue?
Blending initialization:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Drawing scene:
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
//furthest image (background)
glBindTexture(GL_TEXTURE_2D, texture[1]);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(500, 0);
glTexCoord2f(1, 1); glVertex2f(500, 500);
glTexCoord2f(0, 1); glVertex2f(0, 500);
glEnd();
//nearest image (appears orange, should be red)
glBindTexture(GL_TEXTURE_2D, texture[0]);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(100, 100);
glTexCoord2f(1, 0); glVertex2f(300, 100);
glTexCoord2f(1, 1); glVertex2f(300, 300);
glTexCoord2f(0, 1); glVertex2f(100, 300);
glEnd();
glutSwapBuffers();
EDIT.
Here's an image depicting how it looks:
Here's an image of how it should look:
I believe what you want is 'alpha testing', not blending. See:
glEnable(GL_ALPHA_TEST)
glAlphaFunc()
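A minimal fixed-function setup (the 0.5 threshold is an assumed choice) that discards fragments failing the test:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);  // keep only fragments with alpha > 0.5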
If you want to leave blending enabled, you should use
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This uses the source color fully where the source alpha is 1 and mixes it with the background in proportion to the alpha elsewhere. Currently your function adds the source color to the background color.
If you don't want any mixing of colors, then the alpha test is the better way to go, as it uses fewer resources than blending.
This blend func
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
is the cause of your problems. The GL_ONE destination factor means that whatever is already in the framebuffer will be added to the incoming colour regardless of the alpha value.
In your case your red texture gets added to the greenish background. And since red + greenish = orange, this is what you get.
What you want is mask the previous content in the destination framebuffer with your alpha channel, which is done using
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Also remember that OpenGL state is meant to be set and reset on demand, so when drawing other textures you might need different blending settings and a different blend func.
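For instance, a sketch of that set-and-reset pattern around the transparent quad:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard "over" compositing
// ... draw the nearest (red) textured quad ...
glDisable(GL_BLEND);                                // opaque geometry doesn't need blending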
With the help of you all, I managed to resolve the issue by:
resaving my texture as PNG (instead of BMP)
changing the blending function to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Thanks to all contributors :)

how to use glDrawPixels to render a picture as a background

Here is the thing: I want to load a picture as a background filling the whole viewport. This background should always face the camera no matter where the camera points.
First, I naturally thought to use a texture as the background; my code is below:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0,1,0,1,0,1);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, myimage.GetID());
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
Believe me, myimage is a CIMAGE class that can load pics into textures; it works well.
However, for some unknown reason, my application cannot map the texture onto the rectangle. (I described this problem here: click.) As a result, I can only see a rectangle frame around my viewport.
So I figured out another solution.
I use glDrawPixels instead of a texture. My code is below:
glRasterPos2i(0, 0);
glDrawPixels(myimage.GetWidth(), myimage.GetHeight(), (myimage.GetBPP() == 24)?GL_RGB:GL_RGBA, GL_UNSIGNED_BYTE,
myimage.GetData());
The picture appeared! However, it didn't always face my camera; it only appears from a particular direction. You know, like an object in the scene, not a background that always faces the camera.
So does anybody know how to use glDrawPixels to implement a background?
By the way, I don't think this background is an object placed in the 3D scene, so billboards may not be my solution. Again, this background fills the whole viewport and always faces the camera.
One of the reasons your texture loading might not work is that the texture might not have power-of-two dimensions. Try a square 256x256 texture (or the like) to see if this is the problem. Look here for more info on rectangle textures.
Coming back to your background issue - the right way to do this would be to:
Set up an orthographic projection/viewport that fills the entire screen.
glViewport(0,0,nw,nh);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,1,0,1,0,1);
glMatrixMode(GL_MODELVIEW);
Disable depth testing
Draw the fullscreen quad with the texture/texture rectangle you have loaded.
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
Set up your regular projection/modelview and continue.
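Putting the steps together in one fixed-function sketch (background_tex and nw/nh are assumed names):
glViewport(0, 0, nw, nh);
glMatrixMode(GL_PROJECTION);
glPushMatrix(); glLoadIdentity();
glOrtho(0, 1, 0, 1, 0, 1);
glMatrixMode(GL_MODELVIEW);
glPushMatrix(); glLoadIdentity();
glDisable(GL_DEPTH_TEST);                       // the background must not occlude the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, background_tex);   // your loaded background texture
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);  glPopMatrix();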
Hope this helps!