I want to detect a marker with OpenCV and then overlay it with an image using OpenGL. The first part is fine (the marker detection works perfectly), but I have some problems with the second.
The marker is:
the image is:
but the result is this:
The code that generates the texture from the image is:
GLuint *textures = new GLuint[2];
glGenTextures(2, textures);
Mat image = imread("Immagine.png");
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, 4,image.cols, image.rows, 0, GL_RGB,GL_UNSIGNED_BYTE, image.data);
and to display the texture I use:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WIDTH, HEIGHT, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, textures[1]);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)(coord[1].x),(GLfloat)(coord[1].y));
glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)(coord[2].x),(GLfloat)(coord[2].y));
glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)(coord[3].x),(GLfloat)(coord[3].y));
glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)(coord[0].x),(GLfloat)(coord[0].y));
glEnd();
I don't know why the textured image looks so strange. I also use the same code to display captured webcam frames in the OpenGL window, and there it works without problems.
I've also noticed that if I use the same image without the numbers at the corners, it works correctly (even though it isn't at the same place/coordinates as the marker).
Does anyone have any idea?
This is a padding/alignment issue.
You're loading a format which has different padding requirements to GL (which by default expects rows of pixels to be padded to a multiple of 4 bytes).
Possible fixes for this are:
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1); see the sketch after this list
Change the dimensions of your texture so that each row is already a multiple of 4 bytes (then nothing else has to change)
Change the loading of your texture, such that the padding is consistent with what GL expects
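As an example of the first option, here is a minimal sketch of the upload code from the question with the unpack alignment set explicitly (and, assuming the image comes from OpenCV's imread as in the question, with GL_BGR as the pixel format, since imread returns BGR data):

Mat image = imread("Immagine.png");
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Rows of a Mat produced by imread are normally tightly packed
// (image.isContinuous() can be used to verify), so tell GL not to
// expect 4-byte row alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// GL_RGB8 is an explicit 3-channel internal format (the question's "4" asks for RGBA);
// GL_BGR matches OpenCV's channel order and avoids a red/blue swap.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.cols, image.rows, 0,
             GL_BGR, GL_UNSIGNED_BYTE, image.data);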
Related
Currently I'm trying to render multiple passes with different shaders in a simple OpenGL application. Here's my (simplified) code:
void InitScene()
{
glViewport(0, 0, mWindowWidth, mWindowHeight);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, mWindowWidth, mWindowHeight, 0, -1, 1);
CreateFrameBuffer(mWindowWidth, mWindowHeight); // sets mFramebufferName
}
void DrawScene()
{
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
if(drawDirectlyToScreen)
{
// This works fine, image will fill the whole screen
// Directly draw to the screen
DrawFullScreenQuad();
}
else
{
// This does not work. The image from the first pass only covers a small square
// Draw to frame buffer instead of screen
glBindFramebuffer(GL_FRAMEBUFFER, mFramebufferName);
DrawFullScreenQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Get ready for second pass
BindFrameBufferTextureAndActivateAnotherShader();
// Now draw to the screen
DrawFullScreenQuad();
}
glPopMatrix();
}
void DrawFullScreenQuad()
{
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 1.0f); glVertex3f(0.0f, mWindowHeight, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(mWindowWidth, mWindowHeight, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(mWindowWidth, 0.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.0f);
glEnd();
}
void CreateFrameBuffer(int width, int height)
{
// Generate and bind the frame buffer
mFramebufferName = 0;
glGenFramebuffers(1, &mFramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, mFramebufferName);
// Create and bind the render texture
glGenTextures(1, &mSecondPassRenderTexture);
glBindTexture(GL_TEXTURE_2D, mSecondPassRenderTexture);
// Give an empty image to OpenGL ( the last "0" )
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Set "mSecondPassRenderTexture" as colour attachement #0
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, mSecondPassRenderTexture, 0);
// Set the list of draw buffers.
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers
}
When rendering only one pass everything is fine. The image covers the whole screen. When rendering with two passes, the resulting image will only cover a small square area in the top left corner of the screen (see the attached images).
The problem seems to come from the first pass. The texture created in that pass is already wrong (i.e. the image is only in the corner, the rest of the texture is black). The second pass then works correctly (i.e. the broken texture is drawn correctly to the whole screen).
So my question is: why does my call to DrawFullScreenQuad() yield different results when
rendering to the screen directly, versus
rendering to a frame buffer (which has the same size as the window)?
As the title says, whenever I enable blending like this:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I cannot draw any texture using immediate mode. It is an RGBA texture. I have confirmed that the image loading and texture generation work correctly: when I "downloaded" the pixels from the GPU to debug this, the alpha values were correct (not all of them were 255, for example). However, the texture simply disappears when I draw it like this with blending enabled:
glColor4ub(255, 255, 255, 0);
_texture->setActive(0);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(_x, _y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(_x + _width, _y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(_x + _width, _y + _height);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(_x, _y + _height);
glEnd();
Where _texture->setActive() simply calls this:
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, m_ID);
Without blending, I get the following result:
But with blending it simply draws nothing (again, the alpha values of the texture are confirmed to be correct!):
When it should look something like this:
What is the issue here?
Update
After applying glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);, I now get the expected output:
I think I found it:
glColor4ub(255, 255, 255, 0);
Assuming you apply the texture with GL_MODULATE, the texture's alpha is multiplied with the "primitive" (vertex) color's alpha. If you set the primitive's alpha to 0, any modulation with the texture ends up with 0 alpha.
I'm not sure what you want to achieve, so I propose doing one of the following (a sketch of both options follows this list):
Apply the texture using GL_REPLACE instead: call glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE). The texture environment is part of your (active) texture unit's configuration. From https://www.opengl.org/sdk/docs/man2/xhtml/glTexEnv.xml:
For OpenGL versions 1.3 and greater, or when the ARB_multitexture
extension is supported, glTexEnv controls the texture environment for
the current active texture unit, selected by glActiveTexture
Set the primitive color to (255, 255, 255, 255)ub, so that modulation leaves the texture's colors and alpha unchanged.
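A minimal sketch of both options, reusing the drawing code from the question (only one of the two is needed):

// Option 1: take color and alpha straight from the texture, ignoring the vertex color.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

// Option 2: keep GL_MODULATE, but make the vertex color fully opaque white,
// so modulation passes the texture's RGBA values through unchanged.
glColor4ub(255, 255, 255, 255);

_texture->setActive(0);
glBegin(GL_QUADS);
// ... glTexCoord2f / glVertex2f calls as in the question ...
glEnd();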
I'm using SDL with OpenGL rendering set up, and I want to draw some text to the screen.
I found this question: Using SDL_ttf with OpenGL which seems to address the same concern, but I think my GL-fu isn't good enough to understand what to do next.
I.e., I have the texture from that other post which, if I understand everything correctly, is an integer ID representing a GL texture that contains the rendered text. How do I actually render this texture onto my screen? (Everything is 2D here, so I just want to render it as flat text on the screen, not onto some fancy surface in a 3D scene or anything.)
Alternatively, is there any easier way of getting this working (i.e., rendering some text via SDL while also being able to render GL shapes)?
Edit: Or, even more generally, is there an easier way to render both text and shapes onto the screen via SDL?
Use SDL_ttf to turn your string into a bitmap; this is just a single function call.
Use the code in the link you posted to turn it into a texture.
Draw a rectangle using said texture. That is, glBindTexture, glBegin, (glTexCoord, glVertex) x4, glEnd. If you have doubts about this step, you had better go through some basic OpenGL tutorials (covering texture mapping, of course) first.
Try this snippet; it renders text with the SDL_ttf (TrueType font) extension library, starting from the bottom left of the screen.
Note the arrangement of the texture coordinates relative to the shape on the screen. The pixels of the text texture start at (0,0) in the top left of the image, so I need to flip them around to make the text show properly on a screen whose (x,y) origin is at the bottom left.
void RenderText(std::string message, SDL_Color color, int x, int y,
TTF_Font* font)
{
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, gWindow->getWidth(), 0, gWindow->getHeight());
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
SDL_Surface * sFont = TTF_RenderText_Blended(font, message.c_str(), color);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sFont->w, sFont->h, 0, GL_BGRA,
GL_UNSIGNED_BYTE, sFont->pixels);
glBegin(GL_QUADS);
{
glTexCoord2f(0,1); glVertex2f(x, y);
glTexCoord2f(1,1); glVertex2f(x + sFont->w, y);
glTexCoord2f(1,0); glVertex2f(x + sFont->w, y + sFont->h);
glTexCoord2f(0,0); glVertex2f(x, y + sFont->h);
}
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glDeleteTextures(1, &texture);
SDL_FreeSurface(sFont);
}
I want to create a 3D cube with OpenGL, and I want to cover each side with an image that I turn into a texture.
I compute the cube's coordinates in 2D and create a GL_QUADS primitive for each side.
My problem is that when I render the textures on the corresponding cube sides, the textures overlap each other, as you can see in this image:
My code is:
Initialization:
glGenTextures(2, textures);
glClearColor (0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_ALWAYS);
Transforming the image into a texture:
up = imread("up.png");
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, up.cols, up.rows, GL_RGB, GL_UNSIGNED_BYTE, up.data);
Display cube:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
// Set Projection Matrix
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, WIDTH, HEIGHT, 0);
// Switch to Model View Matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, textures[1]);
glBegin(GL_QUADS);
//top side
glTexCoord2f(0.0f, 0.0f); glVertex2f((GLfloat)((coord[6].x)),(GLfloat)(coord[6].y));
glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)((coord[5].x)),(GLfloat)(coord[5].y));
glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)((coord[4].x)),(GLfloat)(coord[4].y));
glTexCoord2f(0.0f, 1.0f); glVertex2f((GLfloat)((coord[7].x)),(GLfloat)(coord[7].y));
glEnd();
I do the same for the other sides of the cube.
The order in which I render the textures is:
bottom (ground) side
top side
back side
front side
left side
right side
What is wrong, or what am I missing? Or maybe it is not possible to create a 3D cube with 2D coordinates (glVertex2f(...))?
Thanks for your help!
You can't create a cube with 2D coordinates. The sides are overlapping because they are all on the same plane in space. A cube lives in 3D space, so it needs 3 coordinates: x, y, and z.
So try using:
glVertex3f(x, y, z);
and use some appropriate z values depending on where you want each face.
For the texture you can still use:
glTexCoord2f(x, y);
since the textures are in 2 dimensional space.
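For example, two opposite faces might be specified like this (a sketch with made-up coordinates, not the marker coordinates from the question; note that gluOrtho2D maps z to the range -1..1, so either keep the z values inside that range or switch to glOrtho with explicit near/far planes):

glBegin(GL_QUADS);
// front face, pushed towards the viewer
glTexCoord2f(0.0f, 0.0f); glVertex3f(100.0f, 100.0f, 0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(200.0f, 100.0f, 0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(200.0f, 200.0f, 0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(100.0f, 200.0f, 0.5f);
// back face, pushed away from the viewer
glTexCoord2f(0.0f, 0.0f); glVertex3f(100.0f, 100.0f, -0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(200.0f, 100.0f, -0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(200.0f, 200.0f, -0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(100.0f, 200.0f, -0.5f);
glEnd();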
If you are still confused about what to use for your coordinates, I suggest you read this to help you understand 3D space in OpenGL:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
I'll begin by apologizing for the length of the question. I believe I've committed some small, dumb error, but since I'm entirely unable to find it, I decided to post all relevant code just in case.
I finally got texture loading working using QImage, and am able to render textures in immediate mode.
However, vertex arrays don't work, and I'm at a loss as to why.
The most obvious things like "Have you enabled vertex arrays and texture coordinate arrays?" are probably not the answer. I'll post the initialization code.
Here's the init function:
/* general OpenGL initialization function */
int initGL()
{
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0, 0, 0, 1); // Black Background
glEnable ( GL_COLOR_MATERIAL );
glColorMaterial ( GL_FRONT, GL_AMBIENT_AND_DIFFUSE );
glDisable(GL_DEPTH_TEST);
//ENABLED VERTEX ARRAYS AND TEXTURE COORDINATE ARRAYS
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
//ENABLED 2D TEXTURING
glEnable ( GL_TEXTURE_2D );
glPixelStorei ( GL_UNPACK_ALIGNMENT, 1 );
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
//seed random
srand(time(NULL));
return( TRUE );
}
I have initialization, resize and draw functions that are called by a QGLWidget (which is itself just a skeleton that calls the real work functions).
The texture loading function:
GLuint LoadGLTextures( const char * name )
{
//unformatted QImage
QImage img;
//load the image from a .qrc resource
if(!img.load(":/star.bmp"))
{
qWarning("ERROR LOADING IMAGE");
}
//an OpenGL formatted QImage
QImage GL_formatted_image;
GL_formatted_image = QGLWidget::convertToGLFormat(img);
//error check
if(GL_formatted_image.isNull())
qWarning("IMAGE IS NULL");
else
qWarning("IMAGE NOT NULL");
//texture ID
GLuint _textures[1];
//enable texturing
glEnable(GL_TEXTURE_2D);
//generate textures
glGenTextures(1,&_textures[0]);
//bind the texture
glBindTexture(GL_TEXTURE_2D,_textures[0]);
//texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D,_textures[0]);
//generate texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, GL_formatted_image.width(),
GL_formatted_image.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE,
GL_formatted_image.bits());
glBindTexture(GL_TEXTURE_2D,_textures[0]);
//return the texture ID
return _textures[0];
}
Here's the draw code:
//this does draw
//get the texture ID
GLuint tex_id = LoadGLTextures(":/star.png");
glBindTexture(GL_TEXTURE_2D, tex_id); // Actually have an array of images
glColor3f(1.0f, 0.0f, 0.5f);
glBegin(GL_QUADS);
glTexCoord2f(1.0f, 0.0f);glVertex2f(1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f);glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f);glVertex2f(0.0f, 1.0f);
glTexCoord2f(0.0f, 0.0f);glVertex2f(0.0f, 0.0f);
glEnd();
//this does not draw
//translations code
glLoadIdentity();
glTranslatef(-1.0f, 0.0f, 0.0f);
//bind the texture
glBindTexture(GL_TEXTURE_2D, tex_id);
//set color state
glColor4f(0.0f, 1.0f, 0.0f, 0.5);
//vertices to be rendered
static GLfloat vertices[] =
{
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f
};
static GLshort coord_Data[] =
{
1, 0,
1, 1,
0, 1,
0, 0
};
//bind the texture
glBindTexture(GL_TEXTURE_2D, tex_id);
//pointer to the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertices);
//texture coordinate pointer
glTexCoordPointer(2, GL_SHORT, 0, coord_Data);
//draw the arrays
glDrawArrays(GL_QUADS, 0, 4);
Thanks for all the help,
Dragonwrenn
One possibility is that the problem stems from calling glVertexPointer before calling glTexCoordPointer. Weird things happen when you specify the texture coordinate after the vertex coordinate. I know this is true for drawing a single vertex with a texture coordinate; I'm not sure whether it's true with arrays.
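For reference, this is why the ordering matters in immediate mode: glVertex* captures whatever the current texture coordinate is at that moment, so the coordinate has to be set first. A tiny illustration:

glTexCoord2f(0.0f, 0.0f); // sets the *current* texture coordinate ...
glVertex2f(0.0f, 0.0f);   // ... which is captured by this vertex call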
A few other things...
Have you tried using QPixmap instead of QImage? I doubt this is the answer to your problem, since it sounds like the texture is applied to the first quad properly.
There are multiple redundant calls to glBindTexture in the texture loading code.
Have you tried just drawing the vertices (without the texture) in the second part of the code?
Finally, do you get any compiler warnings?
The way you place your OpenGL state manipulations makes it difficult to keep track of things. It's a good idea to set OpenGL state on demand.
Move this
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
right before
//bind the texture
glBindTexture(GL_TEXTURE_2D, tex_id);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_SHORT, 0, coord_Data);
//draw the arrays
glDrawArrays(GL_QUADS, 0, 4);
You should also move the other code out of initGL.
This belongs in the texture loader, before supplying the data to glTexImage2D:
glPixelStorei ( GL_UNPACK_ALIGNMENT, 1 );
This belongs at the beginning of the drawing function:
glShadeModel(GL_SMOOTH);
glClearColor(0, 0, 0, 1);
glEnable( GL_COLOR_MATERIAL );
glColorMaterial ( GL_FRONT, GL_AMBIENT_AND_DIFFUSE );
glDisable(GL_DEPTH_TEST);
Following this scheme, you should set the viewport and the projection matrix in the drawing function, too. I'm only mentioning this because most tutorials do it differently, which tricks people into thinking that was the right way. Technically, projection and viewport are on-demand state as well.
You should not reload the texture with every draw call. Initializing the texture on demand from the drawing handler is a good idea, but you should add a flag to the texture-encapsulating class that records whether the referenced texture is already available to OpenGL.
Just for debugging purposes, try changing the type of the texture coordinates to floats.
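Putting those suggestions together, the vertex-array part of the draw code might look like this (a sketch reusing the names from the question, with the texture coordinates switched to floats):

// vertices and texture coordinates for the second quad
static const GLfloat vertices[] = {
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
    0.0f, 0.0f
};
static const GLfloat coord_Data[] = {
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
    0.0f, 0.0f
};

glLoadIdentity();
glTranslatef(-1.0f, 0.0f, 0.0f);

glBindTexture(GL_TEXTURE_2D, tex_id);
glColor4f(0.0f, 1.0f, 0.0f, 0.5f);

// enable the client state right where it is needed ...
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coord_Data);
glDrawArrays(GL_QUADS, 0, 4);

// ... and disable it again afterwards, so other draw paths are unaffected
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);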