Stencil picking test always returns 0 - OpenGL

I'm trying to implement a simple picking method in my OpenGL program.
First, I clear the stencil with
glClearStencil(0);
and at the start of my displayCallback function, I call
GLbitfield mask = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT;
glClear(mask);
then I draw all objects in the scene except 4 objects which I want to be selectable (the rest is non-interactive). When I draw these 4 objects, I first enable and configure the stencil test
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
and then draw
glStencilFunc(GL_ALWAYS, i + 1, -1);
drawItem( ... );
where i is the index of an item, and then call
glDisable(GL_STENCIL_TEST);
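For reference, the tagging pass described above, pulled together in one place (a sketch: the loop bounds follow the question's 4 selectable objects, and drawItem's arguments, elided in the question, stay elided here):

glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // write the ref value where the tests pass
for (int i = 0; i < 4; i++) {
    glStencilFunc(GL_ALWAYS, i + 1, -1);   // tag this object's pixels with i + 1
    drawItem( ... );                       // the question's draw call
}
glDisable(GL_STENCIL_TEST);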
When I try to pick an item when the user clicks the left mouse button, I use this in the mouseCallback function
if ((buttonPressed == GLUT_LEFT_BUTTON) && (buttonState == GLUT_DOWN)) {
    unsigned char column = 0;
    glReadPixels(mouseX,
                 (WINDOW_WIDTH - mouseY + 1),
                 1,
                 1,
                 GL_STENCIL_INDEX,
                 GL_UNSIGNED_BYTE,
                 &column);
    if (column == 0) {
        printf("Clicked at %d %d, No column clicked on\n", mouseX, mouseY);
    }
    else {
        printf("Clicked on column with ID: %d\n", (int)column);
    }
}
But no matter where I click, I never select any column. I even added mouseX and mouseY to the output to check that I'm not reading back the same pixel every time, and that part works fine.
Does anyone see any problem with this code? Thanks a lot!

Related

Incorrect values written in stencil buffer

I am trying to pick objects on mouse click. For this I have followed this tutorial, and tried to use the stencil buffer for this purpose.
Inside "game" loop I am trying to draw 10 (5 pairs) 'pick'able triangles as follows:
...
glClearColor(red, green, blue, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glClearStencil(0); // this is the default value
/* Enable stencil operations */
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
/*Some other drawing not involving stencil buffer*/
GLuint index = 1;
for (GLshort i = 0; i < 5; i++)
{
    //this returns 2 model matrices
    auto modelMatrices = trianglePairs[i].getModelMatrices();
    for (GLshort j = 0; j < 2; j++)
    {
        glStencilFunc(GL_ALWAYS, index, -1);
        glUniformMatrix4fv(glGetUniformLocation(ourShader.Program, "model"), 1, GL_FALSE, glm::value_ptr(modelMatrices[j]));
        glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, BUFFER_OFFSET(2));
        index++;
    }
    /*Some other drawing not involving stencil buffer*/
}
/*Some other drawing not involving stencil buffer*/
...
However, when I try to read back the stencil values, I get wrong values. I read them back as follows (this is also part of the above-mentioned tutorial):
GLuint index;
glReadPixels(xpos, Height - ypos - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT, &index);
Whenever I click the first triangle of a pair I get the value i+1, whereas the correct value should have been i, and for the second triangle of the pair I get 0 as the index.
Please let me know what I am missing here.
Update
I have found that stencil values can be applied to quads. When I applied the stencil value to a unit square it worked correctly. However, when the quad is not a unit square, it returns 0. What is the reason for this?

Only glClear(..) color is displayed, nothing else rendered (CUDA/OpenGL interop)

I have a WinForms application with a panel (500x500 pixels) that I want to render something in. At this point I am just trying to fill it in with a specific color. I want to use OpenGL/CUDA interop to do this.
I got the panel configured to be the region to render stuff in, however when I run my code, the panel just gets filled with the glClear(..) color, and nothing assigned by the kernel is displayed. It sort of worked this morning (inconsistently), and in my attempt to sort out the SwapBuffers() mess, I think I screwed it up.
Here is the pixel format initialization for OpenGL. It seems to work fine: I have the two buffers I expected, and the context is correct:
static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
    1,                             // Version Number
    PFD_DRAW_TO_WINDOW |           // Format Must Support Window
    PFD_SUPPORT_OPENGL |           // Format Must Support OpenGL
    PFD_DOUBLEBUFFER,              // Must Support Double Buffering
    PFD_TYPE_RGBA,                 // Request An RGBA Format
    16,                            // Select Our Color Depth
    0, 0, 0, 0, 0, 0,              // Color Bits Ignored
    0,                             // No Alpha Buffer
    0,                             // Shift Bit Ignored
    0,                             // No Accumulation Buffer
    0, 0, 0, 0,                    // Accumulation Bits Ignored
    16,                            // 16Bit Z-Buffer (Depth Buffer)
    0,                             // No Stencil Buffer
    0,                             // No Auxiliary Buffer
    PFD_MAIN_PLANE,                // Main Drawing Layer
    0,                             // Reserved
    0, 0, 0                        // Layer Masks Ignored
};
GLint iPixelFormat;
// get the device context's best, available pixel format match
if((iPixelFormat = ChoosePixelFormat(hdc, &pfd)) == 0)
{
    MessageBox::Show("ChoosePixelFormat Failed");
    return 0;
}
// make that match the device context's current pixel format
if(SetPixelFormat(hdc, iPixelFormat, &pfd) == FALSE)
{
    MessageBox::Show("SetPixelFormat Failed");
    return 0;
}
if((m_hglrc = wglCreateContext(m_hDC)) == NULL)
{
    MessageBox::Show("wglCreateContext Failed");
    return 0;
}
if((wglMakeCurrent(m_hDC, m_hglrc)) == NULL)
{
    MessageBox::Show("wglMakeCurrent Failed");
    return 0;
}
After this is done, I set up the viewport:
glViewport(0,0,iWidth,iHeight); // Reset The Current Viewport
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
glEnable(GL_DEPTH_TEST);
Then I set up the clear color and do a clear:
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
Now I set up the CUDA/OpenGL interop:
cudaDeviceProp prop; int dev;
memset(&prop, 0, sizeof(cudaDeviceProp));
prop.major = 1; prop.minor = 0;
checkCudaErrors(cudaChooseDevice(&dev, &prop));
checkCudaErrors(cudaGLSetGLDevice(dev));
glBindBuffer = (PFNGLBINDBUFFERARBPROC)GET_PROC_ADDRESS("glBindBuffer");
glDeleteBuffers = (PFNGLDELETEBUFFERSARBPROC)GET_PROC_ADDRESS("glDeleteBuffers");
glGenBuffers = (PFNGLGENBUFFERSARBPROC)GET_PROC_ADDRESS("glGenBuffers");
glBufferData = (PFNGLBUFFERDATAARBPROC)GET_PROC_ADDRESS("glBufferData");
GLuint bufferID;
cudaGraphicsResource * resourceID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, bufferID);
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, fWidth*fHeight*4, NULL, GL_DYNAMIC_DRAW_ARB);
checkCudaErrors(cudaGraphicsGLRegisterBuffer( &resourceID, bufferID, cudaGraphicsMapFlagsNone ));
Now I try to call my kernel (which just paints each pixel a specific color) and have that displayed.
uchar4* devPtr;
size_t size;
// First clear the back buffer:
glClearColor(1.0f, 0.5f, 0.0f, 0.0f); // orange
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
checkCudaErrors(cudaGraphicsMapResources(1, &resourceID, NULL));
checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, resourceID));
animate(devPtr); // This will call the kernel and do a sync (see later)
checkCudaErrors(cudaGraphicsUnmapResources(1, &resourceID, NULL));
// Swap buffers to bring back buffer forward:
SwapBuffers(m_hDC);
At this point I expect to see the kernel colors on the screen, but no! I see orange, which is the clear color that I just set.
Here is the call to the kernel:
void animate(uchar4* dispPtr)
{
    checkCudaErrors(cudaDeviceSynchronize());
    animKernel<<<blocks, threads>>>(dispPtr, envdim);
    checkCudaErrors(cudaDeviceSynchronize());
}
Here envdim is just the dimensions (so 500x500). The kernel itself:
__global__ void animKernel(uchar4 *optr, dim3 matdim)
{
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * matdim.x;
    if (x < matdim.x && y < matdim.y)
    {
        // BLACK:
        optr[offset].x = 0; optr[offset].y = 0; optr[offset].z = 0;
    }
}
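For reference, a launch configuration consistent with the 500x500 case (a sketch: the names blocks and threads come from the animate() call above, but the 16x16 tile size is an assumption; the kernel's bounds check covers the rounding up):

dim3 threads(16, 16);
dim3 blocks((500 + threads.x - 1) / threads.x,  // ceil(500 / 16) = 32
            (500 + threads.y - 1) / threads.y);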
Things I've done:
The size returned by cudaGraphicsResourceGetMappedPointer is 1000000, which corresponds to the 500x500 matrix of uchar4 (500 * 500 * 4 bytes), so that's good.
Each kernel thread printed the value and location it was writing to, and that seemed OK.
Played with the alpha value for the clear color, but that doesn't seem to do anything (yet?)
Ran the animate() function several times. Don't know why I thought that would help, but I tried it...
So I guess I'm missing something, but I'm going kind of crazy looking for it. Any advice? Help?
It's another one of those questions I answer myself! Hmph, as I figured, it was a one-line issue. The problem resides in the rendering call itself.
The configuration is fine, the one issue I have with the code above is:
I never called glDrawPixels(), which is necessary for the OpenGL driver to copy the shared buffer (GL_PIXEL_UNPACK_BUFFER_ARB) to the display buffer. The correct rendering sequence is then:
uchar4* devPtr;
size_t size;
// First clear the back buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
checkCudaErrors(cudaGraphicsMapResources(1, &resourceID, NULL));
checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, resourceID));
animate(devPtr); // This will call the kernel and do a sync (see later)
checkCudaErrors(cudaGraphicsUnmapResources(1, &resourceID, NULL));
// This is necessary to copy the shared buffer to display
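// With a buffer bound to GL_PIXEL_UNPACK_BUFFER, the last argument (0)
// is a byte offset into that buffer, not a host memory pointer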
glDrawPixels(fWidth, fHeight, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Swap buffers to bring back buffer forward:
SwapBuffers(m_hDC);
I'd like to thank the Acade-- uh, CUDA By Example, once again for helping me. Even though the example code from the book used GLUT (which was completely useless for this...), the book referenced normal gl functions.

Outlining a 3D model using Stencil-buffer

First I'd like to mention that I found what I believe to be the exact same question, unfortunately without an answer, here:
Java Using OpenGL Stencil to create Outline
I will post my code below, but first here is the problem: in the capture linked below**, you can see that the entire wireframe structure is showing, instead of a single outline around the sphere. I would like to get rid of all those lines inside!
** Apparently I cannot add pictures; see this link instead - imagine a sphere with all the edges of its quads visible as thick 3-pixel lines:
http://srbwks36224-03.engin.umich.edu/kdi/images/gs_sphere_with_frame.jpg
Here is the code giving that result:
// First render the sphere:
// inside "show" is all the code to display a textured sphere
// looking like earth
sphe->show();
// Now get ready for stencil buffer drawing pass:
// 1. Clear and initialize it
// 2. Activate stencil buffer
// 3. On the first rendering pass, we want to "SUCCEED ALWAYS"
// and write a "1" into the stencil buffer accordingly
// 4. We don't need to actually render the object, hence disabling RGB mask
glClearStencil(0); //Edit: swapped this line and below
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NEVER, 0x1, 0x1); //Edit: GL_ALWAYS
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP); //Edit: GL_KEEP, GL_KEEP, GL_REPLACE
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE); // As per Andon's comment
sphe->show();
// At this point, I expect to have "1" on the entire
// area covered by the sphere, so...
// 1. Stencil test should fail for anything but a 0 value
// RM: commented out below is another option that should work too, I believe
// 2. The stencil op instruction at this point is somewhat irrelevant
// (if my understanding is correct), because we won't do anything
// else with the stencil buffer after that.
// 3. Re-enable RGB mask, because we want to draw this time
// 4. Switch to LINE drawing instead of FILL and
// 5. set a bigger line width, so it will exceed the model boundaries.
// We do want this, otherwise the line would not show
// 6. Don't mind the "uniform" setting instruction, this is so
// that my shader knows it should draw in plain color
// 7. Draw the sphere's frame
// 8. The principle, as I understand it is that all the lines should
// find themselves matched to a "1" in the stencil buffer and therefore
// be ignored for rendering. Only lines on the edges of the model should
// have half their width not failing the stencil test.
glStencilFunc(GL_EQUAL, 0x0, 0x1);
//glStencilFunc(GL_NOTEQUAL, 0x1, 0x1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glLineWidth(3);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
psa::shader::setUniform("outlining", 1);
sphe->show();
psa::shader::setUniform("outlining", 0);
Now just to prove a point, I tried to do something different using the stencil buffer - I just wanted to make sure that everything in my code was in place for it to work.
** Again, I unfortunately cannot show a screen capture of the result I get; the scene looks like this:
http://mathworld.wolfram.com/images/eps-gif/SphereSphereInterGraphic_700.gif
But the smaller sphere is invisible (RGB mask deactivated) and one can see the world background through the hole (instead of the inside of the bigger sphere - face culling is deactivated).
And this is the code... Interestingly, I can change many things, like activating/deactivating the STENCIL_TEST, changing the operation to GL_KEEP everywhere, or even changing the second stencilFunc to "NOT EQUAL 0"... The result is always the same! I think I am missing something basic here.
void testStencil()
{
    // 1. Write a 1 in the Stencil buffer for
    // every pixel of the first sphere:
    // All colors disabled, we don't need to see that sphere
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0x1, 0x1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE); // Edit: added this
    {
        sphe->W = mat4::trans(psa::vec4(1.0, 1.0, 1.0)) * mat4::scale(0.9);
        sphe->show();
    }
    // 2. Draw the second sphere with the following rule:
    // fail the stencil test for every pixel with a 1.
    // This means that any pixel from the first sphere will
    // not be drawn as part of the second sphere.
    glStencilFunc(GL_EQUAL, 0x0, 0x1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE); // Edit: added this
    {
        sphe->W = mat4::trans(psa::vec4(1.2, 1.2, 1.2)) * mat4::scale(1.1);
        sphe->show();
    }
}
Et voila! If anyone could point me in the right direction I would very much appreciate it. I'll also make sure to reference your answer on the other post I found.
The OpenGL code posted in this question works. The cause of the problem lay in the window initialization/creation.
Below are, respectively, the SDL1.2 and SDL2 versions of the code that work. Note that in both cases the SetAttribute calls are placed before the window creation. The main problem is that misplaced calls will not necessarily fail at run time, but they won't take effect either.
SDL1.2:
if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
    throw "Video initialization failed";
}
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 16);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
const SDL_VideoInfo * i;
if((i = SDL_GetVideoInfo()) == NULL)
{
    throw "Video query failed";
}
int flag = (fs ? SDL_OPENGL | SDL_FULLSCREEN : SDL_OPENGL);
if(SDL_SetVideoMode(w, h, i->vfmt->BitsPerPixel, flag) == 0)
{
    throw "Video mode set failed";
}
glewExperimental = GL_TRUE;
if(glewInit() != GLEW_OK)
{
    throw "Could not initialize GLEW";
}
if(!glewIsSupported("GL_VERSION_3_3"))
{
    throw "OpenGL 3.3 not supported";
}
SDL2 (same code essentially, just the window creation functions change):
if(SDL_Init(SDL_INIT_EVERYTHING) < 0)
{
    throw "Video initialization failed";
}
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 16);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
int flag = SDL_WINDOW_OPENGL;
if((win = SDL_CreateWindow("engine", 100, 100, w, h, flag)) == NULL)
{
    throw "Create SDL Window failed";
}
context = SDL_GL_CreateContext(win);
glewExperimental = GL_TRUE;
if(glewInit() != GLEW_OK)
{
    throw "Could not initialize GLEW";
}
if(!glewIsSupported("GL_VERSION_3_3"))
{
    throw "OpenGL 3.3 not supported";
}
A few points worth mentioning:
1. Even though I understand it is not advised, SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 1) works too.
2. In SDL2, SDL_GL_MULTISAMPLESAMPLES goes up to 16 for me; after that, glewInit() fails. BUT move the multisampling setting after the window creation and suddenly glewInit() stops complaining: my guess is that the setting is then just being ignored.
3. In SDL1.2, any value of multisampling "seems" to work.
4. Considering only the stencil buffer feature, the code below works too, but I post it essentially to raise the question: how many of the attribute settings are actually applied? And how would one know, since the code compiles and runs without apparent problems? (See the query sketch after the code below.)
// PROBABLY WRONG:
// ----
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
const SDL_VideoInfo * i;
if((i = SDL_GetVideoInfo()) == NULL)
{
    throw "Video query failed";
}
int flag = (fs ? SDL_OPENGL | SDL_FULLSCREEN : SDL_OPENGL);
if(SDL_SetVideoMode(w, h, i->vfmt->BitsPerPixel, flag) == 0)
{
    throw "Video mode set failed";
}
// No idea if the below is actually applied!
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 16);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
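One way to answer point 4: once the context exists, SDL_GL_GetAttribute reports the values the framebuffer actually received, so a silently ignored request shows up immediately. A minimal sketch (the attribute names are real SDL ones; the printf formatting is illustrative):

int stencilBits = 0, depthBits = 0, msSamples = 0;
SDL_GL_GetAttribute(SDL_GL_STENCIL_SIZE, &stencilBits);     // what we actually got,
SDL_GL_GetAttribute(SDL_GL_DEPTH_SIZE, &depthBits);         // not what we asked for
SDL_GL_GetAttribute(SDL_GL_MULTISAMPLESAMPLES, &msSamples);
printf("stencil: %d bits, depth: %d bits, MSAA samples: %d\n",
       stencilBits, depthBits, msSamples);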

SDL OpenGL Rendering Issue

I am just learning how to use SDL and OpenGL. I've been through the tutorials over at SDLTutorials.com. I am now trying to take things further and create a loading screen for an application: an OOP menu system.
I am starting off real simple to test things out. I have the main backdrop of the window done via a texture class. The two relevant functions are defined here:
void Texture::Bind() {
    glBindTexture(GL_TEXTURE_2D, TextureID);
}
void Texture::RenderQuad(int X, int Y, int Width, int Height) {
    Bind();
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(X, Y, 0);
    glTexCoord2f(1, 0); glVertex3f(X + Width, Y, 0);
    glTexCoord2f(1, 1); glVertex3f(X + Width, Y + Height, 0);
    glTexCoord2f(0, 1); glVertex3f(X, Y + Height, 0);
    glEnd();
}
I have also created a menu class that will basically be the parent of any menu created in the system; classes to be developed would include a click button or an edit box. So far, the only relevant function in class Menu is:
void Menu::OnRender() {
    if( Visible ) {
        if(MenuTexture == NULL) {
            glColor4f(1.,G,B,A);
            glBegin(GL_QUADS);
            glVertex2d(X, Y);
            glVertex2d(X+Width,Y);
            glVertex2d(X+Width,Y+Height);
            glVertex2d(X,Y+Height);
            glEnd();
        } else {
            MenuTexture->RenderQuad(X,Y,Width,Height);
        }
    }
}
Finally, my application creates one texture that it binds to the whole window, i.e. the background.
void App::OnRender() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    MainScreen.RenderQuad(0,0,winWidth,winHeight); //Texture Object with bkg.jpg
    MainMenu.OnRender(); //Menu object to be 150x300 and all black
    SDL_GL_SwapBuffers();
}
I then create a menu item and try to draw it over the backdrop. If I leave the initialized colors black, the entire screen turns black almost instantly: I get one frame, probably the first cycle through, where the menu is drawn and the black square is placed on the window, and then everything turns black. If I use 1.0 for the red channel, then the entire picture looks as if a red filter were placed over it, and the menu box draws as an opaque red square.
I've been through several examples of how rendering works, but I must be missing something. Essentially, lining up what's being done: I bind the texture, define the coordinates of the quad and of the texture, then render the quad. I've put the color call in my menu inside glBegin(), defining the color at all 4 vertices.
Thanks to my chosen answer: this solved the issue. However, it didn't render the Menu object with the correct color until I returned the mode to modulate. I'm not sure why; I'll be playing around with it and reading up on this to learn more. Thank you!
[SOLVED]
Changes only made to App::OnRender():
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
MainScreen.RenderQuad(0,0,winWidth,winHeight);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
MainMenu.OnRender();
GL_TEXTURE_ENV_MODE defaults to GL_MODULATE which multiplies incoming texel values by the current color.
If the current color is black (RGB(0,0,0)) then that will render all your textures black. Same for single-channel colors: RGB(1,0,0) * RGB(x,y,z) == RGB(x,0,0).
Try using GL_DECAL.
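An alternative sketch, if you want to stay on GL_MODULATE throughout (standard GL behavior; the function names are the question's own): reset the current color to white before the textured draw, so modulation leaves the texel values unchanged.

glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // white is the identity for modulation
MainScreen.RenderQuad(0, 0, winWidth, winHeight);
MainMenu.OnRender(); // sets its own color for the untextured quad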

Texture stretched when drawn outside viewport in OpenGL

I've got the following problem: I'm creating a GUI in OpenGL, and I have a Window component which can be dragged around the screen. Normally it looks fine.
Everything is ok until you move it to the left so that part of it disappears outside the viewport. For example, the left button in the horizontal scrollbar is a separate texture, and when we move the Window so far left that the whole button disappears, the first pixel of its texture gets stretched across the whole screen.
My CBackground::draw() method looks like this:
bool CBackground::draw(const CPoint &pos, unsigned int width, unsigned int height)
{
    if(width <= 0 || height <= 0)
        return false;
    GLfloat maxTexCoordWidth = (float)width/m_width;
    GLfloat maxTexCoordHeight = (float)height/m_height;
    glPushMatrix();
    glBindTexture(GL_TEXTURE_2D, m_texture); // Select Our Texture
    switch(m_rotation)
    {
        case BACKGROUND_ROTATION_0_DG:
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, maxTexCoordHeight); glVertex2d(pos.x, pos.y + height);
            glTexCoord2f(0.0f, 0.0f); glVertex2d(pos.x, pos.y);
            glTexCoord2f(maxTexCoordWidth, 0.0f); glVertex2d(pos.x + width, pos.y);
            glTexCoord2f(maxTexCoordWidth, maxTexCoordHeight); glVertex2d(pos.x + width, pos.y + height);
            glEnd();
            break;
    }
    glPopMatrix();
    return true;
}
I tried adding code like this just before glPushMatrix():
if(pos.x <= 0 && pos.x + width <= 0)
    return false;
so that it shouldn't even draw that one pixel (I leave the method with return), but it still draws the stretched texture.
Your width and height are both unsigned, which means this check has no effect for values other than exactly zero:
if(width <= 0 || height <= 0)
    return false;
Most likely, you have some width which would otherwise be negative, but due to the unsigned nature of your variables it wraps around and becomes very large. This would explain why you see a bar stretching far to the right (a large positive value in the horizontal dimension).
A good solution would be to make the width and height signed, which would prevent this type of issue. Otherwise, you'll have to move such checks to all the places where the widths and heights are calculated. After all, you are not likely to have a window with a width or height beyond what can be represented with a signed int.
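A minimal sketch of that signed fix (the class and parameter names match the question; switching the parameters to int is the suggested change, not code from the question):

bool CBackground::draw(const CPoint &pos, int width, int height)
{
    // With signed types, a quad pushed past the left edge produces a
    // genuinely negative width here instead of wrapping to a huge value.
    if(width <= 0 || height <= 0)
        return false;
    // ... rest of the method unchanged
}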