I've got the following problem: I'm creating a GUI in OpenGL, and I have a Window component which can be dragged around the screen. Normally it looks like this:
Everything is OK unless you try to move it to the left so that part of it disappears outside the viewport. For example, the left button of the horizontal scrollbar is a separate texture, and when the Window is moved so far to the left that the whole button disappears, the first pixel of its texture is stretched across the whole screen, like this:
My CBackground::draw() method looks like this:
bool CBackground::draw(const CPoint &pos, unsigned int width, unsigned int height)
{
    if(width <= 0 || height <= 0)
        return false;

    GLfloat maxTexCoordWidth  = (float)width / m_width;
    GLfloat maxTexCoordHeight = (float)height / m_height;

    glPushMatrix();
    glBindTexture(GL_TEXTURE_2D, m_texture); // Select Our Texture

    switch(m_rotation)
    {
    case BACKGROUND_ROTATION_0_DG:
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, maxTexCoordHeight);             glVertex2d(pos.x, pos.y + height);
            glTexCoord2f(0.0f, 0.0f);                          glVertex2d(pos.x, pos.y);
            glTexCoord2f(maxTexCoordWidth, 0.0f);              glVertex2d(pos.x + width, pos.y);
            glTexCoord2f(maxTexCoordWidth, maxTexCoordHeight); glVertex2d(pos.x + width, pos.y + height);
        glEnd();
        break;
    }

    glPopMatrix();
    return true;
}
I tried to add the following check just before glPushMatrix():
if(pos.x <= 0 && pos.x + width <= 0)
    return false;
so that it shouldn't even draw that single pixel (the method returns early), but it still draws the stretched texture.
Your width and height are both unsigned, which means this check has no effect for values other than precisely zero:
if(width <= 0 || height <= 0)
return false;
Most likely, you have some width which would otherwise be negative, but due to the unsigned nature of your variables is wrapping around and becoming very large. This would explain why you seem to have a bar stretching out very far to the right (large positive value in the horizontal dimension).
A good solution would be to make the width and height signed, which would prevent this type of issue. Otherwise, you'll have to move such checks to (all the places) where the widths and heights are calculated. After all, you are not likely to have a window with a width or height beyond what can be represented with a signed int.
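As a minimal sketch of the signed variant (an adaptation of the snippet above for illustration, not the poster's actual code):

// Sketch: with signed sizes, a clipped element whose computed width went
// negative is rejected here instead of wrapping around to a huge value.
bool CBackground::draw(const CPoint &pos, int width, int height)
{
    if(width <= 0 || height <= 0)
        return false;

    if(pos.x + width <= 0)   // completely off-screen to the left
        return false;

    // ... same texturing code as above ...
    return true;
}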
Here's what I want to achieve: I have a flag called switch_2D_3D in the code below, and when it's true I switch to 2D mode, otherwise 3D.
void reshape(GLsizei width, GLsizei height)
{
    if (switch_2D_3D)
    {
        // GLsizei for non-negative integer
        // Compute aspect ratio of the new window
        if (height == 0)
            height = 1; // To prevent divide by 0
        GLfloat aspect = (GLfloat)width / (GLfloat)height;

        // Reset transformations
        glLoadIdentity();

        // Set the aspect ratio of the clipping area to match the viewport
        glMatrixMode(GL_PROJECTION); // To operate on the Projection matrix

        // Set the viewport to cover the new window
        glViewport(0, 0, width, height);

        if (width >= height)
        {
            // aspect >= 1, set the height from -1 to 1, with larger width
            gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);
        }
        else
        {
            // aspect < 1, set the width to -1 to 1, with larger height
            gluOrtho2D(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect);
        }

        winWidth = width;
        winHeight = height;
    } // 2D mode
    else
    {
        // Prevent a divide by zero when the window is too short
        // (you can't make a window of zero height).
        if (height == 0)
            height = 1;
        float ratio = width * 1.0 / height;

        // Use the Projection Matrix
        glMatrixMode(GL_PROJECTION);

        // Reset Matrix
        glLoadIdentity();

        // Set the viewport to be the entire window
        glViewport(0, 0, width, height);

        // Set the correct perspective.
        gluPerspective(45.0f, ratio, 0.1f, 100.0f);

        // Get back to the Modelview
        glMatrixMode(GL_MODELVIEW);

        winWidth = width;
        winHeight = height;
    } // 3D mode
}
Everything works perfectly when drawing only in 2D mode, but when I change the flag to switch to 3D mode, here comes the problem:
Every time I resize the window, the things I draw in the 3D scene (for example a cube) become smaller and smaller and eventually disappear.
If I switch back to 2D mode, everything in 2D mode still works fine; the problem is only with 3D mode.
Also, if I start the program with the flag set to false, I see a cube, and it still gets smaller each time I resize the window.
Why is this happening?
You should look at your glLoadIdentity() / glMatrixMode() interactions.
Right now, you have two different behaviors:
In 2D: you're resetting whatever matrix happens to be active when you enter the function, presumably GL_MODELVIEW, which causes the gluOrtho2D calls to "stack up" on the projection matrix.
In 3D: you're always resetting the projection matrix, which seems more correct.
Try swapping the order of the glLoadIdentity and glMatrixMode calls in your first path (2D) only.
It's a wise idea to always explicitly set the matrix you want to modify before actually modifying it.
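A minimal sketch of the 2D branch with the calls reordered as suggested (same code as in the question, only the order changed, plus a final switch back to GL_MODELVIEW mirroring the 3D branch):

// Select the projection matrix first, then reset it, so gluOrtho2D
// no longer accumulates on a stale matrix.
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);   // pick the matrix to modify...
glLoadIdentity();              // ...then reset it
if (width >= height)
    gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);
else
    gluOrtho2D(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect);
glMatrixMode(GL_MODELVIEW);    // leave the modelview matrix active for drawing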
How do I scroll a textured quad "pixel-exact" over the screen using float positions with GL_LINEAR filtering?
Whenever I try this, I can see a hard pixel change during the otherwise very smooth movement as soon as a subpixel coordinate becomes greater than or equal to 0.5. This looks like really ugly stuttering.
I think the problem is that after scrolling 0.5 subpixels the quad is so misaligned with the texture coordinates that OpenGL now takes another neighbouring texel for filtering, which is newly interpolated, so the newly drawn subpixel does not line up with the subpixel rendered before.
Could the solution be to realign the texture coordinates at positions > 0.5f? Can somebody help me out with a function that deals with this problem and calculates the right UV coordinates? I think the UV coordinates should be moved to a new position if the quad's subpixel position is >= 0.5f.
Here is a link to a screenshot that shows exactly the hard pixel jump in the x direction (left) at x = 0.5f in the last frame (zoom the screenshot with Ctrl+mousewheel):
http://i.stack.imgur.com/n0GVA.png
Here are the relevant code fragments:
float calcTexPos(float fTexPos, float spriteX, float spriteY)
{
    return fTexPos / 64.0f;
}

void addSpriteToLocalVerticeArray(Vertex *ptrVertexArrayLocal, Sprite *sprite, int *ptrSpriteVerticeCounter, float fCamX, float fCamY)
{
    Sprite oSpriteTranslated = *sprite;
    oSpriteTranslated.x = sprite->x - fCamX;
    oSpriteTranslated.y = sprite->y - fCamY;

    // Six vertices (two triangles) per sprite quad
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x, oSpriteTranslated.y, 0.0f, calcTexPos(1, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(62, oSpriteTranslated.x, oSpriteTranslated.y)); // Left Top
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x, oSpriteTranslated.y - oSpriteTranslated.height, 0.0f, calcTexPos(1, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(31, oSpriteTranslated.x, oSpriteTranslated.y)); // Left Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x + oSpriteTranslated.width, oSpriteTranslated.y - oSpriteTranslated.height, 0.0f, calcTexPos(33, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(31, oSpriteTranslated.x, oSpriteTranslated.y)); // Right Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x + oSpriteTranslated.width, oSpriteTranslated.y - oSpriteTranslated.height, 0.0f, calcTexPos(33, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(31, oSpriteTranslated.x, oSpriteTranslated.y)); // Right Bottom
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x + oSpriteTranslated.width, oSpriteTranslated.y, 0.0f, calcTexPos(33, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(62, oSpriteTranslated.x, oSpriteTranslated.y)); // Right Top
    ptrVertexArrayLocal[(*ptrSpriteVerticeCounter)++] = Vertex(oSpriteTranslated.x, oSpriteTranslated.y, 0.0f, calcTexPos(1, oSpriteTranslated.x, oSpriteTranslated.y), calcTexPos(62, oSpriteTranslated.x, oSpriteTranslated.y)); // Left Top
}
What I have tried that didn't work:
Adding an offset of 0.5f to my quad coordinates. That only moves the problem: the hard subpixel change now appears at each full 1.0f subpixel position.
Changing the calcTexPos function to (2.0f*fTexPos + 1.0f) / (2.0f*64.0f)
Checking whether MSAA is activated in the driver panel
Adding transparent borders to the texture instead of copied borders
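To make the filtering behaviour described in the question concrete, here is a small sketch (standard GL_LINEAR sampling math written out by hand; the helper is purely illustrative and not from the original post):

#include <cmath>
#include <cstdio>

// For a sample at texel coordinate t (e.g. u * 64.0f for a 64-texel-wide
// texture), GL_LINEAR blends the two texels around t - 0.5. As the quad
// scrolls by sub-pixel amounts, t shifts with it, and each time the
// fractional part crosses a texel boundary the filter switches to a new
// neighbour pair - the hard jump described above.
void linearNeighbours(float texelCoord)
{
    float t      = texelCoord - 0.5f;      // GL_LINEAR sample position
    int   left   = (int)std::floor(t);     // first texel of the blended pair
    float weight = t - (float)left;        // weight of texel 'left + 1'
    std::printf("blending texels %d and %d, weight %.2f\n", left, left + 1, weight);
}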
I have some questions about the screen setup. Originally, when I drew a triangle, x = 1 was all the way to the right and x = -1 all the way to the left. I have now adjusted it to account for the aspect ratio of the window. My new question is: how do I make the numbers used to render a 2D triangle correspond to pixel values? If my window is 480 pixels wide and 320 tall, I want to span the screen with a triangle by entering this:
glBegin(GL_TRIANGLES);
    glVertex2f(240, 320);
    glVertex2f(480, 0);
    glVertex2f(0, 0);
glEnd();
but instead it currently looks like this
glBegin(GL_TRIANGLES);
    glVertex2f(0, 1);
    glVertex2f(1, -1);
    glVertex2f(-1, -1);
glEnd();
Any ideas?
You need to use the functions glViewport and glOrtho with the correct values. Basically, glViewport sets the part of your window that OpenGL renders into, and glOrtho establishes the coordinate system within that part of the window.
So for your task you need to know the exact width and height of your window. If they are 480 and 320 respectively, then you need to call
glViewport(0, 0, 480, 320);
// or: glViewport(0, 0, w, h);
somewhere, for example in your size-change handler (if you are using the WinAPI, that is the WM_SIZE message).
Next, when setting up the OpenGL scene, you need to specify the OpenGL coordinates. For an orthographic projection they can match the dimensions of the window, so
glOrtho(-240, 240, -160, 160, -100, 100);
// or: glOrtho(-w/2, w/2, -h/2, h/2, -100, 100);
is suitable for your purpose. Note that here I'm using a depth range of 200 (z goes from -100 to 100).
Then, in your rendering routine, you can draw your triangle.
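A minimal sketch of how these calls might sit together in a resize handler (onResize is a hypothetical helper name; the -w/2..w/2 range follows the answer above, which puts the origin at the centre of the window):

// Hypothetical resize handler combining the glViewport and glOrtho calls above.
// With this glOrtho range the origin is the window centre, so the window's
// corners are at (-w/2, -h/2) and (w/2, h/2).
void onResize(int w, int h)
{
    if (h == 0) h = 1;          // avoid a zero-height viewport
    glViewport(0, 0, w, h);     // render into the whole window
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-w / 2.0, w / 2.0, -h / 2.0, h / 2.0, -100.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}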
Since the second piece of code works for you, I assume your transformation matrices are all identity (or you have a shader that bypasses them) and that your viewport spans the whole window.
In general, if your viewport starts at (x0, y0) and has size W x H, the normalized coordinates (x, y) you feed to glVertex2f are transformed to window coordinates (vx, vy) as follows:
vx = x0 + (x * .5f + .5f) * W
vy = y0 + (y * .5f + .5f) * H
If you want to use pixel coordinates, you can invert that mapping with a function such as
void vertex2(int x, int y)
{
    // Map the pixel centre (x + .5, y + .5) back to normalized device
    // coordinates in [-1, 1] for a 480x320 viewport.
    float vx = (float(x) + .5f) / 240.f - 1.f;
    float vy = (float(y) + .5f) / 160.f - 1.f;
    glVertex3f(vx, vy, -1.f);
}
The -1 z value is the closest depth to the viewer. It's negative because the z is assumed to be reflected after the transformation (which is identity in your case).
The addition of .5f is because the rasterizer considers a pixel as a 1x1 quad and evaluates the coverage of your triangle in the middle of this quad.
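For instance, the full-screen triangle from the question could then be issued in pixel coordinates (a small usage sketch built on the vertex2 helper above):

glBegin(GL_TRIANGLES);
    vertex2(240, 320);
    vertex2(480, 0);
    vertex2(0, 0);
glEnd();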
When my window is resized, I don't want the contents to scale, just the viewport to grow. I found this while searching on Stack Overflow (http://stackoverflow.com/questions/5894866/resize-viewport-crop-scene), which is pretty much the same as my problem. However, I'm confused as to what to set the Zoom to, and where; I tried it with 1.0f but then nothing was shown at all.
This is my resize function at the moment, which does the scaling:
void GLRenderer::resize() {
    RECT rect;
    int width, height;
    GLfloat aspect;

    GetClientRect(hWnd, &rect);
    width = rect.right;
    height = rect.bottom;
    if (height == 0) {
        height = 1;
    }
    aspect = (GLfloat) width / height;

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, aspect, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}
And my function to render a simple triangle:
void GLRenderer::render() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslated(0, 0, -20);

    glBegin(GL_TRIANGLES);
        glColor3d(1, 0, 0);
        glVertex3d(0, 1, 0);
        glVertex3d(1, -1, 0);
        glVertex3d(-1, -1, 0);
    glEnd();

    SwapBuffers(hDC);
}
You can change the zoom in y (height) with the "field of view" parameter to gluPerspective, the one that is 45 degrees in your code. As it is currently always 45 degrees, you will always get the same view angle (in y). How to change this value as a function of the window height is not obvious: a linear relation would fail for large values (180 degrees and up). I would try using arctan(height/k), where k is something like 500.
Notice also that when you extend the window in x, you already get what you want (the way your source code currently is). That is, you get a wider field of view, because you change the aspect (the second argument) to a value depending on the ratio between x and y.
Height and width are measured in pixels, so a value of 1 is not good.
Notice that you are using deprecated legacy OpenGL. See Legacy OpenGL for more information.
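A sketch of that suggestion (setupProjection is a hypothetical helper and k = 500 an assumed tuning constant; it simply replaces the hard-coded 45-degree field of view from the resize() above):

#include <cmath>

// Derive the vertical field of view from the window height so that a taller
// window shows more of the scene instead of scaling it up.
void setupProjection(int width, int height)
{
    if (height == 0)
        height = 1;

    GLfloat aspect = (GLfloat) width / height;
    double  k      = 500.0;
    double  fovY   = std::atan(height / k) * 180.0 / 3.141592653589793; // degrees

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY, aspect, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}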
I need to do some CPU operations on framebuffer data previously drawn by OpenGL. Sometimes the resolution at which I need to draw is higher than the texture resolution, so I have thought about picking a SIZE for the viewport and the target FBO, drawing, reading back to a CPU buffer, then moving the viewport somewhere else in space and repeating. In CPU memory I will then have all the needed color data. Unfortunately, for my purposes, I need to keep an overlap of 1 pixel between the vertical and horizontal borders of my tiles. So, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 to hold the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at will be
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, and likewise ntilesVer and ntilesHor. My code now looks like this:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);

for (int i = 0; i < ntiles; ++i)
{
    for (int j = 0; j < ntiles; ++j)
    {
        tileid = i * ntiles + j;

        int left   = max(0, (j * tilesize) - j);
        int right  = left + tilesize;
        int bottom = max(0, (i * tilesize) - i);
        int top    = bottom + tilesize;

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(left, right, bottom, top, -1, 0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw display list
        glCallList(DList);

        // Texture target of the FBO
        glReadBuffer(tex_render_target);

        // Read back to a preallocated CPU buffer
        glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
    }
}
The code runs, and the various "colorbuffers" do seem to contain color data similar to what my draw should produce; only, the overlap I need is not there: the last column of tile 0 and the first column of tile 1 yield different values.
Any idea?
int left   = max(0, (j * tilesize) - j);
int right  = left + tilesize;
int bottom = max(0, (i * tilesize) - i);
int top    = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel based mapping, as suggested by your viewport, with some constant overlap, then the -j and -i terms make no sense, as they're nonuniform. I think you want some constant value there. Also you don't need that max there. You want a 1 pixel overlap though, so your constant will be 0. Because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.
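Concretely, that would reduce the margin computation to something like this (a sketch of the suggestion above, keeping the variable names from the question):

// Constant (zero) margin: the right edge of tile j equals the left edge
// of tile j+1, and likewise for bottom/top.
int left   = j * tilesize;
int right  = left + tilesize;
int bottom = i * tilesize;
int top    = bottom + tilesize;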