Read different parts of an OpenGL framebuffer keeping overlap

I need to do some CPU operations on framebuffer data previously drawn by OpenGL. Sometimes the resolution at which I need to draw is higher than the texture resolution, so I have thought about picking a SIZE for the viewport and the target FBO, drawing, reading back into a CPU buffer, then moving the viewport somewhere else in the space and repeating. In CPU memory I will then have all the needed color data. Unfortunately, for my purposes I need to keep an overlap of 1 pixel between the vertical and horizontal borders of my tiles. So, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at will be
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, as will ntilesVer and ntilesHor. My code now looks like
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);
for (int i = 0; i < ntiles; ++i)
{
    for (int j = 0; j < ntiles; ++j)
    {
        tileid = i * ntiles + j;
        int left = max(0, (j * tilesize) - j);
        int right = left + tilesize;
        int bottom = max(0, (i * tilesize) - i);
        int top = bottom + tilesize;
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(left, right, bottom, top, -1, 0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Draw display list
        glCallList(DList);
        // Texture target of the FBO
        glReadBuffer(tex_render_target);
        // Read back into a preallocated CPU buffer
        glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
    }
}
The code runs, and the various "colorbuffers" do seem to hold color data similar to what my draw should produce; but the overlap I need is not there: the last column of tile 0 and the first column of tile 1 hold different values.
Any idea?

int left = max(0, (j*tilesize)- j);
int right = left + tilesize;
int bottom = max(0, (i*tilesize)- i);
int top = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel-based mapping, as your viewport suggests, with some constant overlap, then the -j and -i terms make no sense, as they are non-uniform. I think you want some constant value there. You also don't need the max. Since you want a 1-pixel overlap, your constant will be 0, because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.

Related

Shader texture values not the same as written when creating the texture

I have created a texture and filled it with ones:
size_t size = width * height * 4;
float *pixels = new float[size];
for (size_t i = 0; i < size; ++i) {
    pixels[i] = 1.0f;
}
glTextureStorage2D(texture_id, 1, GL_RGBA16F, width, height);
glTextureSubImage2D(texture_id, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
I use linear filtering (GL_LINEAR) and clamp to border.
But when I draw the image:
color = texture(atlas, uv);
the last row looks like it has alpha values of less than 1. If in the shader I set the alpha to 1:
color.a = 1.0f;
it draws it correctly. What could be the reason for this?
The problem comes from the combination of GL_LINEAR and GL_CLAMP_TO_BORDER:
Clamp to border means that every texture coordinate outside of [0, 1] will return the border color. This color can be set with glTexParameterf(..., GL_TEXTURE_BORDER_COLOR, ...) and is black by default.
Linear filtering takes into account texels adjacent to the sampling location (unless sampling happens exactly at texel centers1)), and will thus also read border texels (which are black here).
If you don't want this behavior, the simplest solution would be to use GL_CLAMP_TO_EDGE instead, which repeats the last row/column of texels out to infinity. The different wrapping modes are explained very well at open.gl.
1) Sampling most probably does not happen at texel centers, as explained in this answer.

Why OpenGL state changes very slow on VirtualBox?

I'm on a Linux host, using a Windows guest.
I create the OpenGL context with SDL and just draw 1000 objects in different positions; each object is only 6 vertices and 3 lines, a simple 3D cross.
I'm using vertex and index buffers and GLSL shaders. The code doesn't do anything special: it binds the buffers, sets the vertex attrib pointers, sets the matrix and draws the elements.
It renders the scene in 2 seconds. If I hoist the buffer binding and attribute setup outside the loop, it renders in 200 ms; if I also remove the glUniformMatrix4fv call that updates the matrix per position, it renders in about 10 ms, though then I only see one object, with the other thousand just overdrawn on top of it.
On a Linux or Windows host, the same thing renders at a stable 60 FPS.
OpenGL games like OpenArena run at 60 FPS in VirtualBox...
Is buffer binding and uniform setting a slow operation in OpenGL in general?
Does anyone have experience testing 3D programs on VirtualBox?
Update: added some code, error checking removed for clarity:
void drawStuff()
{
    GLfloat projectView[16];
    int ms;
    RenderingContext rc; /*< Nothing special, contains the currently bound render object. */

    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    createViewMatrix(
        viewMatrix,
        &viewedObject->attitude.k,
        &viewedObject->attitude.j,
        &viewedObject->pos
    );
    multiplyMatrix(projectView, viewMatrix, projMatrix); /*< Combine projection and view matrices. */

    /* Draw 10×10×10 grid of 3D crosses. One cross is 6 vertices and 3 lines. */
    bindRenderObject(&rc, &cross); /*< Binds buffers and sets up vertex attrib arrays. Very slow if moved into the loop. */
    {
        int i, j, k;
        for (i = -5; i < 5; i++)
        {
            for (j = -5; j < 5; j++)
            {
                for (k = -5; k < 5; k++)
                {
                    createTranslationMatrix(modelMatrix, i * 10, j * 10, k * 10);
                    multiplyMatrix(combinedMatrix, modelMatrix, projectView);
                    glUniformMatrix4fv(renderingMatrixId, 1, GL_FALSE, combinedMatrix); /*< This is slow for some reason. */
                    drawRenderObject(&rc); /*< Just a call to glDrawElements. No performance bottleneck here at all. */
                }
            }
        }
    }

    /* Draw some UI. */
    glDisable(GL_DEPTH_TEST);
    /* ... */
    glEnable(GL_DEPTH_TEST);

    SDL_GL_SwapBuffers();
}

3d drawing in OpenGL

I'm trying to draw a chess board in OpenGL. I can draw the squares of the game board exactly as I want. But I also want to add a small border around the perimeter of the game board. Somehow, my perimeter is way bigger than I want. In fact, each edge of the border is the exact width of the entire game board itself.
My approach is to draw a neutral gray rectangle to represent the entire "slab" of wood that would be cut to make the board. Then, inside this slab, I place the 64 game squares, which should be exactly centered and take up just slightly less 2D space than the slab does. I'm open to better ways, but keep in mind that I'm not very bright.
EDIT: In the image below, all that gray area should be about half the size of a single square. But as you can see, each edge is the size of the entire game board. Clearly I'm not understanding something.
Here is the display function that I wrote. Why is my "slab" so much too large?
void display()
{
    // Clear the image
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Reset any previous transformations
    glLoadIdentity();

    // Define the slab
    float square_edge = 8;
    float border = 4;
    float slab_thickness = 2;
    float slab_corner = 4 * square_edge + border;

    // Set the view angle
    glRotated(ph, 1, 0, 0);
    glRotated(th, 0, 1, 0);
    glRotated(zh, 0, 0, 1);

    float darkSquare[3] = {0, 0, 1};
    float lightSquare[3] = {1, 1, 1};

    // Set the viewing matrix
    glOrtho(-slab_corner, slab_corner, slab_corner, -slab_corner, -slab_corner, slab_corner);

    GLfloat board_vertices[8][3] = {
        {-slab_corner, slab_corner, 0},
        {-slab_corner, -slab_corner, 0},
        {slab_corner, -slab_corner, 0},
        {slab_corner, slab_corner, 0},
        {-slab_corner, slab_corner, slab_thickness},
        {-slab_corner, -slab_corner, slab_thickness},
        {slab_corner, -slab_corner, slab_thickness},
        {slab_corner, slab_corner, slab_thickness}
    };
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_INT, 0, board_vertices);

    // This defines each of the six faces in counter-clockwise vertex order
    GLubyte slabIndices[] = {0,3,2,1, 2,3,7,6, 0,4,7,3, 1,2,6,5, 4,5,6,7, 0,1,5,4};
    glColor3f(0.3, 0.3, 0.3); // upper left square is always light
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, slabIndices);

    // Draw the individual squares on top, centered inside the slab
    for (int x = -4; x < 4; x++) {
        for (int y = -4; y < 4; y++) {
            // Set the color of the square
            if ((x + y) % 2) glColor3fv(darkSquare);
            else glColor3fv(lightSquare);
            glBegin(GL_QUADS);
            glVertex2i(x * square_edge, y * square_edge);
            glVertex2i(x * square_edge + square_edge, y * square_edge);
            glVertex2i(x * square_edge + square_edge, y * square_edge + square_edge);
            glVertex2i(x * square_edge, y * square_edge + square_edge);
            glEnd();
        }
    }
    glFlush();
    glutSwapBuffers();
}
glVertexPointer(3, GL_INT, 0, board_vertices);
specifies that board_vertices contains integers, but it's actually of type GLfloat. Could this be the problem?

Texture stretched when drawn outside viewport in OpenGL

I've got the following problem: I'm creating a GUI in OpenGL, and I have a Window component which can be dragged around the screen. Normally it looks like this:
Everything is OK unless you try to move it so far to the left that part of it disappears outside the viewport. For example, the left button of the horizontal scrollbar is a separate texture; when we move the Window so far left that the whole button disappears, the first pixel of its texture gets stretched across the whole screen, like this:
My CBackground::draw() method looks like this:
bool CBackground::draw(const CPoint &pos, unsigned int width, unsigned int height)
{
    if (width <= 0 || height <= 0)
        return false;

    GLfloat maxTexCoordWidth = (float)width / m_width;
    GLfloat maxTexCoordHeight = (float)height / m_height;

    glPushMatrix();
    glBindTexture(GL_TEXTURE_2D, m_texture); // Select our texture
    switch (m_rotation)
    {
    case BACKGROUND_ROTATION_0_DG:
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, maxTexCoordHeight); glVertex2d(pos.x, pos.y + height);
        glTexCoord2f(0.0f, 0.0f); glVertex2d(pos.x, pos.y);
        glTexCoord2f(maxTexCoordWidth, 0.0f); glVertex2d(pos.x + width, pos.y);
        glTexCoord2f(maxTexCoordWidth, maxTexCoordHeight); glVertex2d(pos.x + width, pos.y + height);
        glEnd();
        break;
    }
    glPopMatrix();
    return true;
}
I tried to add the following code just before glPushMatrix():
if (pos.x <= 0 && pos.x + width <= 0)
    return false;
so that it wouldn't even draw that one pixel (I leave the method with return), but it still draws the stretched texture.
Your width and height are both unsigned, which means this check has no effect for values other than precisely zero:
if(width <= 0 || height <= 0)
return false;
Most likely, you have some width which would otherwise be negative, but due to the unsigned nature of your variables is wrapping around and becoming very large. This would explain why you seem to have a bar stretching out very far to the right (large positive value in the horizontal dimension).
A good solution would be to make the width and height signed, which would prevent this type of issue. Otherwise, you'll have to move such checks to all the places where the widths and heights are calculated. After all, you are not likely to have a window with a width or height beyond what can be represented with a signed int.

Gradient "miter" in OpenGL shows seams at the join

I am doing some really basic experiments with 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered to make trapezoids that effectively form miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: Code added below. Sorry, it's kind of verbose. I'm using two-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,   // blue
    0.0, 0.0, 1.0, 1.0,   // blue
    1.0, 1.0, 1.0, 1.0,   // white
    1.0, 1.0, 1.0, 1.0    // white
};
// Draw the trapezoidal frame areas. Each one is a fan of two triangles.
// Fan of 2 triangles = 4 verts = 8 values.
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
#quixoto: open your image in Paint program, click with fill tool somewhere in the seam, and you see it makes 90 degree angle line there... means theres only 1 color, no brighter anywhere in the "seam". its just an illusion.
True. While I'm not familiar with this part of the math under OpenGL, I believe this is an implicit result of how colors are interpolated between the triangle vertices: each triangle interpolates its vertex colors linearly (barycentric interpolation), and the two triangles meeting at the join have different gradient directions, so the brightness changes slope abruptly along their shared edge.
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // border width as a fraction of the quad, max would be 0.5

void main() {
    vec3 borderColor = vec3(0, 0, 1);
    vec3 backgroundColor = vec3(1, 1, 1);
    // x and y inset, 0..1; 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0, 0)) / insetWidth;
If I'm correct so far, then now for every pixel the value of insets.x has a value in the range [0..1]
determining how deep a given point is into the border horizontally,
and insets.y has the similar value for vertical depth.
The left vertical bar has insets.y == 0, the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge as you have above. Try to calculate it on a paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
    vec3 finalColor = mix(backgroundColor, borderColor, bias); // bias == 1 means 100% border
    gl_FragColor = vec4(finalColor, 1); // set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I had that in the past, and it's very sensitive to geometry. For example, if you draw them separately as triangles, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried was drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what is the root cause of this effect, however :(