I'm trying to get a texture to be rendered on top of another one, like in the image below:
However, only that image gets rendered properly. My other images get garbled and "twisted". If you look carefully, it's as if the rows were shifted:
In the above example, I used the very same cat picture in the background. Both this cat picture, and all other images I generate end up garbled, except that one special picture, for some reason. I have looked at EXIF data, and other than the fact that it doesn't use sRGB, it is in the exact same format as the others. It has an alpha channel and everything.
I believe it has something to do with pixel alignment, given how the rows are shifted, but I have tried literally every possible combination of alignment settings and nothing has worked so far. Here is my code:
int height, width = 512;
m_pSubImage = SOIL_load_image("sample.png", &width, &height, 0, SOIL_LOAD_RGBA);
glGenTextures(1, &m_textureObj);
glBindTexture(m_textureTarget, m_textureObj);
...
glActiveTexture(TextureUnit);
glBindTexture(m_textureTarget, m_textureObj);
glTexSubImage2D(GL_TEXTURE_2D, 0, 20, 10, 100, 100, GL_RGBA, GL_UNSIGNED_BYTE, m_pSubImage);
The code for loading the background image is similar, except that it uses this call instead of glTexSubImage2D:
glTexImage2D(m_textureTarget, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_pImage);
It appears that you aren't passing the width and height correctly to glTexSubImage2D. Note that what matters is the number of pixels actually stored per scanline of the source data, which is often not exactly the "logical" width of the image: with the default unpack alignment of 4, each row may be padded out.
The difference between the "logical" and "storage" width leaves a few padding pixels at the end of each scanline; these get interpreted as the leftmost pixels of the next scanline, and the error accumulates as you move down the image. That creates the slant effect you observe.
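A hedged sketch of one way to handle this, assuming m_pSubImage holds tightly packed RGBA pixels and the loaded image is at least 100x100: tell GL how the source rows are actually laid out before the upload.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       // the SOIL buffer has byte-tight rows
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);  // pixels per scanline of the loaded image
glTexSubImage2D(GL_TEXTURE_2D, 0, 20, 10, 100, 100, GL_RGBA, GL_UNSIGNED_BYTE, m_pSubImage);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);      // restore the default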
You don't appear to be checking for failures; a glGetError check (see the sketch after this list) will surface them. The following failure modes of glTexSubImage2D are especially relevant here:
GL_INVALID_VALUE is generated if xoffset < 0, xoffset + width > w, yoffset < 0, yoffset + height > h, where w is the width and h is the height of the texture image being modified.
GL_INVALID_VALUE is generated if width or height is less than 0.
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation whose internalformat matches the format of glTexSubImage2D.
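A minimal sketch of such a check right after the upload (not present in the original code):
GLenum err = glGetError();
if (err == GL_INVALID_VALUE) {
    // offsets plus the sub-image size exceed the texture, or width/height are negative
} else if (err == GL_INVALID_OPERATION) {
    // no compatible storage was allocated with glTexImage2D/glCopyTexImage2D first
}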
I have created a texture and filled it with ones:
size_t size = width * height * 4;
float *pixels = new float[size];
for (size_t i = 0; i < size; ++i) {
    pixels[i] = 1.0f;
}
glTextureStorage2D(texture_id, 1, GL_RGBA16F, width, height);
glTextureSubImage2D(texture_id, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
I use linear filtering (GL_LINEAR) and clamp to border.
But when I draw the image:
color = texture(atlas, uv);
the last row looks like it has alpha values of less than 1. If in the shader I set the alpha to 1:
color.a = 1.0f;
it draws it correctly. What could be the reason for this?
The problem comes from the combination of GL_LINEAR and GL_CLAMP_TO_BORDER:
Clamp to border means that every texture coordinate outside of [0, 1] returns the border color. This color can be set with glTexParameterfv(..., GL_TEXTURE_BORDER_COLOR, ...) and defaults to (0, 0, 0, 0), i.e. transparent black.
Linear filtering takes into account the texels adjacent to the sampling location (unless sampling happens exactly at texel centers1), and will thus also read border texels, which here are transparent black; that is why the last row appears with alpha less than 1.
If you don't want this behavior, the simplest solution would be to use GL_CLAMP_TO_EDGE instead, which repeats the last row/column of texels out to infinity. The different wrapping modes are explained very well at open.gl.
1) Sampling most probably does not happen exactly at texel centers, as explained in this answer.
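For completeness, a small sketch of both fixes using the DSA-style calls from the question (texture_id is the question's texture handle; the border color values are only an example):
// Simplest fix: clamp to edge instead of to the border
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Or keep GL_CLAMP_TO_BORDER but give the border an opaque color
const GLfloat border[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glTextureParameterfv(texture_id, GL_TEXTURE_BORDER_COLOR, border);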
When using OpenGL's glTexSubImage2D and glTexSubImage3D functions, with a sub image that does not equal the actual dimensions of the texture, should the data pointer contain data packed to match the actual texture dimensions or the dimensions of the sub image?
For example, if you had a simple 3x3 texture, and you wanted to upload only the center pixel, that would be a sub image with x offset 1, y offset 1, width 1, and height 1, and you would call...
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, data)
Should data look like { 255 } or like { 0, 0, 0, 0, 255, 0, 0, 0, 0 } ?
The size of the texture doesn't matter.
The size of the subregion updated does. Specifically, glTexSubImage2D(target, level, xoffset, yoffset, width, height, format, type, data) expects data to point to a rectangular image of size (width, height) of the appropriate type and format. The way the data is unpacked from memory is governed by GL_UNPACK_ALIGNMENT, GL_UNPACK_ROW_LENGTH, and friends. See the OpenGL specification, §8.4 Pixel Rectangles.
In your particular case data has to point to a single value like { 255 }.
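For illustration, a hedged sketch of the other direction: if you hypothetically kept the full 3x3 GL_RED array on the client side, the unpack parameters mentioned above could tell GL to pull out only the center texel (the values here apply to this toy example only):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // rows of the 3x3 GL_RED buffer are 3 bytes each
glPixelStorei(GL_UNPACK_ROW_LENGTH, 3);   // client rows are 3 pixels wide
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 1);  // skip one column...
glPixelStorei(GL_UNPACK_SKIP_ROWS, 1);    // ...and one row to reach the center texel
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, data);
// restore the defaults afterwards
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);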
When I resize my window, I need to resize the textures that are attached to my framebuffer. I tried calling glTexStorage2D again, with different size parameters. However, that does not work.
How can I resize the textures attached to my framebuffer? (Including the depth attachment)
EDIT
Code I tried:
glBindTexture(m_target, m_name);
glTexStorage2D(m_target, 1, m_format, m_width, m_height);
glBindTexture(m_target, 0);
where m_name, m_target and m_format are saved from the original texture and m_width and m_height are the new dimensions.
EDIT2
Please tell me why this has been downvoted so I can fix the question.
EDIT3
Here, someone else had the same problem.
I found that the texture was being rendered correctly to the FBO, but that it was being displayed at the wrong size. It was as if the first time the texture was sent to the default framebuffer the texture size was set permanently, and then when a resized texture was sent it was being treated as if it was the original size. For example, if the first texture was 100x100 and the second texture was 50x50 then the entire texture would be displayed in the bottom left quarter of the screen. Conversely, if the original texture was 50x50 and the new texture 100x100 then the result would be the bottom left quarter of the texture being displayed over the whole screen.
However, he uses a shader to fix this. That's not how I want to do this. There has to be another solution, right?
If you were using glTexImage2D (...) to allocate storage for your texture, it would be possible to re-allocate the storage for any image in the texture at any time without first deleting the texture.
However, you are not using glTexImage2D (...), you are using glTexStorage2D (...). This creates an immutable texture object, whose storage requirements are set once and can never be changed again. Any calls to glTexImage2D (...) or glTexStorage2D (...) after you allocate storage initially will generate GL_INVALID_OPERATION and do nothing else.
If you want to create a texture whose size can be changed at any time, do not use glTexStorage2D (...). Instead, pass some dummy (but compatible) values for the data type and format to glTexImage2D (...).
For instance, if you want to allocate a texture with 1 LOD that is m_width x m_height:
glTexImage2D (m_target, 0, m_format, m_width, m_height, 0, GL_RED, GL_FLOAT, NULL);
If m_width or m_height change later on, you can re-allocate storage the same way:
glTexImage2D (m_target, 0, m_format, m_width, m_height, 0, GL_RED, GL_FLOAT, NULL);
This is a very different situation than if you use glTexStorage2D (...). That will prevent you from re-allocating storage, and will simply create a GL_INVALID_OPERATION error.
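If you do want to keep immutable storage, a possible alternative (not part of this answer's suggestion; m_fbo and the attachment point are assumed names for illustration) is to delete the texture, create a new one at the new size, and re-attach it to the framebuffer:
// Sketch only: re-create the immutable texture at the new size and re-attach it
glDeleteTextures(1, &m_name);
glGenTextures(1, &m_name);
glBindTexture(m_target, m_name);
glTexStorage2D(m_target, 1, m_format, m_width, m_height);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_target, m_name, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(m_target, 0);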
You should review the manual page for glTexStorage2D (...); it states the following:
Description
glTexStorage2D specifies the storage requirements for all levels of a two-dimensional texture or one-dimensional texture array simultaneously. Once a texture is specified with this command, the format and dimensions of all levels become immutable unless it is a proxy texture. The contents of the image may still be modified, however, its storage requirements may not change. Such a texture is referred to as an immutable-format texture.
The behavior of glTexStorage2D depends on the target parameter.
When target is GL_TEXTURE_2D, GL_PROXY_TEXTURE_2D, GL_TEXTURE_RECTANGLE, GL_PROXY_TEXTURE_RECTANGLE or GL_PROXY_TEXTURE_CUBE_MAP, calling glTexStorage2D is equivalent, assuming no errors are generated, to executing the following pseudo-code:
for (i = 0; i < levels; i++) {
    glTexImage2D(target, i, internalformat, width, height, 0, format, type, NULL);
    width = max(1, (width / 2));
    height = max(1, (height / 2));
}
When target is GL_TEXTURE_CUBE_MAP, glTexStorage2D is equivalent to:
for (i = 0; i < levels; i++) {
    for (face in (+X, -X, +Y, -Y, +Z, -Z)) {
        glTexImage2D(face, i, internalformat, width, height, 0, format, type, NULL);
    }
    width = max(1, (width / 2));
    height = max(1, (height / 2));
}
When target is GL_TEXTURE_1D_ARRAY or GL_PROXY_TEXTURE_1D_ARRAY, glTexStorage2D is equivalent to:
for (i = 0; i < levels; i++) {
    glTexImage2D(target, i, internalformat, width, height, 0, format, type, NULL);
    width = max(1, (width / 2));
}
Since no texture data is actually provided, the values used in the pseudo-code for format and type are irrelevant and may be considered to be any values that are legal for the chosen internalformat enumerant. [...] Upon success, the value of GL_TEXTURE_IMMUTABLE_FORMAT becomes GL_TRUE. The value of GL_TEXTURE_IMMUTABLE_FORMAT may be discovered by calling glGetTexParameter with pname set to GL_TEXTURE_IMMUTABLE_FORMAT. No further changes to the dimensions or format of the texture object may be made. Using any command that might alter the dimensions or format of the texture object (such as glTexImage2D or another call to glTexStorage2D) will result in the generation of a GL_INVALID_OPERATION error, even if it would not, in fact, alter the dimensions or format of the object.
So my glReadPixels call:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
GLfloat lebuf[128 * 128 * 4];
glReadPixels(0, 0, 128, 128, GL_RGBA, GL_FLOAT, lebuf);
just puts 1.0 values in the lebuf array. This is just after finishing drawing the page, and the result is a "white" image.
Checking the GL errors indicates that there's nothing wrong.
What could have possibly gone wrong?
Make sure glReadBuffer(GL_FRONT) is set before glReadPixels. If it is not, you could be reading from a different buffer, such as the back buffer when double buffering.
And of course, ensure that your capture area - 128x128 - is not all white.
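A minimal sketch of that suggestion, assuming a double-buffered default framebuffer and that the frame has already been presented (otherwise read GL_BACK before swapping):
glReadBuffer(GL_FRONT);               // select the buffer that actually holds the rendered frame
glPixelStorei(GL_PACK_ALIGNMENT, 1);
GLfloat lebuf[128 * 128 * 4];
glReadPixels(0, 0, 128, 128, GL_RGBA, GL_FLOAT, lebuf);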
I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links for what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was a hack to do the matrix multiplication through the draw calls: GL_LUMINANCE uses the single value for all three components, so if you follow the components through the drawing, you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway, and based on a similar-ish example, I thought I might need to do the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your three glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However :
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and outputs the result (see the sketch after this list).
Bind the computed color matrix as a uniform.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
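A minimal sketch of such a fragment shader (GLSL 1.30, which ships with OpenGL 3.0; the uniform and variable names are made up for this example, and the brightness bias is assumed to be folded into the fourth column of the 4x4 matrix):
#version 130

uniform sampler2D image;     // texture created once from imageData
uniform mat4 colorMatrix;    // hue/saturation matrix, brightness bias in the 4th column

in vec2 uv;                  // texture coordinate passed from the vertex shader
out vec4 fragColor;

void main() {
    vec4 rgba = texture(image, uv);
    // multiply by the 4x4 matrix; the trailing 1.0 picks up the bias column
    fragColor = vec4((colorMatrix * vec4(rgba.rgb, 1.0)).rgb, rgba.a);
}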