When using OpenGL's glTexSubImage2D and glTexSubImage3D functions, with a sub image that does not equal the actual dimensions of the texture, should the data pointer contain data packed to match the actual texture dimensions or the dimensions of the sub image?
For example, if you had a simple 3x3 texture, and you wanted to upload only the center pixel, that would be a sub image with x offset 1, y offset 1, width 1, and height 1, and you would call...
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, data)
Should data look like { 255 } or like { 0, 0, 0, 0, 255, 0, 0, 0, 0 } ?
The size of the texture doesn't matter.
The size of the subregion updated does. Specifically, glTexSubImage2D(target, level, xoffset, yoffset, width, height, format, type, data) expects data to point to a rectangular image of size (width, height) of the appropriate type and format. The way the data is unpacked from memory is governed by GL_UNPACK_ALIGNMENT, GL_UNPACK_ROW_LENGTH, and friends. See the OpenGL specification, §8.4 Pixel Rectangles.
In your particular case data has to point to a single value like { 255 }.
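If you do want to keep your data in a buffer sized to match the full texture anyway, you can point OpenGL at the right part of it with the unpack parameters instead. A minimal sketch for the 3x3 / center-pixel example above (the fullImage name is just for illustration):
GLubyte fullImage[3 * 3] = {
    0, 0,   0,
    0, 255, 0,   // the center pixel we actually want to upload
    0, 0,   0
};
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
glPixelStorei(GL_UNPACK_ROW_LENGTH, 3);  // stride of the source buffer in pixels
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 1); // skip one column
glPixelStorei(GL_UNPACK_SKIP_ROWS, 1);   // skip one row
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, fullImage);
// restore the defaults so later uploads are unaffected
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);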
Related
I am trying to update only a part of one texture in a large texture array of 16 textures of 4096*4096 using glTexSubImage3D. However, I can't get it to work. The call does not generate any errors (while setting width to more than 4096 does).
glTexSubImage3D( GL_TEXTURE_2D_ARRAY, // target
0, // level
0, // x offset
0, // y offset
0, // z offset
TEXTURE_DIM, // width
TEXTURE_DIM, // height
0, // depth
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
textures[1]); // zeroed memory
This is strange because when I replace this call with glTexImage3D, the texture is updated:
glTexImage3D(GL_TEXTURE_2D_ARRAY, // target
0, // level
GL_RGBA8, // Internal format
TEXTURE_DIM, // width
TEXTURE_DIM, // height
1, // the number of layers
0, // 0 required
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
textures[1]); // zeroed memory
Am I missing some additional steps that glTexSubImage3D needs? What might be the problem?
Thanks for any pointers
The depth parameter has to be 1 in your case. Note that the size of the region which is updated is width * height * depth. If depth is 0, then nothing is changed at all.
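A minimal sketch of the corrected call, with the same arguments as in the question but depth set to 1:
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, // target
                0,                   // level
                0,                   // x offset
                0,                   // y offset
                0,                   // z offset (selects the layer to update)
                TEXTURE_DIM,         // width
                TEXTURE_DIM,         // height
                1,                   // depth: update exactly one layer
                GL_RGBA,             // format
                GL_UNSIGNED_BYTE,    // type
                textures[1]);        // zeroed memory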
I am trying to display a JPG texture using OpenGL, but I have some problems. This is the important part of my code:
unsigned char* data = stbi_load("resources/triangle.jpg", &width, &height, &nrChannels, 0);
if (data)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
}
The JPG file that I am trying to load can be downloaded here. It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
This is how the texture is displayed
It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
By default OpenGL assumes that the size of each row of an image is aligned to 4 bytes.
This is because the GL_UNPACK_ALIGNMENT parameter defaults to 4.
Since the image has 3 color channels (because it's a JPG) and is tightly packed, the size of a row may not be aligned to 4 bytes. Note that if the width of the image were 4, a row would be aligned to 4 bytes, because 4 * 3 = 12 bytes. But if the width were 5, it wouldn't be, because 5 * 3 = 15 bytes.
This causes the rows of the image to appear misplaced. Set GL_UNPACK_ALIGNMENT to 1 to solve your issue:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
Further note that you are assuming the image has 3 color channels, because of the GL_RGB format parameter passed to glTexImage2D. In this case that works, because of the JPG file format.
stbi_load returns the number of color channels contained in the image buffer (nrChannels).
Respect that value by using either GL_RGB or GL_RGBA for the format parameter, somewhat like this:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
nrChannels == 3 ? GL_RGB : GL_RGBA,
GL_UNSIGNED_BYTE, data);
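Putting both points together, a possible loading path might look like the sketch below; it sets the unpack alignment before the upload and picks the format from nrChannels (assuming the image has either 3 or 4 channels):
unsigned char* data = stbi_load("resources/triangle.jpg", &width, &height, &nrChannels, 0);
if (data)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                 // stbi_load returns tightly packed rows
    GLenum format = (nrChannels == 4) ? GL_RGBA : GL_RGB;  // assumes 3 or 4 channels
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, format, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
    stbi_image_free(data);
}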
Usually a texel is an RGBA value. What data does a texel represent in the following code:
const int TEXELS_W = 2, TEXELS_H = 2;
GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(
GL_TEXTURE_2D,
0, // mipmap reduction level
GL_LUMINANCE,
TEXELS_W,
TEXELS_H,
0, // border (must be 0)
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
texels);
GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
OpenGL will only read 4 of these values. Because GL_UNPACK_ALIGNMENT defaults to 4, OpenGL expects each row of pixel data to be aligned to 4 bytes. So the two 0's in each row are just padding, because the person who wrote this code didn't know how to change the alignment.
So OpenGL will read 100, 200 as the first row, then skip to the next 4 byte boundary and read 200, 250 as the second row.
GL_LUMINANCE:
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).
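For comparison, a sketch of the same upload without the padding bytes; the alignment is dropped to 1 so each two-byte row can be read back-to-back:
GLubyte texels[] = {
    100, 200,   // first row
    200, 250    // second row
};
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are no longer padded to 4 bytes
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, TEXELS_W, TEXELS_H, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, texels);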
I have a sprite_sheet (example sheet):
I loaded it as vector data and I need to make a new texture from a specified area, like this:
const std::vector <char> image_data = LoadPNG("sprite_sheet.png"); // This works.
glMakeTexture([0, 0], [50, 50], configs, image_data) //to display only one sqm.
But I don't know how, because most GL functions only work on the full area, like glTexImage2D.
// Only have width and height, the bottomLeft coords is 0, 0 -- I want to change this.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &(_sprite_sheet.data[0]));
Is there a way to do that without loading the full sprite_sheet as a texture?
Note: I'm using picoPNG to decode the PNG, and I can load the PNG, but not make a texture from a specified area.
Because you show no code I assume that:
char *data; // data of 8-bit per sample RGBA image
int w, h; // size of the image
int sx, sy, sw, sh; // position and size of area you want to crop.
glTexImage2D does support regions-of-interest. You do it as follows:
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, sx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, sy);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // LodePNG tightly packs everything
glTexImage2D(GL_TEXTURE_2D, level, internalFormat, sw, sh, border, format, type, data);
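A slightly fuller sketch under those assumptions, including restoring the unpack state afterward so later uploads are unaffected (GL_RGBA8 is just one reasonable choice of internal format):
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);   // stride of the full sprite sheet in pixels
glPixelStorei(GL_UNPACK_SKIP_PIXELS, sx); // left edge of the region
glPixelStorei(GL_UNPACK_SKIP_ROWS, sy);   // first row of the region
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // the decoded PNG is tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// restore the defaults
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);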
I'm trying to get a texture to be rendered on top of another one, like in the image below:
However, only that image gets rendered properly. My other images get garbled and "twisted". If you look carefully, it's as if the rows were shifted:
In the above example, I used the very same cat picture in the background. Both this cat picture, and all other images I generate end up garbled, except that one special picture, for some reason. I have looked at EXIF data, and other than the fact that it doesn't use sRGB, it is in the exact same format as the others. It has an alpha channel and everything.
I believe it has something to do with pixel alignment, given how the rows are shifted, but I have tried literally every possible combination of alignment and nothing has worked so far. Here is my code:
int height, width = 512;
m_pSubImage = SOIL_load_image("sample.png", &width, &height, 0, SOIL_LOAD_RGBA);
glGenTextures(1, &m_textureObj);
glBindTexture(m_textureTarget, m_textureObj);
...
glActiveTexture(TextureUnit);
glBindTexture(m_textureTarget, m_textureObj);
glTexSubImage2D(GL_TEXTURE_2D, 0, 20, 10, 100, 100, GL_RGBA, GL_UNSIGNED_BYTE, m_pSubImage);
The code for loading the background image is similar, except that it uses this call instead of glTexSubImage2D:
glTexImage2D(m_textureTarget, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_pImage);
It appears that you aren't passing the width and height correctly to glTexSubImage2D. Note that you need the number of pixels stored per scanline, which is often not exactly the "logical" width of the image, but rounded up to a multiple of 4.
The difference between the "logical" and "storage" width will leave a few padding pixels left over on each scan line, which will be interpreted as the leftmost pixels of the next scanline, and accumulate as you move down the image. That creates the slant effect you observe.
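One way to express that, sticking to the variable names from the question, is to tell OpenGL the stride of the source image with GL_UNPACK_ROW_LENGTH and keep uploading the 100 x 100 region; this is a sketch assuming width holds the value filled in by SOIL_load_image:
glActiveTexture(TextureUnit);
glBindTexture(m_textureTarget, m_textureObj);
glPixelStorei(GL_UNPACK_ROW_LENGTH, width); // pixels per scanline of the source image
glTexSubImage2D(GL_TEXTURE_2D, 0, 20, 10, 100, 100,
                GL_RGBA, GL_UNSIGNED_BYTE, m_pSubImage);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);     // restore the default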
You don't appear to be checking for failures. The following failure modes of glTexSubImage2D are especially relevant here:
GL_INVALID_VALUE is generated if xoffset < 0, xoffset + width > w, yoffset < 0, yoffset + height > h, where w is the width and h is the height of the texture image being modified.
GL_INVALID_VALUE is generated if width or height is less than 0.
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation whose internalformat matches the format of glTexSubImage2D.
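A simple way to surface these failures while debugging is to drain glGetError right after the upload; a hedged sketch (fprintf assumes <cstdio> is included):
glTexSubImage2D(GL_TEXTURE_2D, 0, 20, 10, 100, 100,
                GL_RGBA, GL_UNSIGNED_BYTE, m_pSubImage);
for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
{
    fprintf(stderr, "glTexSubImage2D error: 0x%04X\n", err);
}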