When I load a texture in OpenGL that has one (GL_ALPHA) or three (GL_RGB) components per pixel, the texture appears tilted. What causes this?
As an additional detail, the width/height ratio seems to matter. For example, an image of 1366x768 (683/384) appears tilted, while an image of 1920x1080 (16/9) is mapped correctly.
This is probably a padding/alignment issue.
By default, GL expects each row of pixels in the client data to occupy a multiple of 4 bytes. A 1366-pixel-wide texture with 1-byte or 3-byte pixels will not naturally be 4-byte aligned.
Possible fixes for this are:
Tell GL how your texture data is packed, for example with glPixelStorei(GL_UNPACK_ALIGNMENT, 1); a sketch follows this list
Change the dimensions of your texture, such that the padding matches without changing anything
Change the loading of your texture, such that the padding is consistent with what GL expects
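For the first option, a minimal sketch of a tightly packed upload (the texture name, size and data pointer are placeholders, assuming a current GL context):

// Rows of a 1366-wide GL_RGB image are 1366 * 3 = 4098 bytes, not a multiple of 4,
// so tell GL that the client data has no row padding before uploading.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textureId);        // textureId: a previously generated texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,
             1366, 768, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels); // pixels: tightly packed 1366 * 768 * 3 bytes
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);          // optionally restore the default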
P.S. Yes, I already posted this question on Computer Graphics Stack Exchange, but I'm also posting it here in the hope that more people will see it.
Intro
I'm trying to render multi-channel images (more than 4 channels, for the purpose of feeding them to a Neural Network). Since OpenGL doesn't support this natively, I have multiple 4-channel render buffers, into each of which I render the corresponding group of channels.
For example, if I need a multi-channel image of size 512 x 512 x 16, in OpenGL I have 4 render buffers of size 512 x 512 x 4. The problem is that the Neural Network expects the data with strides 512 x 512 x 16, i.e. the 16 channel values of one pixel are followed by the 16 channel values of the next pixel. Currently I can efficiently read my 4 render buffers via 4 calls to glReadPixels, but that leaves the data with strides of 4 x 512 x 512 x 4. Manually reordering the data on the client side is not an option for me, as it's too slow.
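For reference, the read-back I have now looks roughly like this (a sketch assuming the four render buffers are color attachments of one FBO; fbo, buffers and the float format are placeholders):

// Read each 512 x 512 x 4 attachment into its own client buffer.
// The result is 4 separate blocks with strides 512 x 512 x 4,
// not the interleaved 512 x 512 x 16 layout the network expects.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
for (int i = 0; i < 4; ++i) {
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
    glReadPixels(0, 0, 512, 512, GL_RGBA, GL_FLOAT, buffers[i]);
}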
Main question
I've got an idea: render to a single 4-channel render buffer of size (512*4) x 512 x 4, because stride-wise it's equivalent to 512 x 512 x 16; we just treat every 4 consecutive pixels in a row as a single pixel of the 16-channel output image. Let's call it "interleaved rendering".
But this requires me to magically adjust my fragment shader so that every group of 4 consecutive fragments gets exactly the same interpolation of vertex attributes. Is there any way to do that?
This rough illustration of 1 render buffer holding a 1024 x 512 4-channel image is an example of how it should be rendered. With that I can extract the data with stride 512 x 512 x 8 in a single glReadPixels call.
EDIT: better pictures
What I have now (4 render buffers)
What I want to do natively in OpenGL (this image is done in Python offline)
But this requires me to magically adjust my fragment shader so that every group of 4 consecutive fragments gets exactly the same interpolation of vertex attributes.
No, it would require a bit more than that. You have to fundamentally change how rasterization works.
Rendering at 4x the width is rendering at 4x the width. That means stretching the resulting primitives, relative to a square area. But that's not the effect you want. You need the rasterizer to rasterize at the original resolution, then replicate the rasterization products.
That's not possible.
From the comments:
It just occurred to me that I can get a 512 x 512 x 2 image of texture coordinates from the vertex+fragment shaders, then stitch it with itself to make it 4 times wider (thus we get the same interpolation), and form the final image from that
This is a good idea. You'll need to render whatever interpolated values you need to the original size texture, similar to how deferred rendering works. So it may be more than just 2 values. You could just store the gl_FragCoord.xy values, and then use them to compute whatever you need, but it's probably easier to store the interpolated values directly.
I would suggest doing a texelFetch when reading the texture, as you can specify exact integer texel coordinates. The integer coordinates you need can be computed from gl_FragCoord as follows:
// gl_FragCoord.x is the wide target's pixel center (x + 0.5); dividing by 4 maps
// each group of 4 consecutive output pixels to the same source texel.
ivec2 texCoords = ivec2(int(gl_FragCoord.x * 0.25f), int(gl_FragCoord.y));
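On the client side, the two passes could be set up roughly like this (just a sketch: the GL_RG32F interpolant format, the names interpTex / interpFbo / wideFbo, and the full-screen second pass are my assumptions, not something stated above):

// Pass 1: render the interpolated vertex attributes (e.g. texture coordinates)
// into a 512 x 512 texture, similar to a deferred-rendering G-buffer.
GLuint interpTex, interpFbo;
glGenTextures(1, &interpTex);
glBindTexture(GL_TEXTURE_2D, interpTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, 512, 512, 0, GL_RG, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &interpFbo);
glBindFramebuffer(GL_FRAMEBUFFER, interpFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, interpTex, 0);
glViewport(0, 0, 512, 512);
// ... draw the scene with a shader that writes the interpolants ...

// Pass 2: draw a full-screen quad into the 2048 x 512 target; its fragment
// shader does a texelFetch on interpTex with the coordinates computed above
// and picks the channel group from int(gl_FragCoord.x) % 4.
glBindFramebuffer(GL_FRAMEBUFFER, wideFbo); // wideFbo: FBO with a 2048 x 512 RGBA attachment
glViewport(0, 0, 2048, 512);
// ... draw the full-screen quad with the second-pass shader ...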
I have the following situation:
A non-power-of-two (NPOT) texture, in RGB format.
I add some padding at the right side of my texture to make sure the texture scanlines are a multiple of 4 bytes.
I need mipmap generation.
glGenerateMipmap() is unreliable (I believe it is actually broken in Apple's driver for the Intel HD Graphics 3000, as it gives wrong results there but correct results on Linux with the same device) and slow (again, on the Apple driver for my chip; it is fast on Linux).
To address the last problem, I decided I want to do mipmap generation "manually" using framebuffers with a render to texture approach.
However, I'm stuck on how to deal with the padding that makes the scanlines multiples of four bytes while keeping mipmap sampling in the shader consistent. Without mipmaps, I use a uniform vec2 that I multiply the uv with to compensate for the few columns of padding at the right side of the texture. The problem is that at every mipmap level I have to add padding to make the scanlines 4-byte aligned, and this padding can be different at every level, which would require a different uv multiplier for every level. That is something I can't do in a shader, because I want to use automatic LOD (level of detail) selection when sampling.
I guess this question is equivalent to: "What does a successful glGenerateMipmap algorithm do in my scenario?" The documentation on this function is very short. Actually, I'm surprised the Linux driver does the job right in my complex scenario.
Easy solutions that are not acceptable in my scenario (because of the increase in memory usage):
Use RGBA format such that the scanlines are always 4-bytes-aligned.
Use POT textures such that the scanlines are always 4-bytes-aligned.
I add some padding at the right side of my texture to make sure the texture scanlines are a multiple of 4 bytes.
Well, stop doing that. The article you link to doesn't say "always make your textures aligned to 4 bytes". It says to make sure that the byte alignment of your uploaded texture data matches the pixel unpack alignment you give to OpenGL.
Just make your texture sizes what you need them to be, then upload the data with the proper alignment. Texture widths do not need to be multiples of 4.
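A sketch of what that can look like for an NPOT RGB texture (the names and sizes are placeholders; whether you then call glGenerateMipmap or your own FBO-based downsampling depends on how much you trust the driver):

// Upload a width x height RGB image with no row padding at all.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // client rows are exactly width * 3 bytes
glBindTexture(GL_TEXTURE_2D, tex);       // tex: a previously generated texture name
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,
             width, height, 0,           // width does not have to be a multiple of 4
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);         // or build the levels manually via render-to-texture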
My problem is that I can't correctly read back the values stored in a texture that has only a red component. My first implementation caused a buffer overflow. So I read the OpenGL reference and it says:
If the selected texture image does not contain four components, the following mappings are applied. Single-component textures are treated as RGBA buffers with red set to the single-component value, green set to 0, blue set to 0, and alpha set to 1. Two-component textures are treated as RGBA buffers with red set to the value of component zero, alpha set to the value of component one, and green and blue set to 0. Finally, three-component textures are treated as RGBA buffers with red set to component zero, green set to component one, blue set to component two, and alpha set to 1.
The first confusing thing is that the NVIDIA implementation packs the values tightly together. If I have four one-byte values, I only need four bytes of space, not 16.
So I read the OpenGL specification, and on page 236, in table 8.18, it told me the same, except that a two-component texture stores its second value not in the alpha channel but in the green channel, which also makes more sense to me. But which definition is correct?
It also says:
If format is a color format then the components are assigned
among R, G, B, and A according to table 8.18[...]
So I ask you: "What is a color format?" and "Is my texture data tightly packed if the format is not a color format?"
My texture is defined like this:
type: GL_UNSIGNED_BYTE
format: GL_RED
internalformat: GL_R8
Another thing is that when my texture has a size of two by two pixels, the first two values are saved in the first two bytes, but the other two values end up in the fifth and sixth bytes of my buffer. The two bytes in between are padding. So I queried the GL_PACK_ALIGNMENT state and it says four bytes. How can that be?
The GetTexImage call:
std::vector<GLubyte> values(TEXTURERESOLUTION * TEXTURERESOLUTION);
GLvoid *data = &values[0];//values are being passed through a function which does that
glBindTexture(GL_TEXTURE_2D, TEXTUREINDEX);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, 0);
The first confusing thing is that the NVIDIA implementation packs the values tightly together.
That is exactly what is supposed to happen. The expansion to 2, 3 or 4 components is only relevant when you actually read back with the GL_RG, GL_RGB or GL_RGBA formats (and the source texture has fewer components). If you just ask for GL_RED, you will also only get GL_RED.
[...] except that a two-component texture stores its second value not in the alpha channel but in the green channel, which also makes more sense to me. But which definition is correct?
The correct definition is the one in the spec. The reference pages often have small inaccuracies or omissions, unfortunately. In this case, I think the reference is just outdated. The description matches the old and now deprecated GL_LUMINANCE and GL_LUMINANCE_ALPHA formats for one and two channels, respectively, not the modern GL_RED and GL_RG ones.
So I ask you: "What is a color format?"
A color format is one for color textures, in contrast to non-color formats like GL_DEPTH_COMPONENT or GL_STENCIL_INDEX.
Concerning your problem with GL_PACK_ALIGNMENT: the GL behaves exactly as it is intended to. You have a 2x2 texture and a GL_PACK_ALIGNMENT of 4, which means each row will be padded so that the distance from one row to the next is a multiple of 4. So you get the first row tightly packed, 2 padding bytes, and finally the second row.
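If you want the read-back tightly packed instead, set the pack alignment to 1 before the call (a small sketch based on the code above):

std::vector<GLubyte> values(TEXTURERESOLUTION * TEXTURERESOLUTION);
// With GL_PACK_ALIGNMENT = 1, a 2x2 GL_RED / GL_UNSIGNED_BYTE read-back
// occupies exactly 4 bytes, with no padding between the rows.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, TEXTUREINDEX);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, values.data());
glBindTexture(GL_TEXTURE_2D, 0);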
I have a texture atlas; only a few pieces of it change, and that happens only occasionally, not every frame.
My PBO strategy (a sketch follows the list):
create a PBO of the size of the atlas
use glMapBufferRange to map a memory area inside the PBO large enough to accommodate all pixels of the tiles that changed
use glUnmapBuffer
use glTexSubImage2D for each tile (passing the correct byte offset into the currently bound PBO instead of a client pointer)
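A rough sketch of those steps for a single changed tile (the names, the tile coordinates and the GL_RGBA8 / GL_UNSIGNED_BYTE format are assumptions):

// 1) + 2) Map a range of the PBO and copy the changed tile's pixels into it.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
void* dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, tileBytes,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
memcpy(dst, tilePixels, tileBytes);
// 3) Unmap so GL may read the data.
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
// 4) Upload from the PBO into the atlas; while a PBO is bound, the last
//    argument is a byte offset into the PBO, not a client pointer.
glBindTexture(GL_TEXTURE_2D, atlasTex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                tileX, tileY, tileW, tileH,
                GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);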
Question number 1)
Is there any direct mapping between PBO and Texture Object?
So assume this is the atlas.
Can I happily upload the following data into the PBO?
Question number 2)
Assume
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
What about the alignment of the subimages? Should I assume that each image is aligned to the ATLAS (1), or should I assume that each image is aligned to a standalone texture of the same size (2)?
Assuming 1:
In the picture above you can see red vertical lines (4-byte columns) that refer to the texture ATLAS; the orange area would be the padding bytes I have to add to the subimage before each row of pixels.
Assuming 2:
There will be no padding before each pixel row, but there will be some padding after each pixel row (as in ordinary RGB8 textures whose size is not a power of two).
Question number 3)
How should I update the mip levels? Do I have to create and update all mip levels manually, or is there an efficient way to let OpenGL update the mipmap levels on its own?
I have a large sprite library and I'd like to cut GPU memory requirements. Can I store textures on the GPU with only 1 byte per pixel and use that for an RGB color look-up in a fragment shader? I see conflicting reports on the use of GL_R8.
I'd say this really depends on whether your hardware supports that texture format or not. How about sidestepping the whole issue by using an A8R8G8B8 texture instead? The data would just be packed, i.e. you use a bit mask (or the r/g/b/a members in GLSL) to read the "sub-pixel" values. For example, the first pixel is stored in the alpha channel, the second pixel in the red channel, the third pixel in the green channel, and so on.
You could even use this to store up to 4 layers in a single image (cutting max texture width/height); picking just one shouldn't be an issue.