OpenGL: reading red-component-only texture data with GetTextureImage

My problem is that I can't correctly read the values stored in a texture which has only a red component. My first implementation caused a buffer overflow. So I read the OpenGL reference and it says:
If the selected texture image does not contain four components, the following mappings are applied. Single-component textures are treated as RGBA buffers with red set to the single-component value, green set to 0, blue set to 0, and alpha set to 1. Two-component textures are treated as RGBA buffers with red set to the value of component zero, alpha set to the value of component one, and green and blue set to 0. Finally, three-component textures are treated as RGBA buffers with red set to component zero, green set to component one, blue set to component two, and alpha set to 1.
The first confusing thing is that the NVIDIA implementation packs the values tightly together. If I have four one-byte values, I only need four bytes of space, not 16.
So I read the OpenGL specification, and on page 236 in table 8.18 it told me the same, except that a two-component texture stores its second value not in the alpha channel but in the green channel, which also makes more sense to me. But which definition is correct?
It also says:
If format is a color format then the components are assigned
among R, G, B, and A according to table 8.18[...]
So I ask you: "What is a color format?" and "Is my texture data tightly packed if the format is not a color format?"
My texture is defined like this:
type: GL_UNSIGNED_BYTE
format: GL_RED
internalformat: GL_R8
Another thing is that when my texture has a size of 2x2 pixels, the first two values are saved in the first two bytes, but the other two values end up in the fifth and sixth bytes of my buffer. The two bytes in between are padding. So I queried the GL_PACK_ALIGNMENT state and it says four bytes. How can that be?
The glGetTexImage call:
std::vector<GLubyte> values(TEXTURERESOLUTION * TEXTURERESOLUTION);
GLvoid *data = &values[0];//values are being passed through a function which does that
glBindTexture(GL_TEXTURE_2D, TEXTUREINDEX);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, 0);

The first confusing thing is that the NVIDIA implementation packs the values tightly together.
That is exactly what should happen. The expansion to 2, 3 or 4 components is only relevant when you actually read back with GL_RG, GL_RGB or GL_RGBA formats (and the source texture has fewer components). If you just ask for GL_RED, you will also only get GL_RED.
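For illustration, a minimal sketch of the two read-back variants (assuming a 2D GL_R8 texture whose width is a multiple of 4, so the default pack alignment adds no padding; textureId, width and height are placeholder names):
std::vector<GLubyte> red(width * height);       // one byte per texel
std::vector<GLubyte> rgba(width * height * 4);  // four bytes per texel
glBindTexture(GL_TEXTURE_2D, textureId);
// GL_RED returns only the stored channel, tightly packed.
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, red.data());
// GL_RGBA expands each texel: R = stored value, G = 0, B = 0, A = 255.
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
glBindTexture(GL_TEXTURE_2D, 0);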
[...] except that a two-component texture stores its second value not in the alpha channel but in the green channel, which also makes more sense to me. But which definition is correct?
The correct definition is the one in the spec. The reference pages often have small inaccuracies or omissions, unfortunately. In this case, I think the reference is simply outdated. The description matches the old and now deprecated GL_LUMINANCE and GL_LUMINANCE_ALPHA formats for one and two channels, respectively, not the modern GL_RED and GL_RG ones.
So I ask you: "What is a color format?"
A color format is one for color textures, in contrast to non-color formats like GL_DEPTH_COMPONENT or GL_STENCIL_INDEX.
Concerning your problem with GL_PACK_ALIGNMENT: the GL behaves exactly as it is intended to behave. You have a 2x2 texture and a GL_PACK_ALIGNMENT of 4, which means the data will be padded at the end of each row so that the distance from one row to the next is a multiple of 4 bytes. So you get the first row tightly packed, 2 padding bytes, and finally the second row.
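If you want the 2x2 texture tightly packed into just four bytes, you can lower the pack alignment before the read-back. A minimal sketch, reusing the names from the question:
std::vector<GLubyte> values(TEXTURERESOLUTION * TEXTURERESOLUTION); // 4 bytes for a 2x2 GL_R8 texture
glBindTexture(GL_TEXTURE_2D, TEXTUREINDEX);
glPixelStorei(GL_PACK_ALIGNMENT, 1);  // rows are no longer padded to multiples of 4 bytes
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, values.data());
glPixelStorei(GL_PACK_ALIGNMENT, 4);  // restore the default if other code relies on it
glBindTexture(GL_TEXTURE_2D, 0);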

Related

How to get a floating-point color from GLSL

I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I am working on a project, one of whose steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: I mean that instead of reading integers, I would like to read floating-point numbers. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it in my C++ code? Is there a way of doing it?
I thought of calling the glReadPixels() function just after having rendered my scene into a buffer, setting the format argument to GL_RGBA and the data type of the pixel data to GL_FLOAT. But I have the feeling that the values associated with the pixels I read have already been converted to integers in the meantime, because the float numbers that I finally get correspond to the interval [0, 255] mapped to [0, 1], without any gain in precision. A closer look at the OpenGL specification strengthens this idea: I think there is indeed a conversion somewhere between rendering my scene and calling glReadPixels().
Do you have any idea how I can reach my objective?
The default GL_RGBA framebuffer stores pixel components as 8-bit integers, so the floats your fragment shader writes are quantized on storage. You should render into a floating-point color format, such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component.
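As a rough sketch (assuming a GL 3.x context; fbo, colorTex, width and height are placeholder names), you could render into a GL_RGBA32F attachment and read it back with GL_FLOAT:
GLuint fbo = 0, colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// ... render the scene into this framebuffer ...
std::vector<GLfloat> pixels(width * height * 4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data()); // unclamped floats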

OpenGL texture tilted

When I load a texture in OpenGL that has one (GL_ALPHA) or three (GL_RGB) components per pixel, the texture appears tilted. What makes this happen?
As an additional detail, the width/height ratio seems to matter. For example, an image of 1366x768 (683/384) appears tilted, while an image of 1920x1080 (16/9) is mapped correctly.
This is probably a padding/alignment issue.
GL, by default, expects each row of pixels to be padded to a multiple of 4 bytes. A 1366-pixel-wide texture with 1-byte or 3-byte pixels will not naturally be 4-byte aligned.
Possible fixes for this are:
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1); (see the sketch after this list)
Change the dimensions of your texture, such that each row is naturally a multiple of 4 bytes
Change the loading of your texture, so that the row padding is consistent with what GL expects
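A minimal sketch of the first option, assuming the image data is tightly packed RGB bytes (pixels and textureId are placeholder names):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows may start at any byte, not only at multiples of 4
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1366, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);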

Passing an 8-bit alpha-only texture to GLSL

How does one pass an 8-bit alpha-only texture to GLSL?
You don't say what OpenGL version you're working with. But really, since you're using GLSL, you shouldn't care whether the 8-bits-per-pixel data is in the alpha component or not. What you care about is that your texture data has only one channel, that it is 8 bits per pixel, and that it is accessible through a known component.
GL 3.x+ provides the GL_R8 image format. Before that, you could just use GL_INTENSITY8 (which was removed from core OpenGL 3.1). The difference is that GL_R8 puts the single channel only into the red component, so G and B will be 0 and A will be 1. The intensity format broadcasts the single channel into all four components, so R, G, B and A will each be the same value.
Your shader doesn't need to be changed. Just get the red component of the sampled value.
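For example, the upload could look roughly like this (a sketch, not taken from the question; tex, width, height and data are placeholder names):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // in case the width is not a multiple of 4
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// In GLSL, sample as usual and read only the red component, e.g. texture(sampler, uv).r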

Regarding channels in Depth textures

I have implemented a depth texture and am getting different outputs on two different drivers.
I am reading all channels of the texture in the fragment shader:
vec4 color = texelFetch(tk_diffuseMap, ivec3(tmp), i);
In this case I get a red image on driver A and a grey image on driver B. If I read only the red channel and replicate it to all four channels, I get a grey image on A as well, like this:
vec4 color = vec4(texelFetch(tk_diffuseMap, ivec3(tmp), i).x);
Which one is correct?
It's irrelevant which one is right, because you shouldn't be looking at the other three channels at all. It's a depth texture; it only has one channel, the first one, and that's the only one you should be touching. Even if OpenGL defined what the other values are, they would just be some default values that don't matter. You wanted the depth, so stop looking at the non-depth values.
However, if you want the spec answer, you should get the same thing you would from a GL_RED texture: 0 for green and blue, and 1 for alpha.

OpenGL color index in frag shader?

I have a large sprite library and I'd like to cut GPU memory requirements. Can I store textures on the GPU with only 1 byte per pixel and use that for an RGB color lookup in a fragment shader? I see conflicting reports on the use of GL_R8.
I'd say this really depends on whether your hardware supports that texture format or not. How about skipping the whole issue by using an A8R8G8B8 texture instead? It would just be packed, i.e. you use a bit mask (or the r/g/b/a members in GLSL) to read "sub-pixel" values. For example, the first pixel is stored in the alpha channel, the second pixel in the red channel, the third pixel in the green channel, and so on.
You could even use this to store up to 4 layers in a single image (cutting max texture width/height); picking just one shouldn't be an issue.
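A rough sketch of the packing idea (assuming an RGBA8 layout rather than the A8R8G8B8 ordering mentioned above; indices, packedTex, width and height are hypothetical names):
// Reinterpret four consecutive 1-byte indices as one RGBA8 texel,
// so a width-by-height index image becomes a (width/4)-by-height RGBA texture.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, packedTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width / 4, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, indices.data());
// In the fragment shader, select the channel with (pixelX % 4) and use that value
// as the coordinate into a small palette texture for the RGB lookup.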