Texel data in case of GL_LUMINANCE when glTexImage2D is called - opengl

Usually a texel is an RGBA value. What data does a texel represent in the following code:
const int TEXELS_W = 2, TEXELS_H = 2;
GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(
    GL_TEXTURE_2D,
    0, // mipmap reduction level
    GL_LUMINANCE,
    TEXELS_W,
    TEXELS_H,
    0, // border (must be 0)
    GL_LUMINANCE,
    GL_UNSIGNED_BYTE,
    texels);

GLubyte texels[] = {
100, 200, 0, 0,
200, 250, 0, 0
};
OpenGL will only use 4 of these values. Because GL_UNPACK_ALIGNMENT defaults to 4, OpenGL expects each row of pixel data to start on a 4-byte boundary. So the two 0's in each row are just padding, put there instead of changing the alignment.
So OpenGL will read 100, 200 as the first row, then skip to the next 4-byte boundary and read 200, 250 as the second row.
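If the padding bytes are unwanted, the alignment can be changed instead. A minimal sketch (not from the original post) of uploading the same four luminance values tightly packed:
// Variant of the code above: the same 2x2 luminance texture,
// but without padding bytes in the source array.
const int TEXELS_W = 2, TEXELS_H = 2;
GLubyte texels[] = {
    100, 200, // row 0
    200, 250  // row 1
};
glBindTexture(GL_TEXTURE_2D, textureId);
// Rows are now only 2 bytes wide, so tell OpenGL not to expect 4-byte row alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, TEXELS_W, TEXELS_H, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, texels);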

GL_LUMINANCE:
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).

Related

How does data get laid out in an RGBA WebGL texture?

I'm trying to pass a list of integers to the fragment shader and need random access to any of its positions. I can't use uniforms since index must be a constant, so I'm using the usual technique of passing the data through a texture.
Things seem to work, but calling texture2D to obtain specific pixels is not behaving as I'd expect.
My data looks like this:
this.textureData = new Uint8Array([
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
I then copy that over through a texture:
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texImage2D(
    this.gl.TEXTURE_2D,
    0,
    this.gl.RGBA,
    4, // width: using 4 since it's 4 bytes per pixel
    2, // height
    0,
    this.gl.RGBA,
    this.gl.UNSIGNED_BYTE,
    this.textureData);
So this texture is 4x2 pixels.
When I call texture2D(uTexture, vec2(0,0)); I get a vec4 pixel with the correct values (0,0,0,10).
However, when I call with locations such as (1,0), (2,0), (3,0), (4,0), etc they all return a pixel with (0,0,0,30).
Same for the second row. If I call with (0,1) I get the first pixel of the second row.
Any number greater than 1 for the X coordinate returns the last pixel of the second row.
I'd expect the coordinates to be:
this.textureData = new Uint8Array([
// (0,0) (1,0) (2,0) (3,0)
0, 0, 0, 10, 0, 0, 0, 20, 0, 0, 0, 30, 0, 0, 0, 40,
// (0,1) (1,1) (2,1) (3,1)
0, 0, 0, 50, 0, 0, 0, 60, 0, 0, 0, 70, 0, 0, 0, 80,
]);
What am I missing? How can I correctly access the pixels?
Thanks!
Texture coordinates are not integral; they are in the range [0.0, 1.0]. They map the vertices of the geometry to points in the texture image. The texture coordinates specify which part of the texture is placed on a specific part of the geometry and, together with the texture parameters (see gl.texParameteri), how the geometry is wrapped by the texture. In general, the lower-left point of the texture is addressed by the texture coordinate (0.0, 0.0) and the upper-right point is addressed by (1.0, 1.0).
Texture coordinates work the same in OpenGL, OpenGL ES and WebGL. See How do opengl texture coordinates work?
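So to read the texel at integer position (x, y) of a W x H texture, the indices have to be converted to normalized coordinates that hit the center of that texel. A sketch of the conversion (written here in C-style code; the same formula applies in GLSL or JavaScript, and the variable names are illustrative):
// Convert integer texel indices (x, y) to normalized texture coordinates.
float u = (x + 0.5f) / float(textureWidth);  // e.g. texel 1 of 4 -> 0.375
float v = (y + 0.5f) / float(textureHeight); // e.g. row 0 of 2   -> 0.25
// Sampling with texture2D(uTexture, vec2(u, v)) then returns exactly that texel,
// provided the filters are NEAREST and no mipmaps are involved.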

glTexSubImage3D not updating texture

I am trying to update only a part of one texture in a large texture array with 16 4096*4096 textures using glTexSubImage3D. However, I can't get it to work. The call does not throw any errors. (While setting width to more than 4096 does).
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, // target
                0,                   // level
                0,                   // x offset
                0,                   // y offset
                0,                   // z offset
                TEXTURE_DIM,         // width
                TEXTURE_DIM,         // height
                0,                   // depth
                GL_RGBA,             // format
                GL_UNSIGNED_BYTE,    // type
                textures[1]);        // zeroed memory
This is strange because when I replace this call with glTexImage3D, the texture is updated:
glTexImage3D(GL_TEXTURE_2D_ARRAY, // target
             0,                   // level
             GL_RGBA8,            // internal format
             TEXTURE_DIM,         // width
             TEXTURE_DIM,         // height
             1,                   // the number of layers
             0,                   // 0 required
             GL_RGBA,             // format
             GL_UNSIGNED_BYTE,    // type
             textures[1]);        // zeroed memory
Am I missing some additional steps that glTexSubImage3D needs? What might be the problem?
Thanks for any pointers
The depth parameter has to be 1 in your case. Note that the size of the region which is updated is width * height * depth. If depth is 0, then nothing is changed at all.
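For example, a sketch of the corrected call, updating exactly one layer (layer 0) of the array texture:
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, // target
                0,                   // level
                0,                   // x offset
                0,                   // y offset
                0,                   // z offset: layer 0
                TEXTURE_DIM,         // width
                TEXTURE_DIM,         // height
                1,                   // depth: update exactly one layer
                GL_RGBA,             // format
                GL_UNSIGNED_BYTE,    // type
                textures[1]);        // zeroed memory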

OpenGL texture format, create image/texture data for OpenGL

Ok so I need to create my own texture/image data and then display it onto a quad in OpenGL. I have the quad working and I can display a TGA file onto it with my own texture loader and it maps to the quad perfectly.
But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel? What is the format of the texture array, how do I for example set pixel (100,100) to black?
This is how I would imagine it for a completely white image/texture:
#define SCREEN_WIDTH 1000
#define SCREEN_HEIGHT 1000
unsigned int* texdata = new unsigned int[SCREEN_HEIGHT * SCREEN_WIDTH * 3];
for(int i=0; i<SCREEN_HEIGHT * SCREEN_WIDTH * 3; i++)
    texdata[i] = 255;
GLuint t = 0;
glEnable(GL_TEXTURE_2D);
glGenTextures( 1, &t );
glBindTexture(GL_TEXTURE_2D, t);
// Set parameters to determine how the texture is resized
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MIN_FILTER , GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MAG_FILTER , GL_LINEAR );
// Set parameters to determine how the texture wraps at edges
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_S , GL_REPEAT );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_T , GL_REPEAT );
// Upload the texture data to the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0,
GL_RGB, GL_UNSIGNED_BYTE, texdata);
glGenerateMipmap(GL_TEXTURE_2D);
EDIT: The answers below are correct, but I also found that this doesn't work with the normal ints I used; it works fine with uint8_t. I assume it's because of the GL_RGB together with the GL_UNSIGNED_BYTE type (which is only 8 bits, while a normal int is not 8 bits) that I use when I upload to the GPU.
But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel?
std::vector< unsigned char > image( 1000 * 1000 * 3 /* bytes per pixel */ );
What is the format of the texture array
Red byte, then green byte, then blue byte. Repeat.
how do I for example set pixel (100,100) to black?
unsigned int width = 1000;
unsigned int x = 100;
unsigned int y = 100;
unsigned int location = ( x + ( y * width ) ) * 3;
image[ location + 0 ] = 0; // R
image[ location + 1 ] = 0; // G
image[ location + 2 ] = 0; // B
Upload via:
// the rows in the image array don't have any padding
// so set GL_UNPACK_ALIGNMENT to 1 (instead of the default of 4)
// https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glTexImage2D(
    GL_TEXTURE_2D, 0,
    GL_RGB, 1000, 1000, 0,
    GL_RGB, GL_UNSIGNED_BYTE, &image[0]
);
By default, OpenGL assumes that the start of each row of a texture is aligned to 4 bytes.
This texture is an RGB texture, which needs 24 bits or 3 bytes for each texel, and the rows of the texture are tightly packed.
That means the 4-byte alignment for the start of a row is not met (unless 3 times the width of the texture happens to be divisible by 4 without a remainder).
To deal with that, the alignment has to be changed to 1.
This means the GL_UNPACK_ALIGNMENT parameter has to be set before loading a tightly packed texture to the GPU (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Otherwise each row is read with an offset of 0-3 bytes, and the texture lookup produces a continuously skewed or tilted image.
Since you use the source format GL_RGB with type GL_UNSIGNED_BYTE, each pixel consists of 3 color channels (red, green and blue) and each color channel is stored in one byte in the range [0, 255].
If you want to set the pixel at (x, y) to the color R, G and B, then this is done like this:
texdata[(y*WIDTH+x)*3+0] = R;
texdata[(y*WIDTH+x)*3+1] = G;
texdata[(y*WIDTH+x)*3+2] = B;

OpenGL glTexSubImage Data Packing

When using OpenGL's glTexSubImage2D and glTexSubImage3D functions, with a sub image that does not equal the actual dimensions of the texture, should the data pointer contain data packed to match the actual texture dimensions or the dimensions of the sub image?
For example, if you had a simple 3x3 texture, and you wanted to upload only the center pixel, that would be a sub image with x offset 1, y offset 1, width 1, and height 1, and you would call...
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, data)
Should data look like { 255 } or like { 0, 0, 0, 0, 255, 0, 0, 0, 0 } ?
The size of the texture doesn't matter.
The size of the subregion updated does. Specifically, glTexSubImage2D(target, level, xoffset, yoffset, width, height, format, type, data) expects data to point to a rectangular image of size (width, height) of appropriate type and format. The way the data is unpacked from memory is governed by GL_UNPACK_ALIGNMENT, GL_UNPACK_ROW_LENGTH, and friends. See the OpenGL specification §8.4 Pixel Rectangles.
In your particular case data has to point to a single value like { 255 }.
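If the source data does live inside a larger client-side image, the unpack parameters mentioned above let OpenGL extract the sub-rectangle for you. A sketch under that assumption (desktop GL; the buffer and offsets are made up for illustration):
// Hypothetical: 'pixels' is a tightly packed 3x3 single-channel client image,
// and we want to upload only its center pixel into the texture at (1, 1).
GLubyte pixels[3 * 3] = { /* ... */ };

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows of the source are not 4-byte aligned
glPixelStorei(GL_UNPACK_ROW_LENGTH, 3);  // rows of the source image are 3 pixels wide
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 1); // skip 1 pixel from the left
glPixelStorei(GL_UNPACK_SKIP_ROWS, 1);   // skip 1 row from the bottom
glTexSubImage2D(GL_TEXTURE_2D, 0, 1, 1, 1, 1, GL_RED, GL_UNSIGNED_BYTE, pixels);

// Restore defaults so later uploads are not affected.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);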

Getting exact pixel from texture

I have a question about textures in OpenGL. I am trying to use them for GPGPU operations but I am stuck at the beginning. I have created a texture like this (a 4x4 int matrix).
OGLTexImageFloat dataTexImage = new OGLTexImageFloat(4, 4, 4);
dataTexImage.setPixel(0, 0, 0, 0);
dataTexImage.setPixel(0, 1, 0, 10);
dataTexImage.setPixel(0, 2, 0, 5);
dataTexImage.setPixel(0, 3, 0, 15);
dataTexImage.setPixel(1, 0, 0, 10);
dataTexImage.setPixel(1, 1, 0, 0);
dataTexImage.setPixel(1, 2, 0, 2);
dataTexImage.setPixel(1, 3, 0, 1000);
dataTexImage.setPixel(2, 0, 0, 5);
dataTexImage.setPixel(2, 1, 0, 2);
dataTexImage.setPixel(2, 2, 0, 0);
dataTexImage.setPixel(2, 3, 0, 2);
dataTexImage.setPixel(3, 0, 0, 15);
dataTexImage.setPixel(3, 1, 0, 1000);
dataTexImage.setPixel(3, 2, 0, 2);
dataTexImage.setPixel(3, 3, 0, 0);
texture = new OGLTexture2D(gl, dataTexImage);
Now I would like to add the value at matrix position [1,1] to the value of each pixel (matrix entry). Since this concerns every pixel, I should probably do it in the fragment shader. But I don't know how I can get an exact pixel from the texture (the [1,1] entry of the matrix). Can someone explain how to do this?
If you are trying to add a single constant value (i.e. a value from [1,1]) to the entire image (every pixel of the rendered image), then you should pass that constant value as a separate uniform value into your shader program.
Then in the fragment shader, add this constant value to the current pixel color. The current pixel color comes as an input vec4 from your vertex shader.
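A rough sketch of how that might look, written in plain C/C++ OpenGL for illustration (the names program, uAddend and vTexCoord are illustrative, not from the original code):
// Host side: read the value stored at matrix position [1,1] from your data
// and hand it to the linked shader program as a uniform.
float addend = /* the value you stored at [1,1] */ 0.0f;
glUseProgram(program);
glUniform1f(glGetUniformLocation(program, "uAddend"), addend);

// In the fragment shader the uniform can then be added to the sampled texel, e.g.:
//   uniform sampler2D uTexture;
//   uniform float uAddend;
//   varying vec2 vTexCoord;
//   void main() {
//       gl_FragColor = texture2D(uTexture, vTexCoord) + vec4(uAddend);
//   }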