I am trying to create a texture to display. I have a w×h array in which each pixel is 1 byte. I have looked at Can I use a grayscale image with the OpenGL glTexImage2D function?, but I am not sure how to implement it. It looks like GL_LUMINANCE is deprecated and I need to process the single channel independently. I am not sure how I should approach this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
I tried changing GL_RGBA to other formats like GL_R (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml), but I still cannot get the image to display. Does anyone have any suggestions?
If you have a source texture with 1 color channel, then you can use the format GL_RED and the base internal format GL_RED:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
             0, GL_RED, GL_UNSIGNED_BYTE, image_data);
Set the texture parameters GL_TEXTURE_SWIZZLE_G and GL_TEXTURE_SWIZZLE_B (see glTexParameteri) so that the green and blue components are also read from the red color channel:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
Note that GL_UNPACK_ALIGNMENT possibly has to be set to 1 when the image is loaded into the texture object:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, ...);
By default the parameter is 4. This means each row of the image is assumed to be aligned to a size that is a multiple of 4 bytes. If the image data is tightly packed, then the alignment has to be changed.
If you use a shader program, the same can be achieved by swizzling in the shader, e.g.:
vec3 color = texture(u_texture, uv).rrr;
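Putting these pieces together, a minimal sketch of the whole upload (assuming image_data points to a tightly packed image_width × image_height single-byte buffer, and that a texture object is already bound):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                             // rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
             0, GL_RED, GL_UNSIGNED_BYTE, image_data);             // 1 byte per pixel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // make the texture complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);      // read green from red
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);      // read blue from red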
Related
I am trying to display a JPG texture using OpenGL, but I have some problems. This is the important part of my code:
unsigned char* data = stbi_load("resources/triangle.jpg", &width, &height, &nrChannels, 0);
if (data)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
The JPG file that I am trying to load can be downloaded here. It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
This is how the texture is displayed
It works with certain JPG files but not this one, so it is clearly something regarding the formatting - but what and why?
By default, OpenGL assumes that the start of each row of an image is aligned to 4 bytes.
This is because the GL_UNPACK_ALIGNMENT parameter by default is 4.
Since the image has 3 color channels (because it's a JPG) and is tightly packed, the size of a row of the image may not be aligned to 4 bytes. For example, if the width of the image were 4, a row would be aligned to 4 bytes, because 4 * 3 = 12 bytes. But if the width were 5, it wouldn't be, because 5 * 3 = 15 bytes.
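A minimal sketch of that arithmetic (rowSize, alignment, and stride are illustrative names, not GL state):
int alignment = 4;                                                   // the GL_UNPACK_ALIGNMENT default
int rowSize   = width * 3;                                           // a tightly packed RGB row
int stride    = ((rowSize + alignment - 1) / alignment) * alignment; // the row size OpenGL assumes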
When the assumed row size differs from the actual one, the rows of the image appear misplaced. Set GL_UNPACK_ALIGNMENT to 1 to solve your issue:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
Further note that you are assuming the image has 3 color channels, because of the GL_RGB format parameter in glTexImage2D. In this case that works, because of the JPG file format.
stbi_load returns the number of color channels contained in the image buffer (nrChannels).
Respect it by using either GL_RGB or GL_RGBA for the format parameter, like this:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
    nrChannels == 3 ? GL_RGB : GL_RGBA,
    GL_UNSIGNED_BYTE, data);
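A slightly fuller sketch of that idea, also setting a safe unpack alignment for the 3-channel case (one hedged variant, not the only way to do it):
GLenum format = (nrChannels == 3) ? GL_RGB : GL_RGBA;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // safe for tightly packed rows of any channel count
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);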
When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width * height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now, where I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height; d++)
{
    for (int i = 0; i < 4; i++)
    {
        colormap[offset++] = bitmap[d];
    }
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get:
Which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And get:
It has no transparency, only red, etc., which makes it hard to colorize later.
Instead of having to do what I feel is an unnecessary allocation on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Whenever anything reads from any of the four components of the texture, it will get the value defined by the red component of that texture.
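Applied to the font case above, a minimal sketch combining the single-channel upload with the swizzle (assuming the texture is already bound to GL_TEXTURE_2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // 1 byte per pixel, tightly packed rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
// No CPU-side colormap expansion needed; sampling now yields (r, r, r, r).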
I am using stb_image to load a 32-bit PNG file (RGBA) and I am creating an OpenGL texture with it.
It works fine for 24-bit PNG files (with no alpha channel), but when I use a 32-bit PNG file, something goes wrong.
This is what the texture should look like:
And this is what it looks like when rendered with OpenGL (the black parts are meant to be transparent, and are when I enable blending):
This is how I load the texture:
int w;
int h;
int comp;
unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb);
if (image == nullptr)
    throw std::string("Failed to load texture");
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
if (comp == 3)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
else if (comp == 4)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);
And these are the window parameters (using SDL):
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
What is happening?
Your actual bug is that you use comp to determine the format (GL_RGB/GL_RGBA) parameter to glTexImage2D. This happens because when you load an image using stbi_load, the value returned in comp always matches the source image, not the image data returned.
More specifically, your bug is that you use STBI_rgb, causing stbi_load to return 3-byte pixels, but then you load it with glTexImage2D as 4-byte pixels with GL_RGBA, because comp is 4 when you load a 32-bit image.
You must set the format in your call to glTexImage2D to GL_RGB if you use STBI_rgb, and to GL_RGBA if you use STBI_rgb_alpha.
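A minimal sketch of the corrected load, forcing 4 channels so the format always matches the returned data (STBI_rgb_alpha makes stbi_load return 4 bytes per pixel regardless of the source file):
int w, h, comp;
unsigned char* image = stbi_load(filename.c_str(), &w, &h, &comp, STBI_rgb_alpha);
if (image == nullptr)
    throw std::string("Failed to load texture");
// The buffer is now always tightly packed RGBA, so GL_RGBA is unconditionally correct.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
stbi_image_free(image);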
Bonus to other readers
Are you having a similar problem and the above still doesn't help? Then your image data might not have rows on the alignment OpenGL expects. Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1); before you call glTexImage2D. See https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout for more information.
Changing the STBI_rgb to STBI_rgb_alpha in the stbi_load function call fixed it.
Probably best not to specify RGB when it's RGBA :D
I'm just trying to feed a cv::Mat a texture that is generated by a fragment shader, but nothing appears on the screen. I don't know where the problem is: in the driver or in glReadPixels? I loaded a TGA image into a fragment shader and textured a quad. I wanted to feed that texture to a cv::Mat, so I used glReadPixels, then generated a new texture and drew it on the quad, but nothing appears.
Kindly note that the following code is executed at each frame.
cv::Mat pixels;
glPixelStorei(GL_PACK_ALIGNMENT, (pixels.step & 3) ? 1 : 4);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glGenTextures(1, &textureID);
//glDeleteTextures(1, &textureID);
// Create the texture
glTexImage2D(GL_TEXTURE_2D, // Type of texture
0, // Pyramid level (for mip-mapping) - 0 is the top level
GL_RGB, // Internal colour format to convert to
1024, // Image width i.e. 640 for Kinect in standard mode
1024, // Image height i.e. 480 for Kinect in standard mode
0, // Border width in pixels (can either be 1 or 0)
GL_RGB, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
GL_UNSIGNED_BYTE, // Image data type
pixels.data); // The actual image data itself
glActiveTexture ( textureID );
glBindTexture ( GL_TEXTURE_2D,textureID );
glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );
textureID looks like an incomplete texture.
Set GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
Or supply a complete set of mipmaps.
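A minimal sketch applying the fix to the posted loop (with two further assumptions beyond the answer above: the cv::Mat must own a 1024×1024 RGB buffer before glReadPixels writes into it, and glActiveTexture takes a texture unit like GL_TEXTURE0, not a texture name):
cv::Mat pixels(1024, 1024, CV_8UC3);          // allocate before glReadPixels writes into it
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glActiveTexture(GL_TEXTURE0);                 // a texture unit, not the texture name
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // complete without mipmaps
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 1024, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);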