I want to use a grayscale image generated in OpenCV in a GLSL shader.
Based on the question OpenCV image loading for OpenGL Texture, I've managed to come up with code that passes an RGB image to the shader:
cv::Mat image;
// ...acquire and process image somehow...
//create and bind a GL texture
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, // Type of texture
0, // Pyramid level (for mip-mapping) - 0 is the top level
GL_RGB, // Internal colour format to convert to
image.cols, image.rows, // texture size
0, // Border width in pixels (must be 0 in modern OpenGL)
GL_BGR, // Input image format (e.g. GL_RGB, GL_RGBA, GL_BGR)
GL_UNSIGNED_BYTE, // Image data type
image.ptr()); // The actual image data itself
glGenerateMipmap(GL_TEXTURE_2D);
and then in the fragment shader I just use this texture:
#version 330
in vec2 tCoord;
uniform sampler2D tex; // 'texture' would shadow the built-in texture() function
out vec4 color;
void main() {
    color = texture(tex, tCoord); // texture2D() is deprecated in GLSL 330
}
and it all works great.
But now I want to do some grayscale processing on that image, starting with cv::cvtColor(image, image, CV_BGR2GRAY);, doing some more OpenCV stuff to it, and then passing the grayscale to the shaders.
I thought I should use GL_LUMINANCE as the colour format to convert to, and probably as the input image format as well, but all I'm getting is a black screen.
Can anyone please help me with it?
Input format: I'd use GL_RED, since the GL_LUMINANCE format has been deprecated.
Internal format: this depends on what you want to do in your shader, but you should always specify a sized internal format, e.g. GL_RGBA8, which gives you 8 bits per channel. With GL_RGBA8, though, the green and blue channels will be zero and alpha will be one anyway, since your input data only has a single channel, so you should probably use the GL_R8 format instead. You can also use texture swizzling:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
which will cause all channels to 'mirror' the red channel when you access the texture in the shader.
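Putting that together, a minimal sketch of the grayscale upload, assuming image is a single-channel CV_8UC1 cv::Mat and the texture has already been generated and bound:

cv::cvtColor(image, image, CV_BGR2GRAY);

// Rows of a tightly packed single-channel Mat are not guaranteed to be
// 4-byte aligned, so relax the default unpack alignment first.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glTexImage2D(GL_TEXTURE_2D,
             0,            // mip level
             GL_R8,        // sized single-channel internal format
             image.cols, image.rows,
             0,            // border, must be 0
             GL_RED,       // the input data has one channel
             GL_UNSIGNED_BYTE,
             image.ptr());
glGenerateMipmap(GL_TEXTURE_2D);

// Make .g/.b/.a reads mirror the red channel in the shader.
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);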
Related
I would like the fragment shader to output a single byte value, rendered to a one-channel texture attached to a framebuffer object. I have only ever used the default fragment shader output of a vec4 for color. Is this possible? Say I initialize the texture bound to the FBO as a color attachment like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
If so, would I change the fragment shader's out variable from:
out vec4 color
to:
out int color?
(I am trying to render a height map)
Well, your render target is not an integer texture; it's a normalized integer format, which counts as a float. So the corresponding output variable from your shader should be a floating-point type.
But you can use a vec4 if you like; any components that are not part of the render target will be discarded.
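For illustration, a sketch of a fragment shader writing into such an R8 color attachment; the output name height is just a placeholder:

#version 330
out float height; // goes to the red channel of the GL_R8 attachment

void main() {
    // The value is clamped to [0, 1] and stored as a normalized byte,
    // so 0.5 ends up as roughly 128 in the texture.
    height = 0.5;
}

Equivalently, you could keep out vec4 and write meaningful data only into its red component; the rest is discarded.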
I am trying to create a texture to display. I have a w×h array in which each pixel is 1 byte. I have looked at Can I use a grayscale image with the OpenGL glTexImage2D function? but I am not sure how to implement it. It looks like GL_LUMINANCE is deprecated and I need to process the single channel independently, but I am not sure how I should approach this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
I tried changing GL_RGBA to other formats listed at https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml, such as GL_RED, but I still cannot get the image to display. Does anyone have any suggestions?
If you have a source texture with one color channel, then you can use the format GL_RED and the base internal format GL_RED:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
0, GL_RED, GL_UNSIGNED_BYTE, image_data);
Set the texture parameters GL_TEXTURE_SWIZZLE_G and GL_TEXTURE_SWIZZLE_B (see glTexParameteri) so that the green and blue components are read from the red channel, too:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
Note that GL_UNPACK_ALIGNMENT possibly has to be set to 1 when the image is loaded into the texture object:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, ...);
By default the parameter is 4, which means each row of the image is assumed to start on a 4-byte boundary. If the image data is tightly packed, the alignment has to be changed.
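For example, with the default alignment of 4, a 645-pixel-wide single-channel image is assumed to have 648 bytes per row, so every row after the first would be read from the wrong offset. A small sketch of the rule:

// Bytes per row that OpenGL assumes when unpacking the image:
int glRowStride(int width, int channels, int alignment) {
    int row = width * channels; // tightly packed row size in bytes
    return ((row + alignment - 1) / alignment) * alignment;
}
// glRowStride(645, 1, 4) == 648, but a tightly packed row is 645 bytes.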
If you use a shader program, the same can be achieved by swizzling in GLSL, e.g.:
vec3 color = texture(u_texture, uv).rrr;
I'm currently drawing an image from a pixel buffer object like this:
glClear(GL_COLOR_BUFFER_BIT);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, gl_pbo);
glDrawPixels(glDisplayWidth, glDisplayHeight, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
glutSwapBuffers();
glutReportErrors();
glutPostRedisplay();
This is my display loop. It produces a red-scale image from the pixel buffer object gl_pbo.
My question is: how do I change the color of the image to, say, grayscale?
Copy the PBO contents to a texture (of the same size) and then draw the texture to a full-screen quad using a fragment shader that outputs a grayscale color based on the texture's red channel.
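For example, a sketch of such a fragment shader, in the style of the 330 shaders above (the uniform and varying names are placeholders):

#version 330
in vec2 tCoord;
uniform sampler2D tex;
out vec4 color;

void main() {
    // Replicate the red channel so the output appears grayscale.
    float r = texture(tex, tCoord).r;
    color = vec4(r, r, r, 1.0);
}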
I'm trying to feed a cv::Mat a texture that is generated by a fragment shader, but nothing appears on the screen. I don't know where the problem is: in the driver or in glReadPixels? I loaded a TGA image into a fragment shader and textured a quad. I wanted to feed that texture to a cv::Mat, so I used glReadPixels, then generated a new texture and drew it on the quad, but nothing appears.
Kindly note that the following code is executed at each frame.
cv::Mat pixels(1024, 1024, CV_8UC3); // must be allocated before glReadPixels writes into it
glPixelStorei(GL_PACK_ALIGNMENT, (pixels.step & 3) ? 1 : 4);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glGenTextures(1, &textureID);
//glDeleteTextures(1, &textureID);
// Create the texture
glTexImage2D(GL_TEXTURE_2D, // Type of texture
0, // Pyramid level (for mip-mapping) - 0 is the top level
GL_RGB, // Internal colour format to convert to
1024, 1024, // texture size
0, // Border width in pixels (must be 0 in modern OpenGL)
GL_RGB, // Input image format (e.g. GL_RGB, GL_RGBA, GL_BGR)
GL_UNSIGNED_BYTE, // Image data type
pixels.data); // The actual image data itself
glActiveTexture ( textureID );
glBindTexture ( GL_TEXTURE_2D,textureID );
glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );
textureID looks like an incomplete texture.
Set GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
Or supply a complete set of mipmaps.
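That is, right after creating the texture, something along these lines:

glBindTexture(GL_TEXTURE_2D, textureID);
// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR; without
// mipmaps the texture is incomplete, and sampling it returns black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);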
I am still trying to read pixels from the fragment shader and I have some questions. I know that gl_FragColor is a vec4 holding RGBA, 4 channels. After that, I am using glReadPixels to read the FBO and write the result into a buffer:
GLubyte *pixels = new GLubyte[640*480*4];
glReadPixels(0, 0, 640,480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
This works fine but it really has a speed issue. Instead, I want to read just RGB and ignore the alpha channel. I tried:
GLubyte *pixels = new GLubyte[640*480*3];
glReadPixels(0, 0, 640,480, GL_RGB, GL_UNSIGNED_BYTE, pixels);
instead, but this didn't work. I guess it's because gl_FragColor returns 4 channels, and maybe I should do something before this? Actually, since my returned image (gl_FragColor) is grayscale, I did something like:
float gray = 0.5; // or some other value
gl_FragColor = vec4(gray, gray, gray, 1.0);
So is there a more efficient way to use glReadPixels than the 4-channel method above? Any suggestions? By the way, this is OpenGL ES 2.0 code.
The OpenGL ES 2.0 spec says that there are two valid forms of the call:
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
or
GLint format, type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
glReadPixels(x, y, w, h, format, type, pixels);
The possible combinations of format and type are given in a table in the spec: GL_RGBA with GL_UNSIGNED_BYTE is always supported, and the implementation decides which one additional format/type pair (queried as above) is available to you.
However, it's likely that if you create a rendering surface of an appropriate format, then that will be the format you'll obtain here. See if you can modify your code to obtain an RGB framebuffer (i.e. with 0 bits for the alpha channel). Or perhaps you might want to create an offscreen framebuffer object for that purpose?
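A sketch of that last idea: an offscreen FBO with an RGB-only color buffer (GL_RGB565 is one of the color-renderable formats ES 2.0 guarantees), after which you can ask the implementation what it will let you read:

GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

// ... render the grayscale frame into the FBO ...

GLint format, type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
// With an RGB565 buffer this may well come back as GL_RGB /
// GL_UNSIGNED_SHORT_5_6_5, skipping the alpha channel entirely.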