OpenGL single channel visibility to multiple channels - c++

When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width * height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now, where I just multiply the array size by 4 and copy the data into it 4 times:
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height; d++)
{
    for (int i = 0; i < 4; i++)
    {
        colormap[offset++] = bitmap[d];
    }
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And I get the result I want: the glyphs render with transparency.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And I get a result with no transparency, only red, etc., which makes it hard to colorize and so on later.
Instead of doing what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?

In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Now, when anything reads from any of the four components of the texture, it will get the value stored in the red channel of that texture.
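Putting the two together for the font case in the question, a minimal sketch (assuming gltype is GL_TEXTURE_2D and bitmap is the single-channel buffer from above) could look like this:
// Upload the single-channel bitmap as-is; no CPU-side duplication needed.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, 1 byte per pixel
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, bitmap);
// Broadcast the red channel to all four components at sampling time,
// so an existing RGBA shader keeps working unchanged.
GLint swizzleMask[] = { GL_RED, GL_RED, GL_RED, GL_RED };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Note that GL_TEXTURE_SWIZZLE_RGBA requires OpenGL 3.3 or ARB_texture_swizzle.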

Related

Saving a single color OpenGL texture to a file results in a 64x64 black box

This code fragment is meant to create a GL texture with a single color, then save the raw pixel data to disk. I then convert that to PNG using ffmpeg. I have tried multiple ways of generating the texture, and multiple ways of saving the texture data, but the result is always the same - a 1920x1080 image with a 64x64 black box in the corner. What I expected was a 1920x1080 image of a single color.
What am I doing wrong?
Conversion command:
ffmpeg -pix_fmt rgba -s 1920x1080 -i texture.raw -f image2 output.png
Code:
gpu::gles2::GLES2Interface* gl = GetContextProvider()->ContextGL();
GLuint texture;
gl->GenTextures(1, &texture);
gl->BindTexture(GL_TEXTURE_2D, texture);
int width = 1920;
int height = 1080;
std::vector<unsigned char> data(width * height * 4, 0);
for (size_t i = 2; i < data.size(); i += 4) {
  data[i] = 255;  // blue channel
}
gl->TexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data.data());
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
std::vector<unsigned char> buffer(width * height * 4);
gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
std::ofstream file("texture.raw", std::ios::binary);
file.write(reinterpret_cast<char*>(buffer.data()), buffer.size());
file.close();
Take a look at the description for glReadPixels from the main documentation here: https://registry.khronos.org/OpenGL-Refpages/gl4/html/glReadPixels.xhtml.
Essentially, glReadPixels reads pixels from the current framebuffer, not from the currently bound GL_TEXTURE_2D. I can't answer with 100% confidence without more code and context, but it looks like this code runs before anything has actually been rendered to the framebuffer; you're only setting things up. You most likely get a black box because the data read into the buffer isn't valid.
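If the goal really is to read back the texture you just uploaded, the usual GLES2 approach is to attach it to a framebuffer object and read from that. A rough sketch, assuming the GLES2Interface wrapper exposes the standard framebuffer entry points (an untested illustration, not your original code):
// Attach the texture to an FBO so glReadPixels has something to read from.
GLuint fbo;
gl->GenFramebuffers(1, &fbo);
gl->BindFramebuffer(GL_FRAMEBUFFER, fbo);
gl->FramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, texture, 0);
if (gl->CheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
  // Reads the texture contents instead of the (still empty) default framebuffer.
  gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
}
gl->BindFramebuffer(GL_FRAMEBUFFER, 0);
gl->DeleteFramebuffers(1, &fbo);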
Hope that helps.

How do I display a grayscale image with opengl texture

I am trying to create a texture to display. I have a w×h array in which each pixel is 1 byte. I have looked at Can I use a grayscale image with the OpenGL glTexImage2D function? but I am not sure how to implement it currently. It looks like GL_LUMINANCE is deprecated and I need to process the single channel independently. I am not sure how I should try this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
I tried changing GL_RGBA to other formats like GL_R (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml), but I still cannot get the image to display. Does anyone have any suggestions?
If you have a source texture with 1 color channel, then you can use the format GL_RED and the base internal format GL_RED:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
0, GL_RED, GL_UNSIGNED_BYTE, image_data);
Set the texture parameters GL_TEXTURE_SWIZZLE_G and GL_TEXTURE_SWIZZLE_B (see glTexParameteri) to read the green and blue color from the red color channel, too:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
Note that GL_UNPACK_ALIGNMENT possibly has to be set to 1 when the image is loaded into a texture object:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, ...);
By default the parameter is 4. This means that each line of the image is assumed to be aligned to a size which is a multiple of 4. If the image data is tightly packed then the alignment has to be changed.
If you use a shader program, then the same can be achieved by swizzling in the shader, e.g.:
vec3 color = texture(u_texture, uv).rrr;
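Putting it together, a consolidated sketch of the upload (assuming image_data is tightly packed, one byte per pixel):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
             0, GL_RED, GL_UNSIGNED_BYTE, image_data);
// Replicate the red channel into green and blue so the texture samples as gray.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);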

OpenGL FreeType: weird texture

After I have initialized the library and loaded the texture I get http://postimg.org/image/4tzkq4uhl.
But when I added this line to the texture code:
std::vector<unsigned char> buffer(w * h, 0);
I get http://postimg.org/image/kqycmumvt.
Why is this happening when I add that specific line, and why does it seem like the letter is multiplied? I have searched examples and tutorials about FreeType, and I saw that in some of them they change the buffer array, but I didn't really understand that, so if you can explain it to me I may handle this better.
Texture Load:
Texture::Texture(FT_GlyphSlot slot) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    int w = slot->bitmap.width;
    int h = slot->bitmap.rows;
    // When I remove this line, the black rectangle below the letter reappears.
    std::vector<unsigned char> buffer(w * h, 0);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
Fragment Shader:
#version 330
in vec2 uv;
in vec4 tColor;
uniform sampler2D tex;
out vec4 color;
void main () {
color = vec4(tColor.rgb, texture(tex, uv).a);
}
You're specifying GL_LUMINANCE_ALPHA for the format of the data you pass to glTexImage2D(). Based on the corresponding FreeType documentation I found here:
http://www.freetype.org/freetype2/docs/reference/ft2-basic_types.html#FT_Pixel_Mode
There is no FT_Pixel_Mode value specifying that the data in slot->bitmap.buffer is in fact luminance-alpha. GL_LUMINANCE_ALPHA is a format with 2 bytes per pixel, where the first byte is used for R, G, and B when the data is used to specify an RGBA image, and the second byte is used for A.
Based on the data you're showing, slot->bitmap.pixel_mode is most likely FT_PIXEL_MODE_GRAY, which means that the bitmap data is 1 byte per pixel. In this case, you need to use GL_ALPHA for the format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0,
GL_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
If the pixel_mode is something other than FT_PIXEL_MODE_GRAY, you'll have to adjust the format accordingly, or potentially create a copy of the data if it's a format that is not supported by glTexImage2D().
The reason you get garbage if you specify GL_LUMINANCE_ALPHA instead of GL_ALPHA is that it reads twice as much data as is contained in the data you pass in. The content of the data that is read beyond the allocated bitmap data is undefined, and may well change depending on what other variables you declare/allocate.
If you want to use texture formats that are still supported in the core profile instead of the deprecated GL_LUMINANCE_ALPHA or GL_ALPHA, you can use GL_R8 instead. Since this format has only one component, instead of the four in GL_RGBA, this will also use 75% less texture memory:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, slot->bitmap.width, slot->bitmap.rows, 0,
GL_RED, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
This will also require a slight change in the shader to read the r component instead of the a component:
color = vec4(tColor.rgb, texture(tex, uv).r);
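If you would rather keep the shader reading the a component, a texture swizzle should achieve the same thing when combined with the GL_R8 upload above (a sketch; requires GL 3.3+, which matches the #version 330 shader):
// Map the sampler's alpha component to the texture's red channel,
// so texture(tex, uv).a still returns the glyph coverage.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);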
Solved it. I added the following to my code and it works well.
GLubyte * data = new GLubyte[2 * w * h];
for (int y = 0; y < slot->bitmap.rows; y++)
{
    for (int x = 0; x < slot->bitmap.width; x++)
    {
        data[2 * (x + y * w)] = 255;
        data[2 * (x + y * w) + 1] = slot->bitmap.buffer[x + slot->bitmap.width * y];
    }
}
I don't know what happened with that particular line I added but now it works.
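For completeness, the expanded buffer presumably then replaces slot->bitmap.buffer in the upload; my reconstruction (not the poster's code) would be:
// Two bytes per pixel now: byte 0 = luminance (always 255), byte 1 = glyph coverage as alpha.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, data);
delete[] data; // the driver has copied the pixel data at this point
This matches the GL_LUMINANCE_ALPHA format in the original call: the data really is 2 bytes per pixel now, so glTexImage2D no longer reads past the end of the FreeType bitmap.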

DevIL image not rendering correctly

I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files, I do not see them correctly.
This is what the image is supposed to look like, and this is what it actually renders as: stretched. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing:
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0 + xi, 0 + xi);
    glTexCoord2i(0, 1); glVertex2i(0 + xi, height + xi);
    glTexCoord2i(1, 1); glVertex2i(width + xi, height + xi);
    glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + xi);
    glEnd();
}
This is how I am loading the image:
imagename = s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
    success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); /* Convert every colour component into
    an unsigned byte. If your image contains an alpha channel you can replace IL_RGB with IL_RGBA */
    if (!success)
    {
        printf("image conversion failed.");
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
                 ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     // texture wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // linear filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I should probably mention that some images did get rendered properly. I thought the difference was width != height, but that is not the case: images with width != height also load fine. For other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row
in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start
on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
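For example, a 1023-pixel-wide image converted to IL_RGB has rows of 1023 * 3 = 3069 bytes, which is not a multiple of 4; with the default alignment OpenGL assumes 3 padding bytes at the end of each row that are not actually there, so every row is read at a growing offset and the image comes out sheared. A sketch of the fix in the loading code from the question:
// The DevIL buffer is tightly packed, so rows are not necessarily 4-byte aligned.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP),
             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT), 0,
             ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());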
Always try to make the image width and height powers of two, because some GPUs only support power-of-two texture resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is for when your texture coordinates go beyond 1.0f; GL_CLAMP_TO_EDGE handles that too, but guarantees the image fills the polygon without unwanted lines on the edges (it keeps linear filtering from sampling across the border).
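For example, replacing the two wrap-mode calls in the loading code above:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);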
Remember to try the version of the code that uses floats (see the sample from the comments) :)
Here is a good explanation: http://open.gl/textures :)

Texture loading with DevIL, equivalent code to texture loading with Qt?

I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book, and there they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-two texture dimensions and rescales images in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
    ILuint imageID;
    ilGenImages(1, &imageID);
    ilBindImage(imageID);
Now you can load an image from a file:
    ilLoadImage(thefilename);
Check that the image actually provides data; if not, clean up:
    ILubyte * data = ilGetData();
    if (!data) {
        ilBindImage(0);
        ilDeleteImages(1, &imageID);
        return 0;
    }
Retrieve the important parameters:
    int const width  = ilGetInteger(IL_IMAGE_WIDTH);
    int const height = ilGetInteger(IL_IMAGE_HEIGHT);
    int const type   = ilGetInteger(IL_IMAGE_TYPE);   // matches OpenGL
    int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name:
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
Next we set the pixel store parameters (your original code missed that crucial step):
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // rows are tightly packed
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // pixels are tightly packed
Now we can upload the texture image:
    glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
Next, for convenience, we set the minification filter to GL_LINEAR so that we don't have to supply mipmap levels:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Finally, return the textureID:
    return textureID;
}
If you want to use mipmapping you can call glGenerateMipmap after the upload; use glTexParameter with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated, and GL_TEXTURE_MIN_LOD / GL_TEXTURE_MAX_LOD to clamp which levels get sampled.
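A short sketch of that, assuming the texture created by loadImageToTexture is still bound:
// Limit the pyramid to levels 0..4, then let the driver build them.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);
glGenerateMipmap(GL_TEXTURE_2D);
// Sample with trilinear filtering now that mipmaps exist.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);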