OpenGL: Use a single-channel texture as the alpha channel to display text - C++

What I'm trying to do is load a texture into hardware from a single-channel data array and use its alpha channel to draw text onto an object. I am using OpenGL 4.
If I do this with a 4-channel RGBA texture it works perfectly fine, but for whatever reason, when I try to load in a single channel only, I get a garbled image and I can't figure out why.
I create the texture by combining the bitmap data for a series of glyphs into a single texture with the following code:
int texture_height = max_height * new_font->num_glyphs;
int texture_width = max_width;
new_texture->datasize = texture_width * texture_height * 4;
unsigned char* full_texture = new unsigned char[new_texture->datasize];
// prefill texture as transparent
for (unsigned int j = 0; j < new_texture->datasize; j++)
    full_texture[j] = 0;

for (unsigned int i = 0; i < glyph_textures.size(); i++) {
    // set height offset for glyph
    new_font->glyphs[i].height_offset = max_height * i;
    for (unsigned int j = 0; j < new_font->glyphs[i].height; j++) {
        int full_disp = (new_font->glyphs[i].height_offset + j) * texture_width;
        int bit_disp = j * new_font->glyphs[i].width;
        for (unsigned int k = 0; k < new_font->glyphs[i].width; k++) {
            full_texture[full_disp + k] = glyph_textures[i][bit_disp + k];
        }
    }
}
Then I load the texture data calling:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));
My fragment shader executes the following code:
#version 330
uniform sampler2D texture;
in vec2 texcoord;
in vec4 pass_colour;
out vec4 out_colour;

void main()
{
    float temp = texture2D(texture, texcoord).r;
    out_colour = vec4(pass_colour[0], pass_colour[1], pass_colour[2], temp);
}
I get an image that I can tell is generated from the texture, but it is terribly distorted and I'm unsure why. By the way, I'm using GL_RED because GL_ALPHA was removed from OpenGL 4.
What really confuses me is why this works fine when I generate a 4-channel RGBA texture from the glyphs and then use its alpha channel.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));
This is technically legal but never a good idea.
First, you need to understand what the third parameter to glTexImage2D is. That's the actual image format of the texture. You are not creating a texture with one channel; you're creating a texture with four channels.
Next, you need to understand what the last three parameters do. These are the pixel transfer parameters; they describe what the pixel data you're giving to OpenGL looks like.
This command is saying, "create a 4 channel texture, then upload some data to just the red channel. This data is stored as an array of unsigned bytes." Uploading data to only some of the channels of a texture is technically legal, but almost never a good idea. If you're creating a single-channel texture, you should use a single-channel texture. And that means a proper image format.
Next, things get more confusing:
new_texture->datasize = texture_width * texture_height*4;
Your use of "*4" strongly suggests that you're creating four-channel pixel data. But you're only uploading one-channel data. The rest of your computations agree with this; you don't seem to ever fill in any data pass full_texture[texture_width * texture_height]. So you're probably allocating more memory than you need.
One last thing: always use sized internal formats. Never just use GL_RGBA; use GL_RGBA8 or GL_RGBA4 or whatever. Don't let the driver pick and hope it gives you a good one.
So, the correct upload would be this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, full_texture);
FYI: the reinterpret_cast is unnecessary; even in C++, pointers can implicitly be converted into void*.
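Putting those points together, here is a minimal sketch of a single-channel glyph-atlas upload (the texture name is a placeholder rather than the asker's actual code; the glPixelStorei call is an extra precaution because one-byte-per-pixel rows are often not 4-byte aligned):
// Minimal sketch: create and upload a one-channel (GL_R8) glyph atlas.
GLuint atlas_tex;
glGenTextures(1, &atlas_tex);
glBindTexture(GL_TEXTURE_2D, atlas_tex);
// Tightly packed single-channel rows may not match the default 4-byte unpack alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Sized single-channel internal format, single-channel pixel data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, texture_width, texture_height, 0,
             GL_RED, GL_UNSIGNED_BYTE, full_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
The fragment shader then reads the single channel through .r and uses it as alpha, exactly as in the question.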

I think you swapped the "internal format" and "format" parameters of glTexImage2D(). That is, you told it that you want RGBA in the texture object but only had RED in the file data, rather than vice versa.
Try to replace your call with the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, texture->x, texture->y, 0, GL_RGBA, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));

Related

Shader texture values not the same as written when creating the texture

I have created a texture and filled it with ones:
size_t size = width * height * 4;
float *pixels = new float[size];
for (size_t i = 0; i < size; ++i) {
    pixels[i] = 1.0f;
}
glTextureStorage2D(texture_id, 1, GL_RGBA16F, width, height);
glTextureSubImage2D(texture_id, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
I use linear filtering (GL_LINEAR) and clamp to border.
But when I draw the image:
color = texture(atlas, uv);
the last row looks like it has alpha values of less than 1. If in the shader I set the alpha to 1:
color.a = 1.0f;
it draws it correctly. What could be the reason for this?
The problem comes from the combination of GL_LINEAR and GL_CLAMP_TO_BORDER:
Clamp to border means that every texture coordinate outside of [0, 1] will return the border color. This color can be set with glTexParameterfv(..., GL_TEXTURE_BORDER_COLOR, ...) and is black by default.
Linear filtering takes into account texels adjacent to the sampling location (unless sampling happens exactly at texel centers¹), and will thus also read border-color texels, which are black here.
If you don't want this behavior, the simplest solution is to use GL_CLAMP_TO_EDGE instead, which repeats the last row/column of texels to infinity. The different wrapping modes are explained very well at open.gl.
¹ Sampling most probably does not happen at texel centers, as explained in this answer.
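For reference, a minimal sketch of the suggested change, using the same DSA-style calls and the texture_id from the question's snippet:
// Sketch: repeat edge texels instead of blending toward the (black) border color.
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Alternative: keep GL_CLAMP_TO_BORDER but make the border opaque white.
// const float border[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
// glTextureParameterfv(texture_id, GL_TEXTURE_BORDER_COLOR, border);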

rgba arrays to OpenGL texture

For my game's GUI, I have a custom Texture object that stores the RGBA data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlaid onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an OpenGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
    int[] colors = new int[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = r[x][y];
            colors[i+1] = g[x][y];
            colors[i+2] = b[x][y];
            colors[i+3] = a[x][y];
            i += 4;
        }
    }
    return colors;
}
Where r, g, b, and a are jagged int arrays with values from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that, I add a 50x50 red square to the upper left of the texture, bind the texture to the framebuffer shader, and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;

void main()
{
    outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an OpenGL texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw someone doing a similar thing using a float array, so I tried converting my 0-255 values to a 0-1 float array and passing it as the image data like so:
public float[] toFloatArray(){
    float[] colors = new float[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = ((r[x][y] * 1.0f) / 255);
            colors[i+1] = ((g[x][y] * 1.0f) / 255);
            colors[i+2] = ((b[x][y] * 1.0f) / 255);
            colors[i+3] = ((a[x][y] * 1.0f) / 255);
            i += 4;
        }
    }
    return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specify GL_UNSIGNED_INT as the type of the "host" data, OpenGL expects 32 bits for each color channel. Since OpenGL maps the output colors in the default framebuffer to the range [0.0f, 1.0f], it takes your input color values (which are in the range [0, 255]) and divides them by the maximum value of an unsigned int (about 4.2 billion) to get the final color displayed on screen. As an exercise, using your original code, set the "clear" color of the screen to white, and observe that a black rectangle is drawn on screen.
You have two options. The first is to convert the color values to the range specified by GL_UNSIGNED_INT, which means for each color value, multiply them by Math.pow((long)2, 24), and trust that the integer overflow of multiplying by that value will behave correctly (since Java doesn't have unsigned integer types).
The other, far safer option, is to store each 0-255 value in a byte[] object (do not use char. char is 1 byte in C/C++/OpenGL, but is 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
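The question's code is Java/LWJGL, but the GL call itself is the same in every binding. As a minimal C-style sketch (the buffer name and fill code are illustrative), the byte-per-channel upload the answer recommends looks like this; in LWJGL you would fill a ByteBuffer and pass GL_UNSIGNED_BYTE in the same way:
// Sketch: one unsigned byte per channel, values 0-255, uploaded as GL_UNSIGNED_BYTE
// so OpenGL normalizes each channel by 255 rather than by the unsigned-int maximum.
unsigned char* rgba = new unsigned char[width * height * 4]; // filled elsewhere
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);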

Loading opengl texture using Boost.GIL

I wrote a simple app that loads a model using OpenGL, Assimp, and Boost.GIL.
My model contains a PNG texture. When I load it using GIL and render it through OpenGL, I get a wrong result. Thanks to the power of CodeXL, I found that the texture loaded into OpenGL is completely different from the image itself.
I found a similar question and followed its steps, but I still get the same problem.
Here is my code:
// --------- image loading
std::experimental::filesystem::path path(pathstr);
gil::rgb8_image_t img;
if (path.extension() == ".jpg" || path.extension() == ".jpeg" || path.extension() == ".png")
{
    if (path.extension() == ".png")
        gil::png_read_and_convert_image(path.string(), img);
    else
        gil::jpeg_read_and_convert_image(path.string(), img);

    _width = static_cast<int>(img.width());
    _height = static_cast<int>(img.height());

    typedef decltype(img)::value_type pixel;
    auto srcView = gil::view(img);
    //auto view = gil::interleaved_view(
    //    img.width(), img.height(), &*gil::view(img).pixels(), img.width() * sizeof pixel);
    auto pixeldata = new pixel[_width * _height];
    auto dstView = gil::interleaved_view(
        img.width(), img.height(), pixeldata, img.width() * sizeof pixel);
    gil::copy_pixels(srcView, dstView);
}

// ---------- texture loading
{
    glBindTexture(GL_TEXTURE_2D, handle());
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 image.width(), image.height(),
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 reinterpret_cast<const void*>(image.data()));
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
When it runs, the CodeXL debugger shows that the texture loaded into OpenGL is skewed compared with the original image, and all the other textures of this model go wrong in the same way.
Technically this is a FAQ, asked already several times. Essentially you're running into an alignment issue. By default (you can change it) OpenGL expects image rows to be aligned on 4 byte boundaries. If your image data doesn't match this, you get this skewed result. Adding a call to glPixelStorei(GL_UNPACK_ALIGNMENT, 1); right before the call to glTexImage… will do the trick for you. Of course you should retrieve the actual alignment from the image metadata.
The image being "upside down" is caused by OpenGL putting the origin of textures into the lower left (if all transformation matrices are left at default or have positive determinant). That is unlike most image file formats (but not all) which have it in the upper left. Just flip the vertical texture coordinate and you're golden.

FreeType glyphs wrap when loaded into OpenGL

I'm trying to load TrueType fonts through FreeType and display them using OpenGL, and this is what I'm getting:
As you can see it's mostly fine, but if you look closer, each individual glyph has some small incongruities around the borders. These strange lines are actually pixels carried over from the other side. Look at the 'T' and 'h' in particular, where you can see small bars corresponding to the opposite side of the texture. This happens with different fonts as well. Here is the code responsible for copying the glyph bitmap buffer into OpenGL:
void load(FT_GlyphSlot glyphSlot, double loadedHeight){
    this->loadedHeight = loadedHeight;
    int width = glyphSlot->bitmap.width;
    int height = glyphSlot->bitmap.rows;
    sizeRatios = Vector2D(width / loadedHeight, height / loadedHeight);
    offsetRatios = Vector2D(glyphSlot->bitmap_left / loadedHeight, glyphSlot->metrics.horiBearingY / loadedHeight);
    advanceRatios = Vector2D((glyphSlot->advance.x >> 6) / loadedHeight, (glyphSlot->advance.y >> 6) / loadedHeight);
    std::cout << width << ", " << height << std::endl;

    GLubyte * textureData = new GLubyte[width * height * 4];
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            for(int i = 0; i < 4; i++){
                textureData[(x + y * width) * 4 + i] = glyphSlot->bitmap.buffer[x + width * y];
            }
        }
    }

    texture.bind();
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    texture.load(GL_TEXTURE_2D, 0, GL_RGBA, glyphSlot->bitmap.width, glyphSlot->bitmap.rows, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    // glGenerateMipmap(GL_TEXTURE_2D);
    delete [] textureData;
}
The size of the font face is set elsewhere and passed into this method along with the glyph slot that I want to load. The texture object is just a class that creates a texture handle and keeps track of it; load passes its parameters directly to glTexImage2D().
I've tried shifting the pixels by one using modulus rotation, and it worked vertically but not horizontally. I have also tried loading the texture by passing the buffer directly into load and changing the format to GL_RED as described here, but the problem doesn't go away, so I'm starting to wonder whether it might even be a flaw in FreeType.
I wonder if there is some basic element of texture loading that I do not understand.
If you need some additional source code to understand what is wrong please ask.
GL_TEXTURE_WRAP_S/GL_TEXTURE_WRAP_T default to GL_REPEAT.
Use GL_CLAMP_TO_EDGE instead.
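A minimal sketch of that fix, assuming the texture wrapper in the question binds an ordinary GL_TEXTURE_2D before these calls:
// Sketch: stop the sampler from wrapping around to the opposite edge of the
// glyph texture when linear filtering samples just outside [0, 1].
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);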

Uploading alternate rows of pixel data in OpenGL

I am uploading an interlaced image to an OpenGL texture using glTexImage2D, which of course uploads the whole image. What I need is to upload only alternate rows, so the first texture gets the odd rows and the second the even rows.
I don't want to create another copy of the pixel data on the CPU.
You can set GL_UNPACK_ROW_LENGTH to twice the actual row length. This will effectively skip every second row. If the size of your texture is width x height:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 2 * width);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, width);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
Instead of setting GL_UNPACK_SKIP_PIXELS to skip the first row, you can also increment the data pointer accordingly.
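As a sketch of that pointer alternative (assuming data is a const void* to tightly packed RGBA8 pixels), advancing past one real row replaces the GL_UNPACK_SKIP_PIXELS call for the second texture:
// Alternative sketch: skip the first field by offsetting the pointer by one
// real row (width pixels * 4 bytes) instead of using GL_UNPACK_SKIP_PIXELS.
const unsigned char* bytes = static_cast<const unsigned char*>(data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 2 * width);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bytes + width * 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);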
There is an ancient SGI extension (GL_SGIX_interlace) for transferring interlaced pixel data, but it is probably not supported on your implementation.
An alternative you might consider is memory mapping a Pixel Buffer Object. You can fill this buffer over two passes and then use it as the source of image data in a call to glTexImage2D (...). You essentially do the de-interlacing yourself, but since this is done by mapping a buffer object's memory you are not making an unnecessary copy of the image on the CPU.
Pseudo code showing how to do this:
GLuint deinterlace_pbo;
glGenBuffers (1, &deinterlace_pbo);

// `GL_PIXEL_UNPACK_BUFFER`, when non-zero, is the source of memory for `glTexImage2D`
glBindBuffer (GL_PIXEL_UNPACK_BUFFER, deinterlace_pbo);

// Reserve memory for the de-interlaced image
glBufferData (GL_PIXEL_UNPACK_BUFFER, sizeof (pixel) * interlaced_rows * width * 2,
              NULL, GL_STATIC_DRAW);

// Returns a pointer to the ***GL-managed memory*** where you will write the image
void* pixel_data = glMapBuffer (GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);

// Odd Rows First
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2+1)
    }
}

// Even Rows
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2)
    }
}

glUnmapBuffer (GL_PIXEL_UNPACK_BUFFER);

// This will read the memory in the object bound to `GL_PIXEL_UNPACK_BUFFER`
glTexImage2D (..., NULL);

glBindBuffer (GL_PIXEL_UNPACK_BUFFER, 0);