Texture formats in glTexImage3D - OpenGL

I am using glTexImage3D with GL_TEXTURE_3D and GL_TEXTURE_2D_ARRAY as targets.
I am creating 4 layers of colors and applying them to a sphere, so I expect the 4 layers to be applied to the sphere equally.
But with GL_TEXTURE_3D it repeats all the layers twice, whereas with GL_TEXTURE_2D_ARRAY it applies the layers only once, as expected.
int w = 4, h = 4, d = 4;
size_t size = w * h * d;
*format = GL_RGBA;
GLubyte *dataRGBA = new GLubyte[4 * size];
for (int i = 0; i < size / 4; i++)
{
    dataRGBA[4 * i]     = 200;
    dataRGBA[4 * i + 1] = 0;
    dataRGBA[4 * i + 2] = 0;
    dataRGBA[4 * i + 3] = 255;
}
for (int i = size / 4; i < size / 2; i++)
{
    dataRGBA[4 * i]     = 0;
    dataRGBA[4 * i + 1] = 255;
    dataRGBA[4 * i + 2] = 0;
    dataRGBA[4 * i + 3] = 255;
}
for (int i = size / 2; i < (3 * size) / 4; i++)
{
    dataRGBA[4 * i]     = 0;
    dataRGBA[4 * i + 1] = 0;
    dataRGBA[4 * i + 2] = 255;
    dataRGBA[4 * i + 3] = 255;
}
for (int i = (3 * size) / 4; i < size; i++)
{
    dataRGBA[4 * i]     = 255;
    dataRGBA[4 * i + 1] = 0;
    dataRGBA[4 * i + 2] = 255;
    dataRGBA[4 * i + 3] = 255;
}
glGenTextures(1,&id);
glBindTexture(*target11, id);
glTexParameteri(*target11, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// when this texture needs to be magnified to fit on a big polygon, use nearest-neighbour sampling of the texels to determine the color
glTexParameteri(*target11, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// we want the texture to repeat over the S axis, so if we specify coordinates out of range we still get textured.
glTexParameteri(*target11, GL_TEXTURE_WRAP_S, GL_REPEAT);
// same as above for T axis
glTexParameteri(*target11, GL_TEXTURE_WRAP_T, GL_REPEAT);
// same as above for R axis
glTexParameteri(*target11, GL_TEXTURE_WRAP_R, GL_REPEAT);
glTexImage3D(*target11, 0, *format, w, h, d, 0, GL_RGBA, GL_UNSIGNED_BYTE, dataRGBA);

I believe the issue is that a 2D texture array addresses its layers with an integer index, while a 3D texture normalizes the R coordinate to the 0-1 range, just like the S and T axes.
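A minimal sketch of that difference (my own illustration, not from the original post; layer and depth are hypothetical ints): to hit the center of a given layer you pass a normalized R coordinate to a 3D texture, but the raw layer index to a 2D array texture.
// GL_TEXTURE_3D: layer centers sit at (layer + 0.5) / depth, normalized to [0, 1].
float r3D    = (layer + 0.5f) / float(depth);
// GL_TEXTURE_2D_ARRAY: the third texture coordinate is the unnormalized layer index.
float rArray = float(layer);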

Related

OpenGL single channel visibility to multiple channels

When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width*height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now, where I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height; d++)
{
    for (int i = 0; i < 4; i++)
    {
        colormap[offset++] = bitmap[d];
    }
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And I get this (screenshot omitted), which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And get this (screenshot omitted): it has no transparency, only red, etc., which makes it hard to colorize later.
Instead of having to do what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel; multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Whenever anything reads from any of the four components of the texture, it will get the value defined by the red component of that texture.
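Putting the two pieces together, a minimal sketch of the whole single-channel path might look like this (assuming the same bitmap, width, and height as above, and an OpenGL 3.3 context so GL_TEXTURE_SWIZZLE_RGBA is available):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // rows of bitmap are tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, bitmap);   // one byte per pixel on the GPU
GLint swizzle[] = { GL_RED, GL_RED, GL_RED, GL_RED };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
// An unchanged RGBA shader now sees the glyph value in all four channels.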

OpenGL Texture corruption

I am rendering a simple pixel buffer in OpenGL. First I create a quad, then I create a texture. It works correctly if there are no changes to the buffer. When I change my buffer and upload the new buffer into the texture with glTexSubImage2D or glTexImage2D, the top section of my texture gets corrupted, as shown in the image.
I create my buffer like this.
int length = console->width * console->height * 3;
GLubyte buf[length];
for(int i = 0; i < length; i += 3) {
    buf[i] = 0;
    buf[i + 1] = 0;
    buf[i + 2] = 0;
}
console->buffer = buf;
I create the texture like this.
glGenTextures(1, &console->textureID);
glBindTexture(GL_TEXTURE_2D, console->textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, console->width, console->height, 0, GL_RGB, GL_UNSIGNED_BYTE, console->buffer);
tpUseShader(console); // -> calls glUseProgram(console->programID);
glUniform1i(glGetUniformLocation(console->programID, "texture"), 0);
I update the texture like this.
glBindTexture(GL_TEXTURE_2D, console->textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, console->width, console->height, GL_RGB, GL_UNSIGNED_BYTE, console->buffer);
For testing, I change my buffer like this in the render function:
if(console->buffer[6] == 255) {
    console->buffer[6] = 0;    // 6 is the second pixel's red value.
    console->buffer[10] = 255; // 10 is the third pixel's green value.
} else {
    console->buffer[6] = 255;
    console->buffer[10] = 0;
}
Then I call tpUseShader and render my quad.
How can I fix this problem?
I changed my console size to 10x10 and ran it again. This time I got the same results, but in the image you can see that the 3rd pixel from the bottom left is dark blue. When I print printf("3rd pixel: %d - %d - %d\n", console->buffer[12], console->buffer[13], console->buffer[14]); I get red: 0, green: 0, blue: 0. That means my buffer is normal.
I got the solution. As pleluron said in the comments of the question, I changed buf into console->buffer, and it worked! (The original buf was a local array on the stack, so console->buffer ended up pointing at memory that was no longer valid.) Now my buffer initialization code looks like this:
console->buffer = malloc(sizeof(GLubyte) * length);
for(int i = 0; i < length; i += 3) {
    console->buffer[i] = 0;
    console->buffer[i + 1] = 0;
    console->buffer[i + 2] = 0;
}

OpenGL FreeType: weird texture

After I have initialized the library and loaded the texture I get http://postimg.org/image/4tzkq4uhl.
But when I added this line to the texture code:
std::vector<unsigned char> buffer(w * h, 0);
I get http://postimg.org/image/kqycmumvt.
Why is this happening when I add that specific code, and why does it seem like the letter is multiplied? I have searched examples and tutorials about FreeType, and I saw that in some of them they change the buffer array, but I didn't really understand that, so if you can explain it to me, I may be able to handle this better.
Texture Load:
Texture::Texture(FT_GlyphSlot slot) {
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
int w = slot->bitmap.width;
int h = slot->bitmap.rows;
// When I remove this line, the black rectangle below the letter reappears.
std::vector<unsigned char> buffer(w * h, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
Fragment Shader:
#version 330
in vec2 uv;
in vec4 tColor;
uniform sampler2D tex;
out vec4 color;
void main () {
color = vec4(tColor.rgb, texture(tex, uv).a);
}
You're specifying GL_LUMINANCE_ALPHA for the format of the data you pass to glTexImage2D(). Based on the corresponding FreeType documentation I found here:
http://www.freetype.org/freetype2/docs/reference/ft2-basic_types.html#FT_Pixel_Mode
There is no FT_Pixel_Mode value specifying that the data in slot->bitmap.buffer is in fact luminance-alpha. GL_LUMINANCE_ALPHA is a format with 2 bytes per pixel, where the first byte is used for R, G, and B when the data is used to specify an RGBA image, and the second byte is used for A.
Based on the data you're showing, slot->bitmap.pixel_mode is most likely FT_PIXEL_MODE_GRAY, which means that the bitmap data is 1 byte per pixel. In this case, you need to use GL_ALPHA for the format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0,
GL_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
If the pixel_mode is something other than FT_PIXEL_MODE_GRAY, you'll have to adjust the format accordingly, or potentially create a copy of the data if it's a format that is not supported by glTexImage2D().
The reason you get garbage if you specify GL_LUMINANCE_ALPHA instead of GL_ALPHA is that it reads twice as much data as is contained in the data you pass in. The content of the data that is read beyond the allocated bitmap data is undefined, and may well change depending on what other variables you declare/allocate.
If you want to use texture formats that are still supported in the core profile instead of the deprecated GL_LUMINANCE_ALPHA or GL_ALPHA, you can use GL_R8 instead. Since this format has only one component, instead of the four in GL_RGBA, this will also use 75% less texture memory:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, slot->bitmap.width, slot->bitmap.rows, 0,
GL_RED, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
This will also require a slight change in the shader to read the r component instead of the a component:
color = vec4(tColor.rgb, texture(tex, uv).r);
Solved it. I added the following to my code and it works well.
GLubyte *data = new GLubyte[2 * w * h];
for (int y = 0; y < slot->bitmap.rows; y++)
{
    for (int x = 0; x < slot->bitmap.width; x++)
    {
        data[2 * (x + y * w)]     = 255;
        data[2 * (x + y * w) + 1] = slot->bitmap.buffer[x + slot->bitmap.width * y];
    }
}
I don't know exactly what happened with that particular line I added, but now it works. (Presumably it is because this buffer actually provides the two bytes per pixel that GL_LUMINANCE_ALPHA expects, so the upload no longer reads past the end of the FreeType bitmap, as described in the answer above.)

Can't load image into OpenGL texture if it is larger than 256x128

I am trying to load a JPEG image into a texture which I will use for volume rendering. However, the image fails to load (only a white rectangle is shown) whenever I try to load a JPEG larger than 256x128 pixels.
I am using OpenCV to convert the JPEG into raw values. This sounds like overkill, but I had OpenCV already. I am open to using another library.
My code may seem strange because I am using luminance values but also an alpha value; as a result, I am taking the luminance value and using it across all RGBA channels for now.
This code worked when I used raw luminance data. But now I am just trying to load a single JPEG image. (It works when my image is 256x128, but fails if it is bigger.)
My texture loading code:
unsigned char* chRGBABuffer = new unsigned char[IMAGEWIDTH * IMAGEHEIGHT * IMAGECOUNT * 4];
//Only create 1 3D texture now
glGenTextures(1, (GLuint*)&textureID3D);
// Set the properties of the texture.
glBindTexture(GL_TEXTURE_3D, textureID3D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Convert the data to RGBA data.
// Here we are simply putting the same value to R, G, B and A channels.
// This can be changed depending on the source data
// Usually for raw data, the alpha value will
// be constructed by a threshold value given by the user
for (int i = 0; i < IMAGECOUNT; ++i)
{
    cv::Mat image;
    image = cv::imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
    unsigned char * chBuffer = image.data;
    if (!image.data) // Check for invalid input
    {
        fprintf(stderr, "Could not open or find image\n");
        return -1;
    }
    for (int nIndx = 0; nIndx < IMAGEWIDTH * IMAGEHEIGHT; ++nIndx)
    {
        chRGBABuffer[nIndx * 4] = chBuffer[nIndx];
        chRGBABuffer[nIndx * 4 + 1] = chBuffer[nIndx];
        chRGBABuffer[nIndx * 4 + 2] = chBuffer[nIndx];
        chRGBABuffer[nIndx * 4 + 3] = chBuffer[nIndx];
    }
}
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, IMAGEWIDTH, IMAGEHEIGHT, IMAGECOUNT, 0,
GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)chRGBABuffer);
glBindTexture(GL_TEXTURE_3D, 0);
Probably you're simply running out of texture memory. Volumetric images are memory hogs, and only a few OpenGL implementations are capable of swapping in portions of a 3D texture on demand (at a significant performance penalty).
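One thing you can do before uploading is ask the implementation whether it can handle the texture at all. A rough sketch (standard OpenGL calls, reusing the question's IMAGEWIDTH, IMAGEHEIGHT, and IMAGECOUNT):
GLint maxSize = 0;
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxSize);   // per-dimension limit for 3D textures
// Proxy targets let you test an allocation without actually creating the texture:
glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA, IMAGEWIDTH, IMAGEHEIGHT, IMAGECOUNT,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
GLint testWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &testWidth);
if (testWidth == 0)
{
    // The implementation cannot create a 3D texture with this size/format.
}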

Texture Transparency in OpenGL

I am making a texture in my environment that excludes all white pixels. I read in a PPM file, and the fourth value is always set to 0 if the pixel is white. Everything seems to be in order; I have set up my view correctly and so forth. The texture image is visible with my current code, but the image as a whole is not fully opaque; it is highly see-through. Is this a problem with how I am setting up GL_BLEND? Why is the entire texture not opaque as it should be, excluding only the white pixels?
The first three values are read in as RGB values; the fourth value is not in the file, it is chosen depending on the sum of the previous three RGB values. This texture is not loaded every time I render; it is in a display list, so the load is only done once.
glPushMatrix();
FILE *inFile3;
char dump3[3];
int max3, k3 = 0;
inFile3 = fopen("tree.ppm", "r");
int x3;
int y3;
fscanf(inFile3, "%s", dump3);
fscanf(inFile3, "%d %d", &x3, &y3);
fscanf(inFile3, "%d", &max3);
int arraySize3 = y3*(4*x3);
int pixel3, rgb = 0;
GLubyte data3[arraySize3];
for (int i = 0; i < x3; i++) {
    for (int j = 0; j < y3; j++) {
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        data3[k3++] = ((rgb) > 760) ? 0 : 255;
        rgb = 0;
    }
}
fclose(inFile3);
glGenTextures(1,&texture3);
glBindTexture(GL_TEXTURE_2D,texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST_MIPMAP_NEAREST);
gluBuild2DMipmaps(GL_TEXTURE_2D,4,x3,y3,GL_RGBA,GL_UNSIGNED_BYTE,data3);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texture3 );
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glRotatef(180, 0, 0, 0);
glTranslatef(0, -19, 0);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex3f(30,0,10);
glTexCoord2d(0,1); glVertex3f(30,20,10);
glTexCoord2d(1,1); glVertex3f(30,20,-10);
glTexCoord2d(1,0); glVertex3f(30,0,-10);
glEnd();
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glPopMatrix();
Screenshots: (omitted)
If you move the viewpoint really close to the tree, does it become opaque? Or does it if you disable mipmapping?
edit
By moving the eye closer to a tree, the level 0 mipmap (your original image) is used. To select a mipmap, OpenGL computes which level provides the best match between the size of its texels and the size of a pixel.
To disable mipmapping generation, you must use glTexImage2D to upload your texture instead of gluBuild2DMipmaps.
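For example, with the variables from the question, the non-mipmapped upload would look something like this (a sketch reusing data3, x3, and y3 from the code above):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x3, y3, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data3);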
To disable usage of mipmaps, change the following line
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
to
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
You could also do as Jim Buck suggests and use
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
to force the highest mipmap level used to 0. Since the lowest level defaults to 0, this effectively disables mipmapping. By setting GL_TEXTURE_MAX_LEVEL and GL_TEXTURE_BASE_LEVEL to the same value, you will be able to see the content of that specific mipmap level.
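For example, to view only mipmap level 2 (a sketch illustrating the idea above, assuming the full mipmap chain has been generated):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 2);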
All this is to confirm if it's a problem with the mipmaps.
In addition to what #bernie pointed out about GLubyte arrays and including glBegin/glEnd, the problem was in:
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
GL_MODULATE needed to be replaced with GL_REPLACE. This fixed the issue. Thanks for your help, guys.
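For reference, GL_MODULATE multiplies the texture by the current color (including its alpha), so if you ever want to keep GL_MODULATE for tinting, resetting the current color to opaque white before drawing is another way to avoid the see-through look (a sketch, not part of the original fix):
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // opaque white, so modulation leaves the texture unchanged
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);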