What internal format combinations would work for the following code example, if my intention is to have the raw storage allocated as a non-compressed texture and the texture view interpret it as BC5 / RGTC?
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_3D, texId);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32UI, 4, 4, 16); /* 16 bytes per texel, the same size as one RGTC2/BC5 block */
glBindTexture(GL_TEXTURE_3D, 0);
assertNoError();
GLuint viewId;
glGenTextures(1, &viewId);
glTextureView(viewId, GL_TEXTURE_3D, texId, GL_COMPRESSED_RG_RGTC2, 0, 1, 0, 1);
assertNoError();
glDeleteTextures(1, &viewId);
glDeleteTextures(1, &texId);
assertNoError();
This example failed with INVALID_OPERATION and the GL debug output message says:
Internal formats neither compatible nor identical.
To narrow my question by exclusion:
glCompressed* with pixel unpack buffer is not an option.
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
The OpenGL spec says the following pair is compatible: GL_RGTC2_RG, GL_COMPRESSED_RG_RGTC2. However, GL_RGTC2_RG is not a GL define or a defined value in any header or anywhere else in the spec.
You cannot allocate storage with a non-compressed format and view it with a compressed format, or vice versa. You can copy between compressed and uncompressed formats via glCopyImageSubData, but you can't do the kind of "casting" that you're trying to do.
Furthermore:
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
You cannot use generic compressed image formats, but specific formats (like GL_COMPRESSED_RG_RGTC2) are still available. Just not for 3D textures (BPTC can work with 3D textures, but not RGTC).
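A minimal sketch of that glCopyImageSubData route (my own illustration, not from the answer). It assumes GL 4.3+ or ARB_copy_image, and that the copy dimensions are given in texels of the source image, so each GL_RGBA32UI texel (16 bytes) maps to one 128-bit RGTC2 block; since RGTC cannot be used with GL_TEXTURE_3D, the compressed side is a 2D array here:
GLuint rawTex, bc5Tex;

/* Uncompressed staging texture: one RGBA32UI texel per 4x4 block, 16 layers. */
glGenTextures(1, &rawTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, rawTex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA32UI, 1, 1, 16);
/* ...fill rawTex with the raw 16-byte block data via glTexSubImage3D... */

/* Compressed destination: RGTC2 is not allowed for GL_TEXTURE_3D, so use a 2D array. */
glGenTextures(1, &bc5Tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, bc5Tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_COMPRESSED_RG_RGTC2, 4, 4, 16);

/* Copy 1x1x16 source texels; each one covers a 4x4 block of the compressed destination. */
glCopyImageSubData(rawTex, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   bc5Tex, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   1, 1, 16);
The compatibility rule glCopyImageSubData applies here is the same block-size match described for Vulkan below: the uncompressed texel size must equal the compressed block size.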
Vulkan has a mechanism for creating a VkImage of a compressed format from which you can then create a VkImageView with an appropriate uncompressed format (the reverse isn't allowed, but that doesn't really matter all that much). To do this, the image has to be created with the VK_IMAGE_CREATE_BLOCK_TEXEL_VIEW_COMPATIBLE_BIT creation flag, and the view must use a 32-bit unsigned int format, with sufficient components for each pixel of the view to correspond to the block byte size for the format.
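A hedged sketch of that Vulkan path (illustrative values; device is assumed to be an existing VkDevice, and memory allocation, binding, and error handling are omitted). The block-texel-view-compatible flag also requires VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT, and the 16-byte VK_FORMAT_R32G32B32A32_UINT texel matches the 128-bit BC5 block:
/* Compressed storage with permission for uncompressed block views. */
VkImageCreateInfo imageInfo = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    .flags = VK_IMAGE_CREATE_BLOCK_TEXEL_VIEW_COMPATIBLE_BIT |
             VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT,
    .imageType = VK_IMAGE_TYPE_2D,
    .format = VK_FORMAT_BC5_UNORM_BLOCK,
    .extent = { 256, 256, 1 },
    .mipLevels = 1,
    .arrayLayers = 1,
    .samples = VK_SAMPLE_COUNT_1_BIT,
    .tiling = VK_IMAGE_TILING_OPTIMAL,
    .usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
};
VkImage image;
vkCreateImage(device, &imageInfo, NULL, &image);
/* ...allocate and bind device memory here before creating the view... */

/* Uncompressed view: one R32G32B32A32_UINT texel per 4x4 BC5 block. */
VkImageViewCreateInfo viewInfo = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
    .image = image,
    .viewType = VK_IMAGE_VIEW_TYPE_2D,
    .format = VK_FORMAT_R32G32B32A32_UINT,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
VkImageView view;
vkCreateImageView(device, &viewInfo, NULL, &view);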
So I have a texture that has the external format GL_RED, and the internal format GL_RGBA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0, layout, GL_UNSIGNED_BYTE, bitmap->data);
I would like to have the texture stored as (1,1,1,r) instead of (r,0,0,0).
I wouldn't like to recompute the entire bitmap as an RGBA one, and I don't want to create a new shader. Is it possible to tell OpenGL how to interpret the uploaded data?
You should avoid such divergences between internal format and the data you pass. If you want your texture to have a single color channel that is a normalized, unsigned byte, the correct way to spell that is with GL_R8 as the internal format. The texture will be stored as a single value of red, with the other channels getting filled in at texture access time with 0, 0, 1 in that order.
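For instance, a one-line sketch reusing the variables from the question's upload call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,                 /* store exactly one normalized 8-bit channel */
             bitmap->width, bitmap->height, 0,
             GL_RED, GL_UNSIGNED_BYTE, bitmap->data);  /* client data: single-channel bytes */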
You can modify how texture data is accessed with the texture swizzle setting. This is a per-texture setting. If you want to receive the data in the shader as (1, 1, 1, r), you can do that with this swizzle setting:
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Note that this doesn't change how the data is "stored"; the texture will always be a single-channel, 8-bit unsigned normalized texture. It affects how the shader accesses the texture's data.
Note that you could do this within the shader itself, but really, it's easier to employ a swizzle mask.
Just use GL_RED for the internal format.
When you sample the texture in the shader, fill in the rest of the components (GBA, not R) with the values you wish.
I'm a bit confused about the internal format, format and type. So what about the depth attachment point?
If I'm using a RenderBuffer, I think this is the valid code if I don't want to use stencil:
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRB);
However if I want to be able to read the depth values, I have to attach a texture to the depth attachment point. So I have to call a glTexImage2D function with parameters "internal format", "format" and "type".
In this case which internal format, format and type should I choose? Can I use the following combinations for a depth attachment? (in the order of: internal format, format and type)
GL_R32F, GL_RED, GL_FLOAT
GL_DEPTH_COMPONENT32, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT
GL_DEPTH_COMPONENT32F, GL_DEPTH_COMPONENT, GL_FLOAT
Is the GL_UNSIGNED_INT type valid for the 2nd case? What does that really mean? Will it allocate 4 bytes per fragment? In some tutorials they are using GL_UNSIGNED_BYTE for the type parameter. Which is the correct one?
Thanks
Edit:
Clarified my question about which parameters I'm interested in.
Depth values are not color values. As such, if you want to store depth values in a texture, the texture must use an internal format that contains depth information.
The pixel transfer format/type parameters, even if you're not actually passing data, must still be reasonable with respect to the internal format. Since the internal format contains depth information, your pixel transfer format must specify depth information: GL_DEPTH_COMPONENT.
As for the pixel transfer type, you should read 32F back as GL_FLOAT.
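Putting that together, a rough sketch of combination 3 as a readable depth attachment (depthTex and depthData are illustrative names; the framebuffer is assumed to be bound already):
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
/* 32-bit float depth storage; format/type describe the client data, which is NULL here. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

/* Later, read the depth values back with matching format/type. */
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);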
I have created a sample application using GLEW and GLUT which reads a dds file and displays it. I manually read the dds file (an NPOT (886 x 317) file in R8G8B8) and create the data pointer (unsigned char*).
Then I prepared the texture using
void prepareTexture(int w, int h, unsigned char* data) {
    /* Create and load texture to OpenGL */
    glGenTextures(1, &textureID);            /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
In the above figure, the first image shows the original dds file and the second one is the rendering result of my application, which is obviously wrong. If I resize the image to 1024 x 512, both images look the same.
From the OpenGL Specification
I.3 Non-Power-Of-Two Textures
The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures was promoted from the ARB_texture_non_power_of_two extension.
From this, what I understand is that from OpenGL 2.0 onwards we can use NPOT textures and OpenGL will handle them.
I tried using the DevIL image library to load the dds file but ended up with the same result. If I convert the image to RGBA and change the internal format and format of glTexImage2D to GL_RGBA, I get the correct result even if the dds file is NPOT.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             data);
I tried the application on PCs with an NVIDIA card and a Radeon card, and both of them give the same result.
My sample source code can be downloaded from the link
Can anybody tell me what is wrong with my application? Or does OpenGL not allow NPOT textures if the image is in R8G8B8?
This looks like an alignment issue. Add this before the glTexImage2D() call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This value specifies the row alignment of your data in bytes. The default value is 4.
With your texture width of 886 and 3 bytes per pixel for GL_RGB, each row is 886 * 3 = 2658 bytes, which is not a multiple of 4.
With the UNPACK_ALIGNMENT value at the default, the row size would be rounded up to the next multiple of 4, which is 2660. So 2660 bytes will be read for each row, which explains the increasing shift from row to row: the first row would be correct, the second row 2 bytes off, the third row 4 bytes off, the fourth row 6 bytes off, etc.
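Dropped into the question's prepareTexture, the fix is just one extra line before the upload:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows of 886 * 3 = 2658 bytes are not 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);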
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up-front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of the way to load the ARGB data directly into the texture, but there is a better workaround than just doing the swizzle on CPU. You can do it very effectively on GPU instead:
1. Load the ARGB data into a temporary RGBA texture.
2. Draw a full-screen quad with this texture, rendering into the target texture with a simple pixel shader.
3. Continue loading other resources; there is no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
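The C side of that approach might look roughly like this (a sketch with illustrative names: dstTex, srcTex, swizzleProgram, width, height and drawFullScreenQuad are placeholders; shader compilation and quad geometry are omitted):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Render into the final RGBA texture. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, dstTex, 0);

glViewport(0, 0, width, height);
glUseProgram(swizzleProgram);                 /* program built from the shader above */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, srcTex);  /* temporary texture holding the ARGB bytes */
glUniform1i(glGetUniformLocation(swizzleProgram, "unit_in"), 0);
drawFullScreenQuad();                         /* two triangles covering the viewport */

glBindFramebuffer(GL_FRAMEBUFFER, 0);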
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
    /* .gbar reorders the ARGB-as-RGBA sample back into RGBA */
    gl_FragColor = texture2D(image, gl_TexCoord[0].xy).gbar;
}
If you don't know about shaders, read this tut here: http://www.lighthouse3d.com/opengl/glsl/
This question is old but in case anyone else is looking for this I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel out. But like the asker, I hit this while loading a JPEG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's 'safe'.
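In code, the trick reads roughly like this (a sketch; the function name is made up, and as noted it reads one byte past the end of the buffer for the last pixel):
static void uploadARGBAsRGBA(const unsigned char *argbData, int width, int height)
{
    /* argbData holds A R G B | A R G B | ...; starting one byte in, each
       4-byte group reads as R G B plus the next pixel's A. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, argbData + 1);
}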
/* The name of a texture whose data is in ARGB format. */
GLuint argb_texture;

/* An array of tokens to set the ARGB swizzle in one function call. */
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

/* Bind the ARGB texture. */
glBindTexture(GL_TEXTURE_2D, argb_texture);

/* Set all four swizzle parameters in one call to glTexParameteriv. */
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure whether argb_swizzle is in the right order. Please correct me if it is not. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.
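To spell out how the entries are determined (my own gloss, not from the book): when ARGB bytes are uploaded as if they were RGBA, the texture's stored R, G, B, A channels actually hold A, R, G, B, and each swizzle entry names the stored channel that a given output channel should read from:
/* Memory bytes per pixel:       A   R   G   B
   Stored as GL_RGBA channels:   R   G   B   A   (stored R holds A, G holds R, B holds G, A holds B)
   Output R <- stored G  =>  GL_GREEN
   Output G <- stored B  =>  GL_BLUE
   Output B <- stored A  =>  GL_ALPHA
   Output A <- stored R  =>  GL_RED */
static const GLenum argb_swizzle[] = { GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED };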