Let's say I have a 32bpp pixel array, but I am using only the blue channel/component of each pixel. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0, x, x, x) in the texture.
I am using OpenGL 1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced with the single-component internal format GL_RED (sized form GL_R8). In a shader you can use a swizzle like
gl_FragColor.rgb = texture(tex, uv).rrr;
But there's also the option to set what you might call a "static" swizzle in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);
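Putting the GL 3 path together, here is a minimal sketch. It assumes the blue bytes have already been extracted into a tightly packed width × height buffer (the name blue_channel is hypothetical), and it needs GL 3.3 or ARB_texture_swizzle for the swizzle parameters:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows of 1-byte texels may not be 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, blue_channel);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
/* Replicate the single red channel into R, G and B at sampling time. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);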
So I have a texture that has the external format GL_RED, and the internal format GL_RGBA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0, layout, GL_UNSIGNED_BYTE, bitmap->data);
I would like to have the texture stored as (1,1,1,r) instead of (r,0,0,0).
I would rather not recompute the entire bitmap as RGBA, and I don't want to create a new shader. Is it possible to tell OpenGL how to interpret the uploaded data?
You should avoid such divergences between internal format and the data you pass. If you want your texture to have a single color channel that is a normalized, unsigned byte, the correct way to spell that is with GL_R8 as the internal format. The texture will be stored as a single value of red, with the other channels getting filled in at texture access time with 0, 0, 1 in that order.
You can modify how texture data is accessed with the texture swizzle setting. This is a per-texture setting. If you want to receive the data in the shader as (1, 1, 1, r), you can do that with this swizzle setting:
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Note that this doesn't change how the data is "stored"; the texture will always be a single-channel, 8-bit unsigned normalized texture. It only affects how the shader accesses the texture's data.
Note that you could do this within the shader itself, but really, it's easier to employ a swizzle mask.
Just use GL_RED for the internal format.
When you sample the texture in the shader, fill in the remaining components (G, B and A) with whatever values you wish.
Is it possible to use glBlitFramebuffer to copy the alpha component from a read framebuffer with RGBA color attachment to a red component of draw framebuffer with R8 color attachment? If not, how would you do this?
Apparently the swizzle mask isn't used by glBlitFramebuffer.
The only way to do this (without pulling the memory from the GPU back to the CPU) is with some form of rendering operation. A blit can't do it.
You can use a compute shader, if that's available. Just bind the source and destination images via Image Load Store and read/write to them based on the compute shader's invocation index.
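A minimal sketch of such a compute shader, written here as a C string literal. The local size, binding points and names are my own assumptions, not anything mandated by the API:
static const char *copy_alpha_cs_src =
    "#version 430\n"
    "layout(local_size_x = 16, local_size_y = 16) in;\n"
    "layout(rgba8, binding = 0) readonly uniform image2D src;\n"
    "layout(r8, binding = 1) writeonly uniform image2D dst;\n"
    "void main() {\n"
    "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n"
    "    if (p.x >= imageSize(dst).x || p.y >= imageSize(dst).y) return;\n"
    "    /* read alpha from the RGBA image, write it as red */\n"
    "    imageStore(dst, p, vec4(imageLoad(src, p).a, 0.0, 0.0, 1.0));\n"
    "}\n";
Bind the two textures with glBindImageTexture to the units above and launch it with glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1).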
However, it's not clear exactly what you hope to gain by doing this copy operation. If you only plan to read from the GL_R8 texture as a bound texture/image, then all you need to do is create a view texture of the RGBA image. The reason to create a view of it is to be able to give the view a different swizzle mask. Simply swizzle the alpha into the red, and set green/blue/alpha to 0,0,1.
Obviously it won't be as efficient to access as a true GL_R8 texture. But you won't have to execute a potentially expensive copy operation either.
Using GL 4.5 Direct State Access calls:
GLuint alpha_tex;
glGenTextures(1, &alpha_tex);  /* glTextureView needs a fresh name from glGenTextures that has never been bound */
glTextureView(alpha_tex, GL_TEXTURE_2D, rgba_tex, GL_RGBA8, 0, num_mipmaps(rgba_tex), 0, 1);
GLint swizzleMask[] = {GL_ALPHA, GL_ZERO, GL_ZERO, GL_ONE};
glTextureParameteriv(alpha_tex, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Is it possible to draw an RGB texture as grayscale without using fragment shaders, using only the fixed-function OpenGL pipeline?
Otherwise I'd have to create two versions of the texture, one in color and one in black and white.
I don't know how to do this with an RGB texture and the fixed function pipeline.
If you create the texture from RGB source data but specify the internal format as GL_LUMINANCE, OpenGL will collapse the color data to a single channel for you (note that the spec defines this conversion as simply taking the red component, not a weighted grey average). Use the standard white material and the GL_MODULATE texture environment mode.
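A minimal sketch of that upload, assuming rgb_pixels holds tightly packed 8-bit RGB data and gray_tex is an already generated texture name:
glBindTexture(GL_TEXTURE_2D, gray_tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgb_pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor3f(1.0f, 1.0f, 1.0f);  /* white, so MODULATE leaves the texture values untouched */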
Hope this helps.
No. The texture environment combiners are not capable of performing a dot product without the scale/bias operation: GL_DOT3_RGB always pretends that [0, 1] values encode [-1, 1] values. Since you can't turn that expansion off, you can't compute a proper dot product of the color channels.
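To make that concrete, this is the combiner setup one might try (a sketch; luma is a hypothetical constant color meant to hold the grey weights):
static const GLfloat luma[4] = { 0.299f, 0.587f, 0.114f, 1.0f };
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_CONSTANT);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, luma);
/* GL_DOT3_RGB computes 4 * dot(texel - 0.5, luma - 0.5); the -0.5 bias on both
 * arguments cannot be disabled, so this never yields the plain weighted sum
 * dot(texel, luma) that a correct grayscale conversion needs. */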
How to render a texture with alpha?
I have a texture, and need to render it with different alpha values at different locations. Any way to do so? (My texture is GL_RGBA)
If it's not possible to change the alpha value on the fly, do I have to create different textures for different alpha levels?
First, make sure that your texture has an alpha channel. You mention you are loading an RGBA format, but it's always good to check the original file in an image editing program. Then make sure your texture is ready for rendering in OpenGL. A common mistake is forgetting to set the texture's filtering mode through glTexParameter*; the minification filter defaults to a setting that requires mipmaps, so I find it easiest to start with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Secondly, you will need to set up OpenGL for blending. This involves a glEnable call with GL_BLEND and a glBlendFunc call. Most of the time you will want glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); most other combinations of tokens will give you effects you are not after (see the glBlendFunc spec page for more info).
Finally, ensure you are sampling your texture at different points. If you are using immediate mode (drawing with glVertex*), you will need to either use glTexGen* or specify texture coordinates with glTexCoord* before each glVertex* call. If you are using array data to draw your scene, make sure you have enabled the texture coordinate array with glEnableClientState(GL_TEXTURE_COORD_ARRAY) and set it with glTexCoordPointer.
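Putting that together for the immediate-mode case, a rough sketch (tex and the vertex positions are placeholders):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();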
Your texture is GL_RGBA so it has a different alpha value for each texel.
If you want to change the alpha value used for render, I can think of the following methods:
Change the texture alpha values themselves (it sounds like you would rather not do that).
Use glColor4f to change the alpha value of the vertices; it multiplies the texture values. You may need to use glEnable(GL_COLOR_MATERIAL) and/or glColorMaterial() (see the sketch after this list).
Use a vertex shader to change the vertex alpha values. It will multiply the texture values.
Use a fragment shader to change the sampled texture values on the fly.
Use two texture stages and multiply them. The second one will have the modified alpha values (see glActiveTexture() and friends).
Use a fragment shader and two (or more) texture stages. This is the coolest!
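A minimal sketch of option 2, assuming blending has already been enabled as described in the previous answer:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  /* the default: vertex color multiplies the texel */
glColor4f(1.0f, 1.0f, 1.0f, 0.25f);  /* draw the following geometry at 25% opacity; RGB stays untouched */
/* ...then issue the textured geometry as usual... */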
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests passing a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_RGBA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very effectively on the GPU instead:
Load the ARGB data into the temporary RGBA texture.
Draw a full-screen quad with this texture while rendering into the target texture, using a simple pixel shader (a framebuffer setup sketch follows the example shader below).
Continue to load other resources, no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
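A rough sketch of that render-to-texture step; the names fbo, target_tex, temp_tex and swizzle_program are assumptions, and error checking and the actual quad drawing are omitted:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target_tex, 0);
glViewport(0, 0, width, height);
glUseProgram(swizzle_program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, temp_tex);  /* temporary texture holding the raw ARGB bytes */
/* ...draw a full-screen quad here... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);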
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
    /* sample with the interpolated texture coordinate and reorder ARGB -> RGBA */
    gl_FragColor = texture2D(image, gl_TexCoord[0].st).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that in each 32-bit pixel, A is the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will read its A value from one byte beyond the end of the image, and the A values are all shifted by one pixel. But like the asker, I hit this while loading a JPEG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's 'safe'.
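In code the trick is just pointer arithmetic on the upload call, roughly (width, height and bitmapData are whatever you already have from the image rep):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bitmapData + 1);  /* skip the leading A byte */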
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;
// An array of tokens to set the ARGB swizzle in one function call.
static const GLint argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};
// Bind the ARGB texture.
glBindTexture(GL_TEXTURE_2D, argb_texture);
// Set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure if argb_swizzle is in the right order. Please correct me if this is not right. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.